The ESEA was created to improve the academic achievement of disadvantaged children. The Improving America’s Schools Act of 1994, which reauthorized ESEA, required states to develop state academic content standards, which specify what all students are expected to know and be able to do, and academic achievement standards, which are explicit definitions of what students must know and be able to do to demonstrate proficiency. In addition, the 1994 reauthorization required assessments aligned to those standards. The most recent reauthorization of the ESEA, the No Child Left Behind Act of 2001, built on the 1994 requirements by, among other things, increasing the number of grades and subject areas in which states were required to assess students. NCLBA also required states to establish goals for the percentage of students attaining proficiency on ESEA assessments that are used to hold schools and districts accountable for the academic performance of students. Schools and districts failing to meet state proficiency goals for 2 or more years must take actions, prescribed by NCLBA, in order to improve student achievement. Every state, district, and school receiving funds under Title I, Part A of ESEA—the federal formula grant program dedicated to improving the academic achievement of the disadvantaged—is required to implement the changes described in NCLBA. ESEA assessments may contain one or more of various item types, including multiple choice, open/constructed response, checklists, rating scales, and work samples or portfolios. GAO’s prior work has found that item type is a major factor influencing the overall cost of state assessments and that multiple choice items are less expensive to score than open/constructed response items. Figure 1 describes several item types states use to assess student knowledge. NCLBA authorized additional funding to states for these assessments under the Grants for State Assessments program. 
Each year, each state has received a $3 million base amount regardless of its size, plus an additional amount based on its share of the nation’s school-age population. States must first use the funds to pay the cost of developing the additional state standards and assessments. If a state has already developed the required standards and assessments, NCLBA allows these funds to be used to administer assessments or for other activities, such as developing challenging state academic standards in subject areas other than those required by NCLBA and ensuring that state assessments remain valid and reliable. In years that the grants have been awarded, the Grants for Enhanced Assessment Instruments program (Enhanced Assessment grants) has provided between $4 million and $17 million to several states. Applicants for Enhanced Assessment grants receive preference if they plan to fund assessments for students with disabilities or for Limited English Proficiency (LEP) students, or if they are part of a collaborative effort among states. States may also use other federal funds for assessment-related activities, such as funds for students with disabilities, and funds provided under the American Recovery and Reinvestment Act of 2009 (ARRA). ARRA provides about $100 billion for education through a number of different programs, including the State Fiscal Stabilization Fund (SFSF). In order to receive SFSF funds, states must provide certain assurances, including that the state is committed to improving the quality of state academic standards and assessments. In addition, Education recently announced plans to make $4.35 billion in incentive grants available to states through SFSF on a competitive basis. These grants—referred to by Education as the Race to the Top program—can be used by states for, among other things, improving the quality of assessments. Like other students, those with disabilities must be included in statewide ESEA assessments. 
This is accomplished in different ways, depending on the effects of a student’s disability. Most students with disabilities participate in the regular statewide assessment either without accommodations or with appropriate accommodations, such as having unlimited time to complete the assessments, using large print or Braille editions of the assessments, or being provided individualized or small group administration of the assessments. States are permitted to use alternate academic achievement standards to evaluate the performance of students with the most significant cognitive disabilities. Alternate achievement standards must be linked to the state’s grade-level academic content standards but may include prerequisite skills within the continuum of skills culminating in grade-level proficiency. For these students, a state must offer alternate assessments that measure students’ performance. For example, the alternate assessment might assess students’ knowledge of fractions by splitting groups of objects into two, three, or more equal parts. While alternate assessments can be administered to all eligible children, the number of proficient and advanced scores from alternate assessments based on alternate achievement standards included in Adequate Yearly Progress (AYP) decisions generally is limited to 1 percent of the total tested population at the state and district levels. In addition, states may develop modified academic achievement standards—achievement standards that define proficiency at a lower level than the achievement standards used for the general assessment population, but are still aligned with grade-level content standards—and use alternate assessments based on those standards for eligible students whose disabilities preclude them from achieving grade-level proficiency within the same period of time as other students. 
States may include scores from such assessments in making AYP decisions, but those scores generally are capped at 2 percent of the total tested population. States are also required to include LEP students in their ESEA assessments. To assess these students, states have the option of developing assessments in students’ native languages. These assessments are designed to cover the content in state academic content standards at the same level of difficulty and complexity as the general assessments. In the absence of native language assessments, states are required to provide testing accommodations for LEP students, such as providing additional time to complete the test, allowing the use of a dictionary, administering assessments in small groups, or providing simplified instructions. By law, Education is responsible for determining whether or not states’ assessments comply with statutory requirements. The standards and assessments peer review process used by Education to determine state compliance began under the 1994 reauthorization of ESEA and is an ongoing process that states go through whenever they develop new assessments. In the first step of the peer review process, a group of at least three experts—peer reviewers—examines evidence submitted by the state to demonstrate compliance with NCLBA requirements, identifies areas for which additional state evidence is needed, and summarizes their comments. The reviewers are state assessment directors, researchers, and others selected for their expertise in assessments. After the peer reviewers complete their review, an Education official assigned to the state reviews the peer reviewers’ comments and the state’s evidence and, using the same guidelines as the peer reviewers, makes a recommendation on whether the state meets, partially meets, or does not meet each assessment system critical element and on whether the state’s assessment system should be approved. 
A group of Education officials from the relevant Education offices—including a representative from the Office of the Assistant Secretary of Elementary and Secondary Education—meets as a panel to discuss the findings. The panel makes a recommendation about whether to approve the state, and the Assistant Secretary makes the final approval decision. Afterwards, a letter is sent to the state notifying it of the decision; if the state was not approved, Education’s letter identifies why. States also receive a copy of the peer reviewers’ written comments as a technical assistance tool to support improvement. Education has the authority to withhold federal funds provided for state administration until it determines that the state has fulfilled ESEA assessment requirements and has taken this step with several states since NCLBA was enacted. Education also provides states with technical assistance in meeting the academic assessment requirements. ESEA assessments must be valid and reliable for the purposes for which they are intended and aligned to challenging state academic standards. Education has interpreted these requirements in its peer review guidance to mean that states must show evidence of technical quality—including validity and reliability—and alignment with academic standards. According to Education’s peer review guidance, the main consideration in determining validity is whether states have evidence that their assessment results can be interpreted in a manner consistent with their intended purposes. See appendix III for a complete description of the evidence used by Education to determine validity. A reliable assessment, according to the peer review guidance, minimizes the many sources of unwanted variation in assessment results. 
To show evidence of consistency of assessment results, states are required to (1) make a reasonable effort to determine the types of error that may distort interpretations of the findings, (2) estimate the likely magnitude of these distortions, and (3) make every possible effort to alert the users to this lack of certainty. As part of this requirement, states are required to demonstrate that assessment security guidelines are clearly specified and followed. See appendix IV for a full description of the reliability requirements. Alignment, according to Education’s peer review guidance, means that states’ assessment systems adequately measure the knowledge and skills specified in state academic content standards. If a state’s assessments do not adequately measure the knowledge and skills specified in its content standards or if they measure something other than what these standards specify, it will be difficult to determine whether students have achieved the intended knowledge and skills. See appendix V for details about the characteristics states need to consider to ensure that their standards and assessments are aligned. In its guidance and peer review process, Education requires that—as one component of demonstrating alignment between state assessments and academic standards—states must demonstrate that their assessments are as cognitively challenging as their standards. To demonstrate this, states have contracted with organizations to assess the alignment of their ESEA assessments with the states’ standards. These organizations have developed similar models of measuring the cognitive challenge of assessment items. For example, the Webb model categorizes items into four levels, or depths of knowledge, ranging in complexity from level 1 (recall), which is the least difficult for students to answer, to level 4 (extended thinking), which is the most difficult for students to answer. 
Table 1 provides an illustration, using the Webb model, of how depth of knowledge levels may be measured. State ESEA assessment expenditures have increased in nearly every state since the enactment of NCLBA in 2002, and the majority of these states reported that adding assessments was a major reason for the increased expenditures. Forty-eight of 49 states that responded to our survey said their states’ overall annual expenditures for ESEA assessments have increased, and over half of these 48 states indicated that adding assessments to their state assessment systems was a major reason for increased expenditures. In other cases, even states that were testing students in reading/language arts and mathematics in all of the grades that were required when NCLBA was enacted reported that assessment expenditures increased due to additional assessments. For example, officials in Texas—which was assessing general population students in all of the required grades at the time NCLBA was enacted—told us that they created additional assessments for students with disabilities. In addition to the cost of adding new assessments, states reported that increased vendor costs have also contributed to the increased cost of assessments. On our survey, increasing vendor costs was the second most frequent reason that states cited for increased ESEA assessment costs. One vendor official told us that shortly after the 2002 enactment of NCLBA, states benefited from increased competition because many new vendors entered the market and wanted to gain market share, which drove down prices. In addition, vendors were still learning about the level of effort and costs required to complete this type of work. Consequently, as the ESEA assessment market has stabilized and vendors have gained experience pricing assessments, the cost of ESEA assessment contracts has increased to reflect the true cost of vendor assessment work. 
One assessment vendor that works with over half of the states on ESEA assessments told us that vendor costs have also been increasing as states have been moving toward more sophisticated and costly procedures and reporting. Nearly all states reported higher expenditures for assessment vendors than for state assessment staff. According to our survey responses, 44 out of the 46 states that responded said that of the total cost of ESEA assessments, much more was paid to vendors than to state employees. For example, one state reported it paid approximately $83 million to vendors and approximately $1 million to state employees in the 2007-08 school year. The 20 states that provided information for the costs of both vendors and state employees in 2007-08 reported spending more than $350 million for vendors to develop, administer, score, and report the results of ESEA assessments—more than 10 times the amount they spent on state employees. State expenditures for ESEA assessment vendors, which were far larger than expenditures for state staff, varied. Spending for vendors on ESEA assessments in the 40 states that reported spending figures on our survey ranged from $500,000 to $83 million, and in total all 40 states spent more than $640 million for vendors to develop, administer, score, and report results of the ESEA assessments in 2007-08. The average cost in these 40 states was about $16 million. See figure 2 for the distribution of state expenditures for vendors in 2007-08. Over half of the states reported that the majority of their funding for ESEA assessments—including funding for expenses other than vendors—came from their state governments. Of the 44 states that responded to the survey question, 26 reported that the majority of their state’s total funding for ESEA assessments came from state government funds for 2007-08, and 18 reported that less than half came from state funds. 
For example, officials from one state that we visited, Maryland, reported that 84 percent of their total funding for ESEA assessments came from state government funds and that 16 percent of the state’s funding for ESEA assessments came from the federal Grants for State Assessments program in 2007-08. In addition to state funds, all states reported using Education’s Grants for State Assessments for ESEA assessments, and 17 of 45 states responding to the survey question reported using other federal funds for assessments. One state reported that all of its funding for ESEA assessments came from the Grants for State Assessments program. The other federal funds used by states for assessments included Enhanced Assessment grants. More than half of the states reported that assessment development was more expensive than any other component of the student assessment process, such as administering or scoring assessments. Twenty-three of 43 states that responded to the question in our survey told us that test and item development and revision was the largest assessment cost for 2007-08. For example, Texas officials said that the cost of developing tests is higher than the costs associated with any other component of the assessment process. After test and item development costs, scoring was most frequently cited as the most costly activity, with 12 states reporting it as their largest assessment cost. Similarly, states reported that test and item development was the largest assessment cost for alternate assessments, followed by scoring. See figure 3 for more information. The cost of developing assessments was affected by whether states release assessment items to the public. According to state and vendor officials, development costs are related to the percentage of items states release to the public every year because new items must be developed to replace released items. 
According to vendor officials, nearly all states release at least some test items to the public, but they vary in the percentage of items that they release. In states that release 100 percent of their test items each year, assessment costs are generally high and steady over time because states must develop additional items every year. However, some states release only a portion of items. For example, Rhode Island state officials told us that they release 20 to 50 percent of their reading and math assessment items every year. State and vendor officials told us that despite the costs associated with the release of ESEA assessment items, releasing assessment items builds credibility with parents and helps policymakers and the public understand how assessment items relate to state content standards. The cost of development has been particularly challenging for smaller states. Assessment vendors and Education officials said that the price of developing an assessment is fixed regardless of state size and that, as a result, smaller states with fewer students usually have higher per pupil costs for development. For example, state assessment officials from South Dakota told us that their state and other states with small student populations have the same development costs as states with large assessment populations, regardless of the number of students being assessed. In contrast to development costs, administration and scoring costs vary based on the number of students being assessed and the item types used. Although large and small states face similar costs for development, each has control over some factors—such as item type and releasing test items—that can increase or decrease costs. State officials from the four states we visited told us that alternate assessments based on alternate achievement standards were far more expensive on a per pupil basis than general assessments. 
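The per-pupil pattern behind both observations (small states facing the same development price as large ones, and alternate assessments taken by few students) is the same: when development cost is roughly fixed, per-pupil cost scales inversely with the number of students tested. A minimal sketch with hypothetical figures (the dollar amounts below are illustrative, not drawn from the report's data):

```python
def per_pupil_cost(fixed_development_cost: float,
                   variable_cost_per_pupil: float,
                   students_tested: int) -> float:
    """Per-pupil cost under a simplified model: a fixed development
    cost spread over all tested students, plus a per-pupil cost for
    administration and scoring (which the report notes varies with
    the number of students)."""
    return fixed_development_cost / students_tested + variable_cost_per_pupil

# Hypothetical: a large and a small state face the same $5M development cost
# and $10 per pupil for administration and scoring.
large_state = per_pupil_cost(5_000_000, 10.0, 1_000_000)  # $15 per pupil
small_state = per_pupil_cost(5_000_000, 10.0, 50_000)     # $110 per pupil
```

The same division explains the alternate-assessment figures that follow: spreading development costs over a tested population one or two orders of magnitude smaller multiplies the per-pupil cost accordingly.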
In Maryland, state officials told us that general assessments cost $30 per pupil, and alternate assessments cost between $300 and $400 per pupil. Rhode Island state officials also reported that alternate assessments cost much more than general assessments. These officials also said that, in addition to direct costs, the administration of alternate assessments has resulted in significant indirect costs, such as professional development for teachers. Technical advisors and district and state officials told us that developing alternate assessments is costly on a per pupil basis because the number of students taking these assessments is small. See appendix VI for more information about states’ use of various item types for alternate assessments. In light of recent economic conditions, many states have experienced fiscal reductions, including within ESEA assessment budgets. As of January 2009, 19 states said their total ESEA assessment budgets had been reduced as a result of state fiscal cutbacks. Fourteen states said their total ESEA assessment budgets had not been reduced, but 10 of these states also said they anticipated future reductions. Half of the 46 states that responded to the question told us that in developing their budget proposals for the next fiscal year they anticipated a reduction in state funds for ESEA assessments. For example, one state that responded to our survey said it had been asked to prepare for a 15 percent reduction in state funds. States have most often chosen multiple choice items over other item types on assessments. In 2003, we reported that the majority of states used a combination of multiple choice and a limited number of open-ended items for their assessments. 
According to our survey, multiple choice items comprise the majority of unweighted score points (points)—the number of points that can be earned based on the number of items answered correctly—for ESEA reading/language arts and mathematics general assessments administered by most responding states. Specifically, 38 of 48 states that responded said that multiple choice items comprise all or most of the points for their reading/language arts assessments, and 39 states said that multiple choice items comprise all or most of the points for mathematics assessments. Open/constructed response items are the second most frequently used item type for reading/language arts or mathematics general assessments. All states that responded to our survey reported using multiple choice items on their general reading/language arts and mathematics assessments, and most used some open/constructed response items. See appendix VI for more information about the types of items used by states on assessments. Some states also reported on our survey that, since 2002, they have increased their use of multiple choice items and decreased their use of other item types. Of the 47 states that responded to our survey question, 10 reported increasing the use of multiple choice items on reading/language arts general assessments, and 11 reported increasing their use of multiple choice items on mathematics assessments. For example, prior to the enactment of NCLBA, Maryland administered an assessment composed entirely of open/constructed response items, but state assessment officials told us that they have moved to an assessment that is primarily multiple choice and plan to eliminate open/constructed response items from assessments. However, several states reported that they have decreased the use of multiple choice items and/or increased the use of open/constructed response items. For more information about how states reported changing the mix of items on their assessments, see figure 4. 
States reported that the total cost of use and the ability to score assessments quickly were key considerations in choosing multiple choice items. In response to our survey, most states reported considering the cost of different item types and the ability to score the tests quickly when making decisions about item types for ESEA assessments. Officials from the states we visited reported choosing multiple choice items because they can be scored inexpensively within challenging time frames. State officials, assessment experts, and vendors told us that multiple choice item types are scored electronically, which is inexpensive, but that open/constructed response items are usually scored manually, making them more expensive to score. Multiple scorers of open/constructed response items are sometimes involved to ensure consistency, but this also increases costs. In addition, state officials said that training scorers of open/constructed response items is costly. For example, assessment officials in Texas told us that the state has a costly 3-week-long training process for teachers to become qualified to assess the open-ended responses. State assessment officials also told us that they used multiple choice items because they can be scored quickly, and assessment vendors reported that states were under pressure to release assessment results to the public before the beginning of the next school year in accordance with NCLBA requirements. For example, assessment officials from South Dakota told us that they explored using open/constructed response items on their assessments but that they ultimately determined it would not be feasible to return results in the required period of time. States also reported considering whether item types would meet certain technical considerations, such as validity and reliability. Texas assessment officials said that using multiple choice items allows the state more time to check test scores for reliability. 
Despite the cost- and time-saving benefits to states, the use of multiple choice items on assessments has limited the content included in the assessments. Many state assessment officials, alignment experts, and vendor officials told us that items possess different characteristics that affect how amenable they are to testing various types of content. State officials and their technical advisors told us that they have faced significant trade-offs between their efforts to assess highly cognitively complex content and their efforts to accommodate cost and time pressures. All four of the states that we visited reported separating at least a minor portion of standards into those that are used for ESEA assessment and those that are for instructional purposes only. Three of the four states reported that standards for instructional purposes only included highly cognitively complex material that could not be assessed using multiple choice items. For example, a South Dakota assessment official told us that a cognitively complex portion of the state’s new reading standards could not be tested by multiple choice; therefore, the state identified these standards as for instructional purposes only and did not include them in ESEA assessments. In addition to these three states, officials from the fourth state—Maryland—told us that they do not include certain content in their standards because it is difficult to assess. Many state officials and experts we spoke with told us that multiple choice items limit states from assessing highly cognitively complex content. For example, Texas assessment officials told us that some aspects of state standards, such as a student’s ability to conduct scientific research, cannot be assessed using multiple choice. This does not necessarily indicate that state assessments were not aligned to state standards. 
For example, if the content in standards does not include the highest cognitive level, assessments that do not address the highest cognitive level could be aligned to standards. In addition, developing and scoring items that assess cognitively challenging content is more expensive and time-consuming than it is for less challenging multiple choice items. Vendor officials had differing views about whether multiple choice items assess cognitively complex content. For example, officials from three vendors said that multiple choice items can address cognitively complex content. However, officials from another vendor told us that it is not possible to measure certain highly cognitively complex content with multiple choice items. Moreover, two other vendors told us that there are certain content and testing purposes that are more amenable to assessment with item types other than multiple choice. Several of the vendors reported that there are some standards that, because of practical limitations faced by states, cannot be assessed on standardized, paper-and-pencil assessments. For example, one vendor official told us that performance-based tasks enabled states to assess a wider variety of content but that the limited funds and quick turnaround times required under the law require states to eliminate these item types. Although most state officials, state technical advisors, and alignment experts said that ESEA assessments should include more open/constructed response items and other item types, they also said that multiple choice items have strengths and that there are challenges with other types of items. For example, in 2008 a national panel of assessment experts appointed and overseen by Education reported that multiple choice items do not measure different aspects of mathematics competency than open/constructed response items. Also, alignment experts said that multiple choice items can quickly and effectively assess lower level content, which is also important to assess. 
Moreover, open/constructed response items do not always assess highly complex content, according to an alignment expert. This point has been corroborated by several researchers who have found that performance tasks, which are usually intended to assess higher-level cognitive content, may inadvertently measure low-level content. For example, one study describes a project in which students were given a collection of insects and asked to organize them for display. High-scoring students were supposed to demonstrate complex thinking skills by sorting insects based on scientific classification systems, rather than less complex criteria, such as whether or not insects are able to fly. However, analysis of student responses showed that high scorers could not be distinguished from low scorers in terms of their knowledge of the insects’ features or of the scientific classification system. The presence or absence of highly complex content in assessments can impact classroom curriculum. Several research studies have found that content contained in assessments influences what teachers teach in the classroom. One study found that including open-ended items on an assessment prompted teachers to ask students to explain their thinking and emphasize problem solving more often. Assessment experts told us that the particular content that is tested impacts classroom curriculum. For example, one assessment expert told us that the focus on student results, combined with the focus on multiple choice items, has led to teachers teaching a narrow curriculum that is focused on basic skills. Under the federal peer review process, Education and peer reviewers examined evidence that ESEA assessments are aligned with the state’s academic standards. Specifically, peer reviewers examined state evidence that assessments cover the full depth and breadth of the state academic standards in terms of cognitive complexity and level of difficulty. 
However, consistent with federal law, it is Education’s policy not to directly examine a state’s academic standards, assessments, or specific test items. Education officials told us that it is not the department’s role to evaluate standards and assessments themselves and that few at Education have the expertise that would be required to do so. Instead, they explained that Education’s role is to evaluate the evidence provided by states to determine whether the necessary requirements are met. As an alternative to using mostly multiple choice items on ESEA assessments, states used a variety of practices to reduce costs and meet quick turnaround times while also attempting to assess cognitively complex material. For example, some states have developed and administered ESEA assessments in collaboration with other states, which has allowed these states to pool resources and use a greater diversity of item types. In addition, some states administered assessments at the beginning of the year that test students on material taught during the prior year to allow additional time for scoring of open-response items, or administered assessments online to decrease turnaround time for reporting results. States have reported advantages and disadvantages associated with each of these practices: Collaboration among states: All four states that we visited—Maryland, Texas, South Dakota, and Rhode Island—indicated interest in collaborating with other states in the development of ESEA reading/language arts or mathematics assessments as of March 2009, but only Rhode Island was doing so. Under the New England Common Assessments Program (NECAP), Rhode Island, Vermont, New Hampshire, and Maine share a vendor, a common set of standards, and item development costs. Under this agreement, the costs of administration and scoring are based on per pupil rates. NECAP states use a combination of multiple choice, short answer, and open/constructed response items. 
According to Rhode Island assessment officials, more rigorous items, including half of their math items, are typically embedded within open/constructed response items. When asked about the benefits of working in collaboration with other states to develop ESEA assessments, assessment officials for Rhode Island told us that the fiscal savings are very apparent. Specifically, they stated that Rhode Island will save approximately $250,000 per year with the addition of Maine to the NECAP consortium because, as Rhode Island assessment officials noted, Maine will take on an additional share of item development costs. Also, officials said that with a multi-state partnership, Rhode Island is able to pay more for highly skilled people who share a common vision. Finally, they said that higher standards are easier to defend politically as part of a collaboration because there are more stakeholders in favor of them. An assessment expert from New Hampshire said that the consortium has been a "lifesaver" because it has saved the state considerable funding and allowed it to meet ESEA assessment requirements. Assessment experts from Rhode Island and New Hampshire told us that there are some challenges to working in collaboration with other states to develop ESEA assessments. Because decisions are made by consensus and the NECAP states have philosophical differences in areas such as item development, scoring, and use of item types, decision-making is a lengthy process. In addition, a Rhode Island official said that assessment leadership in the states changes frequently, which also makes decision-making difficult. Beginning of year test administration: NECAP states currently administer assessments in the beginning of the year, which eases time pressures associated with the scoring of open/constructed response items. As a result, the inclusion of open/constructed response items on the assessment has been easier because there is enough time to meet NCLBA deadlines for reporting results.
However, Rhode Island officials said that there are challenges to administering tests at the beginning of the year. For example, one official stated that coordinating testing with the already challenging start of school is daunting. She explained that state assessment officials are required to use school enrollment lists to print school labels for individual tests, but because enrollment lists often change in the beginning of the year, officials are required to correct a lot of data. District assessment officials also cited this as a major problem. Computerized testing: Of the states we visited, Texas was the only one administering a portion of its ESEA assessments online, but Maryland and Rhode Island were moving toward this goal. One assessment vendor with whom we spoke said that many states are anticipating this change in the not-too-distant future. Assessment vendors and state assessment officials cited some major benefits of online assessment. For example, one vendor told us that online test administration reduces costs by using technology for automated scoring. They also told us that states are using online assessments to address cognitively complex content in standards that are difficult to assess, such as scientific knowledge that is best demonstrated through experiments. In addition, assessment officials told us that online assessments are less cumbersome and easier than paper tests to manage at the school level if schools have the required technology and that they enable quicker turnaround on scores. State and district assessment officials and a vendor with whom we spoke also cited several challenges associated with administering tests online, including security of the tests; variability in students' computer literacy; strain on school computer resources, computer classrooms/labs, and interruption of classroom/lab instruction; and lack of necessary computer infrastructure.
State officials are responsible for guiding the development of the state assessment program and overseeing vendors, but states varied in their capacity to fulfill these roles. State officials reported that they are responsible for making key decisions about the direction of their states' assessment programs, such as whether to develop alternate assessments based on modified achievement standards or online assessments. In addition, state officials said that they are responsible for overseeing the assessment vendors used by their states. However, state assessment offices varied based on the measurement expertise of their staff. About three-quarters of the 48 responding states had at least one state assessment staff member with a Ph.D. in psychometrics or another measurement-related field. Three states—North Carolina, South Carolina, and Texas—each reported having five staff with this expertise. However, 13 states did not have any staff with this expertise. In addition, states varied in the number of full-time equivalent (FTE) professional staff dedicated to ESEA assessments, ranging from 55 in Texas to 1 in Idaho and the District of Columbia. See figure 5 for more information about the number of FTEs dedicated to ESEA assessments in the states. Small states had less assessment staff capacity than larger states. The capacity of state assessment offices was related to the amount of funding spent on state assessment programs in different states, according to state officials. For example, South Dakota officials told us that they had tried to hire someone with psychometric expertise but that they would need to quadruple the salary that they could offer to compete with the salaries being offered by other organizations. State officials said that assessment vendors can often pay higher salaries than states and that it is difficult to hire and retain staff with measurement-related expertise.
State officials and assessment experts told us that the capacity of state assessment offices was the key challenge for states implementing NCLBA. Greater state capacity allows states to be more thoughtful in developing their state assessment systems and to provide greater oversight of their assessment vendors, according to state officials. Officials in Texas and other states said that having high assessment staff capacity—both in terms of number of staff and measurement-related expertise—allows them to research and implement practices that improve student assessment. For example, Texas state officials said that they conduct research regarding how LEP students and students with disabilities can best be included in ESEA assessments, which state officials said helped them improve the state's assessments for these students. In contrast, officials in lower capacity states said that they struggled to meet ESEA assessment requirements and did not have the capacity to conduct research or implement additional strategies. For example, officials in South Dakota told us that they had not developed alternate assessments based on modified achievement standards because they did not have the staff capacity or funding to implement these assessments. Also, of the three states we visited that completed a checklist of important assessment quality control steps, those with fewer assessment staff addressed fewer key quality control steps. Specifically, Rhode Island, South Dakota, and Texas reviewed and completed a CCSSO checklist on student assessment, the Quality Control Checklist for Processing, Scoring, and Reporting. These states varied with regard to fulfilling the steps outlined by this checklist.
For example, state officials in Texas, which has 55 full-time professional staff working on ESEA assessments, including multiple staff with measurement-related expertise, reported that they fulfill 31 of the 33 steps described in the checklist and address the 2 other steps in certain circumstances. Officials in Rhode Island, who told us that they have six assessment staff and work in conjunction with other states in their assessment consortium, said that they fulfill 27 of the 33 steps. South Dakota, which had three full-time professional staff working on ESEA assessments—and no staff with measurement-related expertise—addressed nine of the steps, according to state officials. For example, South Dakota officials said that the state does not verify the accuracy of answer keys in the data file provided by the vendor using actual student responses, which increases the risk of incorrectly scoring assessments. Because South Dakota does not have staff with measurement-related expertise and has fewer state assessment staff, there are fewer individuals to fulfill these quality control steps than in a state with greater capacity, according to state officials. Having staff with psychometric or other measurement-related expertise improved states' ability to oversee the work of vendors. For example, the CCSSO checklist recommends that states have psychometric or other research expertise for nearly all of the 33 steps. Having staff with measurement-related expertise allows states to know what key technical questions or data to ask of vendors, according to state officials, and without this expertise they would be more dependent on vendors. State advisors from technical advisory committees (TACs)—panels of assessment experts that states convene to assist them with technical oversight—said that TACs are useful, but that they generally only meet every 6 months.
For example, one South Dakota TAC member said that TACs can provide guidance and expertise, but that ensuring the validity and reliability of a state assessment system is a full-time job. The TAC member said that questions arise on a regular basis for which it would be helpful to bring measurement-related expertise to bear. Officials from assessment vendors varied in what they told us. Several told us that states do not need measurement-related expertise, but others said that states needed this expertise on staff. Education’s Inspector General (OIG) found reliability issues with management controls over state ESEA assessments. Specifically, the OIG found that Tennessee did not have sufficient monitoring of contractor activities for the state assessments such as ensuring that individuals scoring open/constructed response items had proper qualifications. In addition, the OIG found that the state lacked written policies and procedures describing internal controls for scoring and reporting. Although most states have met peer review expectations for validity and reliability of their general assessments, ensuring the validity of alternate assessments for students with disabilities is still a challenge. For example, our review of Education documents as of July 15, 2009, showed that 12 states’ reading/language arts and mathematics standards and assessment systems—which include general assessments and alternate assessments based on alternate achievement standards—had not received full approval under Education’s peer review process and that alternate assessments were a factor preventing approval in 11 of these states. In the four states where alternate assessments were the only issue preventing full approval, technical quality (which includes validity and reliability) or alignment was a problem. 
For example, in a letter to Hawaii education officials dated October 30, 2007, documenting steps the state must take to gain full approval of its standards and assessments system, Education officials wrote that Hawaii officials needed to document the validity and alignment of the state alternate assessment. States had more difficulty assessing the validity and reliability of alternate assessments using alternate achievement standards than ESEA assessments for the general student population. In our survey, nearly two-thirds of the states reported that assessing the validity and reliability of alternate assessments with alternate achievement standards was either moderately or very difficult. In contrast, few states reported that either validity or reliability was moderately or very difficult for general assessments. We identified two specific challenges to the development of valid and reliable alternate assessments with alternate achievement standards. First, ensuring the validity and reliability of these alternate assessments has been challenging because of the highly diverse population of students being assessed. Alternate assessments are administered to students with a wide range of significant cognitive disabilities. For example, some students may only be able to communicate by moving their eyes and blinking. As a result, measuring the achievement of these students often requires greater individualization. In addition, because these assessments are administered to relatively small student populations, it can be difficult for states to gather the evidence needed to demonstrate their validity and reliability. Second, developing valid and reliable alternate assessments with alternate achievement standards has been challenging for states because there is a lack of research about the development of these assessments, according to state officials and assessment experts.
States have been challenged to design alternate assessments that appropriately measure what eligible students know and provide similar scores for similar levels of performance. Experts and state officials told us that more research would help them ensure validity and reliability. An Education official agreed that alternate assessments are still a challenge for states and said that there is little consensus about what types of alternate assessments are psychometrically appropriate. Although there is currently a lack of research, Education is providing assistance to states with alternate assessments and has funded a number of grants to help states implement alternate assessments. States that have chosen to implement alternate assessments with modified achievement standards and native language assessments have faced similar challenges, but relatively few states are implementing these assessments. On our survey, 8 of the 47 states responding to this question reported that in 2007-08 they administered alternate assessments based on modified achievement standards, which are optional for states, and several more reported being in the process of developing these assessments. Fifteen states reported administering native language assessments, which are also optional. States reported mixed results regarding the difficulty of assessing the validity and reliability of these assessments, with about two-thirds indicating that each of these tasks was moderately or very difficult for both the alternate assessments with modified achievement standards and native language assessments. Officials in states that are not offering these assessments reported that they lacked the funds necessary to develop these assessments or that they lacked the staff or time. The four states that we visited and districts in those states had taken steps to ensure the security of ESEA assessments.
Each of the four states had a test administration manual that is intended to establish controls over the processes and procedures used by school districts when they administer the assessments. For example, the Texas test administration manual covered procedures for keeping assessment materials secure prior to administration, ensuring proper administration, returning student answer forms for scoring, and notifying administrators in the event of assessment irregularities. States also required teachers administering the assessments to sign forms saying that they would ensure security and had penalties for teachers or administrators who violated the rules. For example, South Dakota officials told us that teachers who breach the state’s security measures could lose their teaching licenses. Despite these efforts, there have been a number of documented instances of teachers and administrators cheating in recent years. For example, researchers in one major city examined the frequency of cheating by test administrators. They estimated that at least 4 to 5 percent of the teachers and administrators cheated on student assessments by changing student responses on answer sheets, providing correct answers to students, or illegitimately obtaining copies of exams prior to the test date and teaching students using knowledge of the precise exam items. Further, the study found that teachers’ and administrators’ decisions about whether to cheat responded to incentives. For example, when schools faced the possibility of being sanctioned for low assessment scores, teachers were more likely to cheat. In addition, the study found that teachers in low-performing classrooms were more likely to cheat. In our work, we identified several gaps in state assessment security policies. For example, assessment security experts said that many states do not conduct any statistical analyses of assessment results to detect indications of cheating. 
Among our site visit states, one state—Rhode Island—reported analyzing test results for unexpected gains in schools' performance. Another state, Texas, had conducted an erasure analysis to determine whether schools or classrooms had an unusually high number of erased responses that were changed to correct responses, possibly indicating cheating. These types of analyses were described as a key component of assessment security by security experts. In addition, we identified one specific state assessment policy where teachers had an opportunity to change test answers. South Dakota's assessment administration manual required classroom teachers to inspect all student answers to multiple choice items and darken any marks that were too light for scanners to read. Further, teachers were instructed to erase any stray marks and ensure that, when a student had changed an answer, the unwanted response was completely erased. This policy provided teachers an opportunity to change answers and improve assessment results. South Dakota officials told us that they had considered taking steps to mitigate the potential for cheating, such as contracting for an analysis that would identify patterns of similar erasure marks that could indicate cheating, but that it was too expensive for the state. States' assessment security policies and procedures were examined during Education's standards and assessments peer review process. According to Education's peer review guidance, which Education officials told us contained the criteria used by peer reviewers to examine state assessment systems, states must demonstrate the establishment of clear criteria for the administration, scoring, analysis, and reporting components of state assessment systems.
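An erasure analysis of the kind Texas conducted can be illustrated with a minimal sketch. The function and data below are hypothetical (the states' actual methods were not disclosed to us); the sketch flags classrooms whose count of wrong-to-right erasures is a statistical outlier, using a robust modified z-score so that a single extreme classroom does not mask itself by inflating the spread:

```python
from statistics import median

def flag_unusual_erasures(erasure_counts, threshold=3.5):
    """Flag classrooms whose wrong-to-right erasure counts are outliers.

    Uses a modified z-score based on the median absolute deviation (MAD),
    which is less distorted by the very outliers being searched for than
    a mean/standard-deviation test would be.
    """
    values = list(erasure_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # no variation to measure against
        return []
    # 0.6745 scales the MAD so the score is comparable to a standard z-score
    return [room for room, count in erasure_counts.items()
            if 0.6745 * (count - med) / mad > threshold]

# Hypothetical per-classroom counts of answers erased and changed to correct
counts = {"Room 101": 4, "Room 102": 6, "Room 103": 5,
          "Room 104": 3, "Room 105": 48, "Room 106": 5}
print(flag_unusual_erasures(counts))  # → ['Room 105']
```

In practice, a state or its vendor would also control for class size and test length, and a flag would only prompt further investigation, not a conclusion of cheating.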
One example of evidence of adequate security procedures listed in the peer review guidance was that the state uses training and monitoring to ensure that people responsible for handling or administering state assessments properly protect the security of the assessments. Education indicated that a state could submit as evidence documentation that the state’s test security policy and consequences for violating the policy are communicated to educators, and documentation of the state’s plan for training and monitoring assessment administration. According to Education officials, similar indicators are included in Education’s ongoing efforts to monitor state administration and implementation of ESEA assessment requirements. Although test security was included as a component in the peer review process, we identified several gaps in how the process evaluated assessment security. The peer reviewers did not examine whether states used any type of data analysis to review student assessment results for irregularities. When we spoke with Education’s director of student achievement and school accountability programs—who manages the standards and assessments peer review process—about how assessment security was examined in the peer review process, he told us that security was not a focus of peer review. The official indicated that the review already required a great deal of time and effort by reviewers and state officials and that Education had given a higher priority to other assessment issues. In addition, the state policy described above in which teachers darken marks or erase unwanted responses was approved through the peer review process. The Education official who manages the standards and assessments peer review process told us that the peer review requirements, including the assessment security portion, were based on the Standards for Educational and Psychological Testing when they were developed in 1999. 
The Standards provide general guidelines for assessment security, such as that test users have the responsibility of protecting the security of test materials at all times. However, they do not provide comprehensive best practices for assessment security issues. The Association of Test Publishers developed draft assessment security guidelines in 2007. In addition, in spring 2010, the Association of Test Publishers and CCSSO plan to release a best practices guide for state departments of education that is expected to cover test security. Education has made certain modifications to the peer review process but does not plan to update the assessment security requirements. Education updated the peer review protocols to address issues with the alternate assessment using modified achievement standards after those regulations were released. In addition, Education has made certain modifications to the process that were requested by states. However, Education officials indicated that they do not have plans to update the peer review assessment security requirements. Education provided technical assistance to states in a variety of ways. Education provided technical assistance through meetings, written guidance, user guides, contact with Education staff, and assistance from its Comprehensive Centers and Clearinghouses. In our survey, states reported they most often used written guidance and Education-sponsored meetings and found these helpful. States reported mixed results in obtaining assistance from Education staff. Some reported receiving consistent, helpful support while others reported staff were not helpful or responsive. Relevant program offices within Education provided additional assistance as needed.
For example, the Office of Special Education Programs provided assistance to states in developing alternate assessments for students with disabilities and the Office of English Language Acquisition, Language Enhancement, and Academic Achievement for Limited English Proficient Students assisted states in developing their assessments for LEP students. In addition, beginning in 2002, Education awarded competitive Enhanced Assessment Grants to state collaboratives working on a variety of assessment topics such as developing valid and reliable assessments for students with disabilities and LEP students. For example, one consortium of 14 states and jurisdictions was awarded about $836,000 to investigate and provide information on the validity of accommodations for future assessments for LEP students with disabilities, a group of students with dual challenges. States awarded grants are required to share the outcomes of their projects with other states at national conferences; however, since these are multi-year projects, the results of many of them are not yet available. Education's peer review process did not allow for direct communication between states and peer reviewers that could have more quickly resolved questions or problems that arose throughout the peer review process. After states submitted evidence of compliance with ESEA assessment requirements to Education, groups of three reviewers examined the materials and made recommendations to Education. To ensure the anonymity of the peer reviewers, Education did not permit communication between reviewers and state officials. Instead, Education liaisons periodically relayed peer reviewers' questions and comments to the states and then relayed answers back to the peer reviewers. Education officials told us the assurance of anonymity was an important factor in their ability to recruit peer reviewers who may not have felt comfortable making substantive comments on states' assessment systems if their identity was known.
However, the lack of direct communication resulted in miscommunication and prevented quick resolutions to questions arising during the peer review process. State officials and reviewers told us that there was not enough communication between states and reviewers during the process. For example, one state official reported on our survey that the lack of direct communication with peer reviewers led to misunderstandings that could have been readily resolved with a conversation with peer reviewers. A number of the peer reviewers who we surveyed provided similar information. For example, one said that the process was missing direct communication, which would allow state officials to provide immediate responses to the reviewers' questions. The Education official who manages the standards and assessments peer review process recognized that the lack of communication, such as a state not understanding how to interpret peer reviewers' comments, created confusion. Two experts we interviewed about peer review processes in general said that communication between reviewers and state officials is critical to having an efficient process that avoids miscommunication and unnecessary work. State officials said that the peer review process was extensive and that miscommunication made it more challenging. In response to states' concerns, Education has taken steps to improve the peer review process by offering states the option of having greater communication with reviewers after the peer review process is complete. However, the department has not taken action to allow direct communication between states and peer reviewers during the process to ensure a quick resolution to questions or issues that arise, preferring instead to rely on Education staff to relay information between states and peer reviewers while protecting the anonymity of the peer reviewers.
In some cases, the final approval decisions made by Education, which has final decision-making authority, differed from the peer reviewers' written comments, but Education could not tell us how often this occurred. Education's panels assessed each state's assessment system using the same guidelines used by the peer reviewers, and agency officials told us that peer reviewers' comments carried considerable weight in the agency's final decisions. However, Education officials said that—in addition to peer reviewers' comments—they also considered other factors in determining whether a state should receive full approval, including the time needed by the state to come into compliance and the scope of the outstanding issues. Education and state officials told us that, in some cases, Education reached different decisions than the peer reviewers. For example, the Education official who manages the standards and assessments peer review process described a situation in which a state was changing its content standards and frequently submitting new documentation for its mathematics assessment as the new content standards were incorporated. Education officials told us the peer reviewers were confused by the documentation, but Education officials gave the state credit for the most recent documentation. However, Education could not tell us how often the agency's final decisions matched the written comments of the peer reviewers because it did not track this information. In cases in which Education's final decisions differed from the peer reviewers' comments, Education did not explain to states why it reached its decisions. Although Education released the official decision letters describing reasons that states had not been approved through peer review, the letters did not document whether Education's decisions differed from the peer reviewers' comments or why they were different.
Because Education did not communicate this to states, it was unclear to states how written peer reviewer comments related to Education's decisions about peer review approval. For example, in our survey, one state reported that the comments provided to the state by peer reviewers and the letters sent to the state by Education describing their final decisions about approval status did not match. State officials we interviewed reported confusion about what issues needed to be addressed to receive full approval of their assessment system. For example, some state officials reported confusion about how to receive final peer review approval when the written summary of the peer review comments differed from the steps necessary to receive full approval that were outlined in the official decision letters from Education. The Education official who manages the standards and assessments peer review process said that in some cases the differences between decision letters and peer reviewers' written comments led to state officials being unclear about whether they were required to address the issues in Education's decision letters, comments from peer reviewers, or both. NCLBA set an ambitious goal of having all students reach academic proficiency by 2013-2014, and Congress has provided significant funding to assist states. NCLBA required a major expansion in the use of student assessments, and states must measure higher-order thinking skills and understanding with these assessments. Education currently reviews states' adherence to NCLBA standards and assessment requirements through its peer review process, in which the agency examines evidence submitted by each state that is intended to show that state standards and assessment systems meet NCLBA requirements. However, ESEA, as amended, prohibits federal approval or certification of state standards.
Education reviews the procedures that states use to develop their standards, but does not review the state standards on which ESEA assessments are based or evaluate whether state assessments cover highly cognitively complex content. As a result, there is no assurance that states include highly cognitively complex content in their assessments. Although Education does not assess whether state assessments cover highly complex content, Education's peer review process does examine state assessment security procedures, which are critical to ensuring that assessments are valid and reliable. In addition, the security of ESEA assessments is critical because these assessments are the key tool used to hold schools accountable for student performance. However, Education has not made assessment security a focus of its peer review process and has not incorporated best practices in assessment security into its peer review protocols. Unless Education takes advantage of forthcoming best practices that include assessment security issues, incorporates them into the peer review process, and places proper emphasis on this important issue, some states may continue to rely on inadequate security procedures that could affect the reliability and validity of their assessment systems. State ESEA assessment systems are complex and require a great deal of time and effort from state officials to develop and maintain. Given the size of these systems, the peer review process is extensive and required a great deal of time and effort on the part of state officials. However, because Education, in an attempt to maintain peer reviewer confidentiality, does not permit direct communication between state officials and peer reviewers, miscommunication may have resulted in some states spending more time than necessary clarifying issues and providing additional documentation.
While Education officials told us the assurance of anonymity was an important factor in their ability to recruit peer reviewers, anonymity should not automatically preclude communications between state officials and peer reviewers during the peer review process. For example, technological solutions could be used to retain anonymity while still allowing for direct communications. Direct communication between reviewers and state officials during the peer review process could reduce the amount of time and effort required of both peer reviewers and state officials. The standards and assessments peer review is a high-stakes decision-making process for states. States that do not meet ESEA requirements for their standards and assessment systems can ultimately lose federal Title I, Part A funds. Transparency is a critical element for ensuring that decisions are fully understood and peer review issues are addressed by states. However, because critical Education decisions about state standards and assessment systems sometimes differed from peer reviewers' written comments, and the reasons behind these differences were not communicated to states, states were confused about the issues they needed to address. To help ensure the validity and reliability of ESEA assessments, we recommend that the Secretary of Education update Education's peer review protocols to incorporate best practices in assessment security when they become available in spring 2010. To improve the efficiency of Education's peer review process, the Secretary of Education should develop methods for peer reviewers and states to communicate directly during the peer review process so questions that arise can be addressed quickly. For example, peer reviewers could be assigned a generic e-mail address that would allow them to remain anonymous but still communicate directly with states.
To improve the transparency of its approval decisions pertaining to states' standards and assessment systems and help states understand what they need to do to improve their systems, in cases where the Secretary of Education's peer review decisions differed from those of the reviewers, the Secretary should explain why they differed. We provided a draft of this report to the Secretary of Education for review and comment. Education's comments are reproduced in appendix VII. In its comments, Education recognizes the value of test security practices in maintaining the validity and reliability of states' assessment systems. However, regarding our recommendation to incorporate test security best practices into the peer review protocols, Education indicated that it believes that its current practices are sufficient to ensure that appropriate test security policies and procedures are implemented. Education officials indicated that states currently provide the agency with evidence of state statutes, rules of professional conduct, administrative manuals, and memoranda that address test security and reporting of test irregularities. Education officials also stated that additional procedures and requirements, such as security methods and techniques to uncover testing irregularities, are typically included in contractual agreements with test publishers or collective bargaining agreements and that details on these additional provisions are best handled locally based on considerations of risk and cost. Furthermore, Education stated that it plans to continue to monitor test security practices and to require corrective action by states it finds to have weak or incomplete test security practices. As stated in our conclusions, we continue to believe that Education should incorporate forthcoming best practices, including those addressing assessment security, into the peer review process.
Otherwise, some states may continue to rely on inadequate security procedures, which could ultimately affect the reliability and validity of their assessment systems. Education agreed with our recommendations to develop methods to improve communication during the review process and to identify for states why its peer review decisions in some cases differed from peer reviewers’ written comments. Education officials noted that the agency is considering the use of a secure server as a means for state officials to submit questions, documents, and other evidence to strengthen communication during the review process. Education also indicated that it will conduct a conference call prior to upcoming peer reviews to clarify why the agency’s approval decisions in some cases differ from peer reviewers’ written comments. Education also provided technical comments that we incorporated into the report as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix VIII. The objectives of this study were to answer the following questions: (1) How have state expenditures on assessments required by the Elementary and Secondary Education Act of 1965 (ESEA) changed since the No Child Left Behind Act of 2001 (NCLBA) was enacted in 2002, and how have states spent funds? (2) What factors have states considered in making decisions about question (item) type and content of their ESEA assessments? (3) What challenges, if any, have states faced in ensuring the validity and reliability of their ESEA assessments? 
(4) To what extent has the U.S. Department of Education (Education) supported and overseen state efforts to comply with ESEA assessment requirements? To meet these objectives, we used a variety of methods, including document reviews of Education and state documents, a Web-based survey of the 50 states and the District of Columbia, interviews with Education officials and assessment experts, site visits in four states, and a review of the relevant federal laws and regulations. The survey we used was reviewed by several external reviewers, and we incorporated their comments as appropriate. We conducted this performance audit from August 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To learn how state expenditures for ESEA assessments have changed since NCLBA was enacted in 2002 and how states spent these funds, we analyzed responses to our state survey, which was administered to state assessment directors in January 2009. In the survey, we asked states to provide information about the percentage of their funding from federal and state sources, their use of contractors, cost and availability of human resources, and rank order cost of assessment activities. The survey used self-administered, electronic questionnaires that were posted on the Internet. We received responses from 49 states, for a 96 percent response rate. We did not receive responses from New York and Rhode Island. We reviewed state responses and followed up by telephone and e-mail with states for additional clarification and obtained corrected information for our final survey analysis. 
Nonresponse is one type of nonsampling error that could affect data quality. Other types of nonsampling error include variations in how respondents interpret questions, respondents' willingness to offer accurate responses, and data collection and processing errors. We took steps in developing the survey and in collecting, editing, and analyzing the survey data to minimize such nonsampling error. In developing the Web survey, we pretested draft versions of the instrument with state officials and assessment experts in various states to check the clarity of the questions and the flow and layout of the survey. On the basis of the pretests, we made slight to moderate revisions to the survey. Using a Web-based survey also helped reduce error in our data collection effort. By allowing state assessment directors to enter their responses directly into an electronic instrument, this method automatically created a record for each assessment director in a data file and eliminated the need for, and the errors (and costs) associated with, a manual data entry process. In addition, the program used to analyze the survey data was independently verified to ensure the accuracy of this work. We also conducted site visits to four states—Maryland, Rhode Island, South Dakota, and Texas—that reflect a range of population size and results on Education's assessment peer review. On these site visits we interviewed state officials, officials from two districts in each state—selected in consultation with state officials to cover heavily- and sparsely-populated areas—and technical advisors to each state. To gather information about factors states consider when making decisions about the item type and content of their assessments, we analyzed survey results.
We asked states to provide information about their use of item types, including the types of items they use for each of their assessments (e.g., general, alternate, modified achievement standards, or native language), and changes in their relative use of multiple choice and open/constructed response items and factors influencing their decisions on which item types to use for reading/language arts and mathematics general assessments. We interviewed selected state officials and state technical advisors. We also interviewed officials from other states that had policies that helped address the challenge of including cognitively-complex content in state assessments. We interviewed four major assessment vendors to provide us a broad perspective of the views of the assessment industry. Vendors were selected in consultation with the Association of American Publishers because its members include the major assessment vendors states have contracted with for ESEA assessment work. We reviewed studies that our site visit states submitted as evidence for Education’s peer review approval process to document whether assessments are aligned with academic content standards, including the level of cognitive complexity in standards and assessments. We also spoke with representatives from three alignment organizations that states most frequently hire to conduct this type of study, and representatives of a fourth alignment organization that was used by one of our site visit states, who provided a national perspective on the cognitive complexity of assessment content. In addition, we reviewed selected academic research studies that examined the relationship between assessments and classroom curricula using GAO’s data reliability tests. We determined that the results of these research studies were sufficiently valid and reliable for the purposes of our work. 
To gather information about challenges states have faced in ensuring validity and reliability, we used our survey to collect information about state capacity and technical quality issues associated with assessments. We conducted reviews of state documents, such as assessment security protocols, and interviewed state officials. We asked state officials from the states we visited to complete a CCSSO checklist on student assessment—the Quality Control Checklist for Processing, Scoring, and Reporting—to show which steps they took to ensure quality control in high-stakes assessment programs. We used this specific document created by CCSSO because, as an association of public education officials, the organization provides considerable technical assistance to states on assessment. We confirmed with CCSSO that the document is still valid for state assessment programs and has not been updated. We also interviewed four assessment vendors and assessment security experts that were selected based on the extent of their involvement in statewide assessments. We also reviewed summaries of the peer review issues for states that have not yet been approved through the peer review process, the portion of peer review protocols that address assessment security, and the assessment security documents used to obtain approval in our four site visit states. To address the extent of Education's support of ESEA assessment implementation, we reviewed Education guidance, summaries of Education assistance, peer review training documents, and previous GAO work on peer review processes. In addition, we analyzed survey results. We asked states to provide information on the federal role in state assessments, including their perspectives on technical assistance offered by Education and Education's peer review process. We also asked peer reviewers to provide their perspectives on Education's peer review process.
Of the 76 peer reviewers Education provided us, we randomly sampled 20 and sent them a short questionnaire asking about their perspectives on the peer review process. We obtained responses from nine peer reviewers. In addition, we interviewed Education officials in charge of the peer review and assistance efforts. 1. Evidence based on test content (content validity). Content validity is the alignment of the standards and the assessment. 2. Evidence of the assessment’s relationship with other variables. This means documenting the validity of an assessment by confirming its positive relationship with other assessments or evidence that is known or assumed to be valid. For example, if students who do well on the assessment in question also do well on some trusted assessment or rating, such as teachers’ judgments, it might be said to be valid. It is also useful to gather evidence about what a test does not measure. For example, a test of mathematical reasoning should be more highly correlated with another math test, or perhaps with grades in math, than with a test of scientific reasoning or a reading comprehension test. 3. Evidence based on student response processes. The best opportunity for detecting and eliminating sources of test invalidity occurs during the test development process. Items need to be reviewed for ambiguity, irrelevant clues, and inaccuracy. More direct evidence bearing on the meaning of the scores can be gathered during the development process by asking students to “think-aloud” and describe the processes they “think” they are using as they struggle with the task. Many states now use this “assessment lab” approach to validating and refining assessment items and tasks. 4. Evidence based on internal structure. A variety of statistical techniques have been developed to study the structure of a test. These are used to study both the validity and the reliability of an assessment. 
The well-known technique of item analysis used during test development is actually a measure of how well a given item correlates with the other items on the test. A combination of several statistical techniques can help to ensure a balanced assessment, avoiding, on the one hand, an assessment that covers a narrow range of knowledge and skills but shows very high reliability and, on the other hand, an assessment that covers so wide a range of content and skills that the consistency of the results decreases. In validating an assessment, the state must also consider the consequences of its interpretation and use. States must attend not only to the intended effects, but also to unintended effects. The disproportional placement of certain categories of students in special education as a result of accountability considerations rather than appropriate diagnosis is an example of an unintended—and negative—consequence of what had been considered proper use of instruments that were considered valid. The traditional methods of portraying the consistency of test results, including reliability coefficients and standard errors of measurement, should be augmented by techniques that more accurately and visibly portray the actual level of accuracy. Most of these methods focus on error in terms of the probability that a student with a given score, or pattern of scores, is properly classified at a given performance level, such as "proficient." For school-level or district-level results, the report should indicate the estimated amount of error associated with the percent of students classified at each achievement level. For example, if a school reported that 47 percent of its students were proficient, the report might say that the reader could be confident at the 95 percent level that the school's true percent of students at the proficient level is between 33 percent and 61 percent.
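The margin-of-error reporting described above can be sketched with the usual normal approximation for a proportion. The school size below is a hypothetical value chosen so the numbers line up with the report's example (47 percent proficient, with a 95 percent interval of roughly 33 to 61 percent); this is an illustration, not a method prescribed by the report.

```python
import math

def proficiency_interval(p_hat: float, n: int, z: float = 1.96):
    """95 percent confidence interval for a school's true percent proficient,
    using the normal approximation to the binomial."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# Hypothetical school: 49 tested students, 47 percent scoring proficient.
low, high = proficiency_interval(0.47, 49)
print(f"{low:.0%} to {high:.0%}")  # roughly "33% to 61%", as in the example above
```

With only about 50 tested students, the interval spans nearly 30 percentage points, which is why the report urges presenting classification error alongside the point estimate.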
Furthermore, since the focus on results in a Title I context is on improvement over time, the report should also indicate the accuracy of the year-to-year changes in scores. To ensure that their standards and assessments are aligned, states need to consider whether the assessments:

Cover the full range of content specified in the state's academic content standards, meaning that all of the standards are represented legitimately in the assessments.

Measure both the content (what students know) and the process (what students can do) aspects of the academic content standards.

Reflect the same degree and pattern of emphasis apparent in the academic content standards (e.g., if the academic content standards place a lot of emphasis on operations, then so too should the assessments).

Reflect the full range of cognitive complexity and level of difficulty of the concepts and processes described, and depth represented, in the state's academic content standards, meaning that the assessments are as demanding as the standards.

[Table: for each assessment item type, the number of states that use it, the number of states that responded to the question, and the number that did not respond or checked "no response."]

Bryon Gordon, Assistant Director, and Scott Spicer, Analyst-in-Charge, managed this assignment and made significant contributions to all aspects of this report. Jaime Allentuck, Karen Brown, and Alysia Darjean also made significant contributions. Additionally, Carolyn Boyce, Doreen Feldman, Cynthia Grant, Sheila R. McCoy, Luann Moy, and Charlie Willson aided in this assignment.

The No Child Left Behind Act of 2001 (NCLBA) requires states to develop high-quality academic assessments aligned with state academic standards. Education has provided states with about $400 million for NCLBA assessment implementation every year since 2002.
GAO examined (1) changes in reported state expenditures on assessments, and how states have spent funds; (2) factors states have considered in making decisions about question (item) type and assessment content; (3) challenges states have faced in ensuring that their assessments are valid and reliable; and (4) the extent to which Education has supported state efforts to comply with assessment requirements. GAO surveyed state and District of Columbia assessment directors, analyzed Education and state documents, and interviewed assessment officials from Maryland, Rhode Island, South Dakota, and Texas and eight school districts in addition to assessment vendors and experts. States reported their overall annual expenditures for assessments have increased since passage of the No Child Left Behind Act of 2001 (NCLBA), which amended the Elementary and Secondary Education Act of 1965 (ESEA), and assessment development was the largest expense for most states. Forty-eight of 49 states that responded to our survey said that annual expenditures for ESEA assessments have increased since NCLBA was enacted. Over half of the states reported that overall expenditures grew due to development of new assessments. Test and question--also referred to as item--development was most frequently reported by states to be the largest ESEA assessment expense, followed by scoring. State officials in selected states reported that alternate assessments for students with disabilities were more costly than general population assessments. In addition, 19 states reported that assessment budgets had been reduced by state fiscal cutbacks. Cost and time pressures have influenced state decisions about assessment item type--such as multiple choice or open/constructed response--and content. States most often chose multiple choice items because they can be scored inexpensively within tight time frames resulting from the NCLBA requirement to release results before the next school year. 
State officials also reported facing trade-offs between efforts to assess highly complex content and to accommodate cost and time pressures. As an alternative to using mostly multiple choice, some states have developed practices, such as pooling resources from multiple states to take advantage of economies of scale, that let them reduce cost and use more open/constructed response items. Challenges facing states in their efforts to ensure valid and reliable assessments involved staff capacity, alternate assessments, and assessment security. State capacity to provide vendor oversight varied, both in terms of number of state staff and measurement-related expertise. Also, states have been challenged to ensure validity and reliability for alternate assessments. In addition, GAO identified several gaps in assessment security policies that were not addressed in Education's review process for overseeing state assessments that could affect validity and reliability. An Education official said that assessment security was not a focus of its review. The review process was developed before recent efforts to identify assessment security best practices. Education has provided assistance to states, but issues remain with communication during the review process. Education provided assistance in a variety of ways, and states reported that they most often used written guidance and Education-sponsored meetings and found these helpful. However, Education's review process did not allow states to communicate with reviewers during the process to clarify issues, which led to miscommunication. In addition, state officials were in some cases unclear about what review issues they were required to address because Education did not identify for states why its decisions differed from the reviewers' written comments. |
The TFF is a multidepartmental fund and has four primary goals: to (1) deprive criminals of assets used in or acquired through illegal activities; (2) encourage joint operations among federal, state, and local law enforcement agencies, as well as foreign countries; (3) protect the rights of individuals; and (4) strengthen law enforcement. TEOAF is responsible for providing management oversight of the TFF, which is the receipt account for the deposit of nontax forfeitures made by Treasury and DHS participating agencies. DHS components that participate in the TFF contribute revenues through forfeitures made as a result of their investigations and operations. They also receive payments and reimbursements from the fund for expenses incurred during the seizure and forfeiture process, such as investigative or transportation costs. Table 1 shows DHS component activities that contribute to the TFF. The asset forfeiture process involves a number of steps, including planning the seizure; seizing and taking custody of the asset; notifying interested parties; and addressing any claims and petitions, including those from third parties. Within the asset forfeiture process, there are two types of forfeiture: administrative and judicial. In administrative forfeitures, a federal agency is permitted to commence forfeiture proceedings on seized assets without judicial involvement. In judicial forfeitures, both civil and criminal, assets may be forfeited to the United States by filing a forfeiture action in a federal court. In civil forfeitures, the action is against the assets and thus does not require that the owner of the assets be charged with a federal offense. The federal government need only prove a connection between the assets and the crime. In contrast, criminal forfeiture requires a conviction of the defendant before assets can be forfeited.
According to TEOAF officials, it can take from many months to several years to complete the forfeiture process, depending on a variety of factors, including, among other things, the types of assets seized; the number of parties involved; and, if applicable, the litigation process, with judicial forfeitures generally taking more time. DHS components that have forfeiture authority and are therefore revenue producers—ICE, USSS, and CBP—can conduct equitable sharing on behalf of the TFF with federal, state and local, and other law enforcement entities. State and local law enforcement agencies typically qualify for equitable sharing by participating directly with DHS components in joint investigations leading to the seizure and forfeiture of assets. Although such qualification is less common, state and local agencies can also qualify for equitable sharing by requesting that DHS components adopt a case initiated at the state or local level, provided that the assets in question are forfeitable under federal law. According to TEOAF officials, the equitable sharing of forfeiture proceeds from seizures has proved invaluable in fostering enhanced cooperation among federal, state and local, and other law enforcement entities. As the management component of the TFF, TEOAF provides guidance on the equitable sharing program, including setting forth policies, procedures, and oversight of the program. Treasury's most recent guidance, which it issued in 2004, governs how state and local law enforcement agencies should apply for equitable sharing and how DHS components should make equitable sharing determinations. Treasury also established guidance on decision-making authority for equitable sharing.
Specifically, the lead federal agency—in this case, the DHS component—is responsible for making equitable sharing determinations when forfeited assets are less than $1 million, which are designated as low-value determinations. Treasury is responsible for making determinations when forfeited assets are $1 million or more, which are designated as high-value determinations. DHS components are responsible for managing equitable sharing in joint investigations with state and local law enforcement agencies and for following the equitable sharing guidance, such as ensuring that sharing in joint investigations reflects the degree of direct participation of the agency in the law enforcement effort resulting in the forfeiture. From fiscal years 2003 through 2013, Treasury reported that DHS components contributed about $3.6 billion to the TFF and obligated about $2.6 billion for costs associated with forfeiture activities. At the end of each fiscal year, a balance of funds remains in the TFF to maintain operations at the start of the next fiscal year and, as available, to fund additional expenditures, including law enforcement activities by TFF members. For example, Treasury reported that from fiscal years 2003 through 2013, about $348 million of the fund's remaining balances has been used to fund law enforcement activities and projects by DHS components. Treasury reported that from fiscal years 2003 through 2013, DHS components contributed approximately $3.6 billion of the TFF's approximately $7 billion in total revenues, or 52 percent of total revenues. Over this period, the DHS components' contribution to the TFF fluctuated annually but generally remained at 50 percent or more of total TFF revenues per year.
Among the DHS components contributing to the TFF—ICE, USSS, and CBP—ICE contributed the majority of revenue. Of approximately $1.1 billion in revenues, ICE contributed approximately $1 billion (91 percent), while USSS contributed $52 million and CBP contributed $51 million (approximately 4.5 percent each), as shown in figure 1. Treasury reported that from fiscal years 2003 through 2013, DHS component obligations from the TFF totaled approximately $2.6 billion, or 54 percent, of the TFF's total obligations of approximately $4.8 billion. As revenues have fluctuated annually, there generally has been a concurrent increase or decrease in obligations in support of asset forfeiture activities. In fiscal year 2013, DHS component obligations were the highest during this 11-year period, at $476 million, coinciding with an increase in revenues that year. Prior to 2013, obligations by DHS components generally ranged from $123 million to $287 million. As with revenues, ICE was responsible for the majority of obligations among DHS components contributing to the TFF. Figure 2 shows the obligations by each DHS component, as well as by non-DHS members of the fund, from fiscal years 2003 through 2013. According to TEOAF officials, the TFF in its capacity as a multidepartmental fund collects and uses revenues from forfeitures to focus resources on enhancing support for law enforcement efforts, including the quality of investigations. Accordingly, revenues resulting from forfeitures are used to obligate funds for the forfeiture program's expenses in four major categories—equitable sharing payments, remission and mitigation payments, seizure investigative costs and asset management expenses, and other expenses. Equitable sharing payments: Treasury reported that from fiscal years 2003 through 2013, equitable sharing payments constituted the largest TFF obligation by DHS components.
During this period, DHS components shared approximately $1.2 billion, or 45 percent of total DHS obligations, with a range of state and local law enforcement agencies across the country—as well as other federal agencies and foreign entities—that participated in law enforcement efforts resulting in forfeitures. Specifically, from fiscal years 2003 through 2012, DHS components' obligations for equitable sharing payments ranged from $48 million to $136 million per year. However, in fiscal year 2013, DHS components shared approximately $355 million, the highest amount of obligations for equitable sharing payments by DHS components during this 11-year period. Among the three DHS components making equitable sharing payments, ICE made up over 90 percent of total DHS obligations for equitable sharing payments. State and local agencies accounted for the majority of sharing recipients and for an average of 96 percent of total obligations for equitable sharing payments from fiscal years 2010 through 2012. According to officials at all nine state and local law enforcement agencies we met with, the equitable sharing program has improved the relationship between federal agencies and their offices. Moreover, officials stated that under the current fiscal constraints, these funds are needed by their agencies and have allowed them to purchase equipment such as bulletproof vests, weapons, mobile computers, and police station security cameras. See figure 3 for equitable sharing payments made by DHS components to state and local law enforcement agencies within each state in fiscal year 2013. Payments for remission and mitigation: According to TEOAF officials, a priority of all TFF members is to return assets to victims of crime, and accordingly, remission and mitigation payments are another major cost category across all DHS components.
No funds are shared with state and local law enforcement partners until remission and mitigation payments have been made to compensate victims or other third parties for their financial losses. Treasury reported that from fiscal years 2003 through 2013, total obligations for DHS remission and mitigation payments were approximately $477 million, or about 19 percent of total DHS obligations, and varied from 2 to 45 percent of DHS obligations each year. For example, in fiscal year 2008, DHS components made $128 million in obligations for remission and mitigation payments, or 45 percent of total obligations. In contrast, in fiscal year 2013, DHS components made $30 million in obligations for remission and mitigation payments, accounting for 6 percent of total obligations. Moreover, among DHS components, USSS made up between 60 and 90 percent of total DHS obligations for remission and mitigation payments from fiscal years 2010 through 2013. TEOAF officials attribute the variation in total obligations for remission and mitigation payments each year to the different types of investigations that lead to forfeiture from 1 year to the next. According to these officials, higher remission and mitigation payments in a fiscal year may be due in part to high-impact forfeitures resulting from fraud investigations with significant numbers of victims. Seizure investigative costs and asset management expenses: In addition to carrying out equitable sharing and making payments to victims, DHS components use funds to pay for the costs associated with the seizure of assets. Treasury reported that over this 11-year period, total obligations for seizure investigative costs and asset management expenses were $450 million, or approximately 18 percent of total DHS obligations.
These costs included investigative and asset management expenses (e.g., salaries for positions supporting the asset forfeiture program, travel for oversight activities, overtime worked by specialists involved in securing seized merchandise, and equipment and supplies). For example, one of CBP’s primary responsibilities is to secure the border at and between points of entry. Accordingly, CBP is generally responding to reports and seizures of illegal narcotics and other contraband smuggling, including firearms and ammunition. These seizures result in additional costs, including the storage of assets and disposal or destruction expenses. Other expenses: All DHS components have a variety of other program operations expenses, including compensation to informants and reimbursement for the cost of training. Treasury reported that from fiscal years 2003 through 2013, DHS components had a total of $483 million, or approximately 19 percent of total DHS component obligations, in other program operations expenses. These other expenses include a total of seven expense categories, such as asset-related contract services, funds to compensate experts and consultants for their services, reimbursement to state and local law enforcement agencies for overtime costs incurred during joint special operations, and training. At the end of each fiscal year, the TFF maintains a balance from revenue contributions into the fund that exceeds obligations incurred throughout the year. TFF balances at the end of each fiscal year have progressively increased since fiscal year 2003. Treasury reported that TFF balances totaled $75 million in fiscal year 2003 and $888 million in fiscal year 2013.
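As a rough arithmetic cross-check (not part of the report's methodology), the four DHS cost categories reported above can be totaled and their shares recomputed; the dollar figures below are the report's rounded totals in millions, so the computed shares differ slightly from the report's stated percentages.

```python
# Reported TFF cost categories for DHS components, FY2003-2013,
# in $ millions (rounded figures taken from the report text).
categories = {
    "equitable sharing payments": 1200,         # reported as ~45 percent
    "remission and mitigation payments": 477,   # reported as ~19 percent
    "seizure and asset management costs": 450,  # reported as ~18 percent
    "other program operations expenses": 483,   # reported as ~19 percent
}

total = sum(categories.values())  # roughly $2.61 billion in total obligations
for name, amount in categories.items():
    # Recompute each category's share of the total.
    print(f"{name}: ${amount}M ({amount / total:.1%} of total)")
```

Small deviations (for example, equitable sharing computing to about 46 percent rather than 45) reflect rounding of the underlying dollar amounts in the source.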
TEOAF carries over funds at the end of each fiscal year to maintain operations, including the anticipated costs associated with continuing forfeiture activities at the start of the next fiscal year, before revenue from forfeitures starts coming in. TEOAF reported that from fiscal years 2003 through 2008, it had carried over between $50 million and $70 million at the end of each fiscal year to maintain operations at the start of the next year. Since the end of fiscal year 2009, Treasury reported that a set amount of $100 million has been carried over to fund operations at the start of the next fiscal year. TEOAF uses balances in excess of this amount—excess unobligated balances—to cover additional obligations. These additional obligations include funding for law enforcement activities by TFF members, rescissions, and other uses. According to TEOAF officials, the balances available to cover these obligations vary each year, as they are determined by a variety of factors, including the enacted budgets, negotiations with Congress, and ultimately the enacted rescissions. Figure 4 shows the carryover funds retained in the TFF at the end of each fiscal year to maintain operations, as well as the amounts set aside for additional obligations. TEOAF officials referred to these balances as Super Surplus, which represents the remaining unobligated balance at the close of the fiscal year after an amount is reserved to fund operations in the next fiscal year. Super Surplus can be used for any federal law enforcement purpose (31 U.S.C. § 9703(g)(4)(B)). For the purposes of this review, we refer to Super Surplus funds as excess unobligated balances. Treasury reported that from fiscal years 2003 through 2013, about $348 million of the excess unobligated balances has been obligated to fund DHS component law enforcement activities and projects.
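The split between the operating carryover reserve and the excess unobligated balance (Super Surplus) can be sketched as simple arithmetic. The $100 million reserve is the figure reported since fiscal year 2009; the year-end balance used below is hypothetical and purely illustrative.

```python
# Since FY2009, a set $100 million (in $ millions here) has been carried
# over to fund operations at the start of the next fiscal year.
CARRYOVER_RESERVE = 100

def split_year_end_balance(balance):
    """Split a TFF year-end unobligated balance (in $ millions) into the
    operating carryover reserve and the excess unobligated balance
    (what TEOAF officials call Super Surplus)."""
    carryover = min(balance, CARRYOVER_RESERVE)
    excess = max(balance - CARRYOVER_RESERVE, 0)
    return carryover, excess

# Hypothetical year-end balance of $340M (illustrative, not from the report).
carryover, excess = split_year_end_balance(340)
print(carryover, excess)  # 100 240
```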
Figure 5 shows the funds received from fiscal years 2003 through 2013 by DHS components—ICE, USSS, CBP, and USCG—as well as the total funds received by other agencies—such as Treasury’s Financial Crimes and Enforcement Network—for law enforcement activities. DHS, per Treasury’s reported data, has submitted and received approval from Treasury to fund a variety of projects across all four DHS components that participate in the TFF. For example, CBP received $29.6 million in fiscal year 2010, of which $15 million was obligated to support the construction of Border Patrol facilities in southwest border locations and the purchase of equipment for these facilities; $6.8 million was used for the purchase and installation of Non-Intrusive Inspection equipment; and the remainder was spread among smaller purchases such as field and intelligence equipment. ICE received $21.3 million in fiscal year 2011 for a range of activities, including $6 million to defray the costs of Title III court-ordered intercepts, which support investigations related to the southwest border, among other things; $2 million to cover costs of investigative activities with ICE’s HSI, such as translation, transcription, and duplication services; $2 million to support Border Enforcement Security Taskforces; and $2.5 million to purchase a system to conduct multiple undercover operations online simultaneously. The remainder was spread among other projects and needs such as replacement of an undercover operations database. USSS received about $27 million in fiscal year 2012, and obligated $11 million for the purchase of equipment and tools to enhance USSS’s protection capabilities, including metal detector equipment and X-ray equipment replacement, and $6 million to acquire desktop and laptop computers to replace the aging inventory of computers for USSS task forces. The remainder was spread among other projects and needs such as investigative software.
USCG received $1.5 million in fiscal year 2013 to fund the upgrade and purchase of fingerprint biometric kits for patrol boats, cutters, and the Deployable Operations Group, allowing USCG to run fingerprints against other federal law enforcement databases. Overall, the total funds received by DHS components for law enforcement activities varied from year to year. According to TEOAF officials, because of the current fiscally constrained environment, the TFF excess unobligated balances available each year are important, as they help to fund innovative initiatives such as the purchase of equipment, training, and other programs that the fund’s members may otherwise not be able to fund. Additionally, the Deputy Assistant Director of DHS’s Budget Division, Office of the Chief Financial Officer, stated that DHS encourages components to request these funds, particularly to support innovative activities that develop new capabilities or provide proof of concept for new technologies or processes. Since 2009, TEOAF has retained excess unobligated balances to cover yearly proposed rescissions. In fiscal year 2009—the first year a TFF rescission was proposed and enacted—$30 million was rescinded from the TFF, an amount that grew to a $950 million rescission in fiscal year 2013. The effect of these rescissions has been a reduction in TEOAF’s budgetary resources, thereby decreasing the amount of money TEOAF has available to obligate for allowable purposes. A rescission could potentially decrease the size of the federal deficit, provided the decreased spending from the rescission is not offset by increased spending elsewhere. For annual appropriations, rescinded funds are generally taken from an agency and returned to the Treasury before they are obligated. However, per OMB guidance, from fiscal years 2009 through 2013, rescinded funds from the TFF were not returned to the Treasury.
Rescinded funds are generally permanent and deposited into the General Fund of the Treasury, which is not the same fund as the TFF. As a result, TEOAF treated the funds as unavailable for obligation for the remainder of the fiscal year for which the rescission was enacted. With the enactment of a new rescission for the subsequent fiscal year, TEOAF continued to treat the rescinded funds as unavailable for obligation and applied the amounts to the rescission in the next fiscal year. For example, the $30 million that was rescinded from the TFF in fiscal year 2009 was treated as unavailable for obligation in fiscal year 2009, and was then applied to cover part of the enacted $90 million rescission in fiscal year 2010. To make up the difference needed to meet the $90 million rescission in fiscal year 2010, TEOAF used excess unobligated balances in the amount of $60 million. According to TEOAF officials, one effect of these rescissions is that a larger portion of the balances available for additional obligations is being reserved to cover rescissions and is unavailable to fund other obligations such as law enforcement activities. In fiscal year 2014, Congress passed two rescissions of TFF funds totaling approximately $1.7 billion. First, the Bipartisan Budget Act of 2013 permanently canceled $867 million of the TFF’s unobligated balances and TEOAF returned the total to the General Fund of the Treasury. Accordingly, unlike in previous years, these funds will not be available for any purpose, including applying to any subsequent rescission. In addition, the Consolidated Appropriations Act of 2014 rescinded $836 million of the TFF’s unobligated balances. As in previous fiscal years, TEOAF did not return these rescinded funds from the TFF to the Treasury, and they are unavailable for obligation in fiscal year 2014.
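The fiscal year 2009 to 2010 roll-forward described above reduces to simple arithmetic. The figures below are the report's; the sketch only restates the mechanics of applying prior-year withheld funds to the next year's enacted rescission.

```python
# FY2009-FY2010 TFF rescission roll-forward, as described in the report
# (all figures in $ millions).
fy2009_rescission = 30  # withheld from obligation in FY2009, not returned
fy2010_rescission = 90  # rescission enacted for FY2010

# The FY2009 withheld amount was applied toward the FY2010 rescission...
carried_forward = fy2009_rescission
# ...and excess unobligated balances covered the remainder.
excess_balances_needed = fy2010_rescission - carried_forward
print(excess_balances_needed)  # 60 -- matches the $60M TEOAF used
```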
According to TEOAF officials, the TFF received revenues from several large forfeiture cases in the first quarter of fiscal year 2014 that helped enable the fund to operate with these rescissions under its current financial plan. DHS components have designed controls to help ensure compliance with Treasury’s equitable sharing guidance, such as designing multiple levels of review for equitable sharing determinations. However, added controls—specifically, full documentation of the basis for determinations and additional guidance on the factors to consider when making determinations—could further enhance transparency and consistency across determinations, among other things. DHS components have taken steps to help ensure the equitable sharing process complies with required time frames. DHS components that conduct equitable sharing—ICE, USSS, and CBP—have designed controls to help ensure compliance with Treasury’s guidance. The guidance requires that sharing in joint investigations reflect the degree of direct participation of the agency in the law enforcement effort resulting in the forfeiture, in accordance with federal law. Specifically, it directs responsible officials to base equitable sharing determinations on the work hours that all participating agencies expended on the investigation and then, if applicable, consider qualitative factors regarding additional contributions that agencies may have made, such as providing unique and indispensable assistance, to adjust percentages. Equitable Sharing Determination Example: HSI initiated a task force investigation of a suspected money-laundering organization, which was identified as being involved in the laundering and transportation of narcotics proceeds from the United States into Colombia. As a result, the organization was found guilty of conspiracy to defraud the government and directed to forfeit $1,000,000.
Three police departments and two county sheriffs’ offices participated in the task force and provided assistance in executing the search warrant, conducting interviews, and cataloguing evidence, among other support. Agencies received a share of the forfeiture proceeds based on the percentage of work hours that they contributed to the investigation. TEOAF increased one agency’s share because it provided a Special Assistant United States Attorney to handle the forfeiture, which was considered unique and indispensable assistance. The attorney had expertise in asset forfeiture cases, negotiated the terms of the forfeiture payments to help ensure recovery of the entire forfeited amount, and provided a range of other legal support. DHS components have designed controls to help ensure that equitable sharing packages contain required information and are reviewed and approved by appropriate component authorities to help ensure compliance with Treasury’s guidance. According to the guidance, state and local law enforcement agencies are to submit an application for equitable sharing in which they outline the asset that was seized, the number of work hours they expended on the investigation, and other contributions. One control to help ensure DHS components comply with guidance is the requirement to prepare and submit an equitable sharing decision form when making determinations. The decision form is to include estimated work hours, recommended and approved sharing percentages, and a narrative section for providing an overview of the case and describing specific agency contributions. Another control involves required signatures on the form documenting submission by field office officials and approval by component headquarters officials. Moreover, all three DHS components have designed multiple levels of review for equitable sharing packages. 
First, component field office officials—including the agents leading the investigation, asset forfeiture specialists, and supervisory agents or sector chiefs, among others—are to review equitable sharing applications and provide recommended sharing percentages on the equitable sharing decision form. HSI officials stated that HSI has an asset forfeiture specialist in each special agent in charge field office, and Border Patrol officials stated that Border Patrol has an asset forfeiture officer in each sector. According to USSS’s Asset Forfeiture Liaison, USSS does not have asset forfeiture specialists in its field offices, with the exception of New York, and instead has designated staff that review and process equitable sharing requests as collateral duties. Officials from all nine of the DHS component field offices with whom we spoke said that they review applications to ensure that they are complete and accurate. These officials noted that they work closely with state and local agencies requesting equitable sharing and as a result are knowledgeable about their work hours and other contributions. Officials from all nine of the state and local agencies we interviewed said that component field offices can contact them if they need clarification or additional information regarding their participation. After field office review, the equitable sharing package—which is to include the decision form, state and local equitable sharing applications, and other relevant documents, such as the forfeiture order—is submitted to component headquarters for approval. Each component has a full-time asset forfeiture liaison who is responsible for reviewing packages and overseeing all interactions with TEOAF on forfeitures and equitable sharing. The review process at the headquarters level varies across components. 
HSI headquarters officials stated that within the Asset Forfeiture Unit, an asset forfeiture specialist, program manager, section chief, and unit chief review all equitable sharing packages. According to Border Patrol officials, the Assistant Chief of the Asset Forfeiture Program, who is the asset forfeiture liaison to TEOAF, reviews all packages. USSS officials said that the asset forfeiture liaison and an administrative staff person review packages. Component headquarters officials told us that they review packages to ensure that they are complete, include the necessary forms and information, and reflect the degree of agency participation in the seizure. Components then are to submit the packages to TEOAF for payment authorization. DHS asset forfeiture liaisons are to be available to address any questions that TEOAF may have regarding their packages or obtain additional information about sharing determinations from the field offices if needed. Figure 6 shows the steps involved in making equitable sharing determinations. HSI and Border Patrol have also issued additional guidance to their field offices to help ensure compliance with Treasury’s guidance for making equitable sharing determinations. Specifically, HSI issued a memorandum in January 2013 to remind all offices of the proper procedures to follow when dealing with equitable sharing requests. HSI officials stated that the memorandum was intended to underscore and clarify Treasury’s guidance as it pertained to HSI cases. For example, it outlines time frames for submitting equitable sharing packages to the Asset Forfeiture Unit in HSI headquarters. It also reiterates Treasury’s guidance on how equitable sharing determinations should be made based on investigative work hours and qualitative factors, if applicable. In addition, the memorandum provides additional guidance on what should be considered unique and indispensable assistance.
Specifically, it lists an additional example of such assistance—an undercover officer with a special skill or language ability not readily available elsewhere. Border Patrol issued additional guidance in 2006 that outlines the equitable sharing process in detail. The guidance lists the steps that the requesting agency, sector asset forfeiture office, headquarters asset forfeiture office, and TEOAF perform in the process. It also reiterates Treasury’s guidance on factors to consider when making equitable sharing determinations, among other things. USSS has not issued additional guidance because, according to the component’s asset forfeiture liaison, field office staff use Treasury’s guidance and can contact headquarters if they need assistance in making sharing determinations. HSI and USSS have also provided training to headquarters and field office staff responsible for equitable sharing. HSI headquarters officials said that they began providing training to field office staff in August 2012 and asset forfeiture officials from all 26 field offices have received training. Training sessions addressed the process for equitable sharing, factors to consider when making equitable sharing determinations, and what to include in packages submitted to HSI headquarters, among other things. USSS officials stated that since fiscal year 2011, USSS has provided 37 field office trainings for operational personnel—including a nationwide asset forfeiture training in February 2013—and has conducted 19 training sessions and seminars for field office senior management personnel. According to DHS component and TEOAF officials, both HSI and USSS training were funded by and provided in coordination with TEOAF. Officials from all six HSI and USSS field offices we contacted stated that the training provided useful information on equitable sharing requirements and processes. 
According to Border Patrol headquarters officials, Border Patrol has not provided training to its sectors because of the limited number of equitable sharing requests they process. According to TEOAF officials, equitable sharing determinations should be clearly supported by the information in the package. TEOAF requires high-value packages (forfeitures of $1 million or more) to include DHS component work hours and justifications for material deviations from work hour calculated percentages due to qualitative factors. This is not, however, required of low-value packages (forfeitures under $1 million). Accordingly, 31 of the 40 low-value equitable sharing packages that we reviewed were missing key information to support the basis for final sharing percentages. Specifically, these 31 packages did not include one or more of the following: DHS component work hours, support for how qualitative factors were applied to make determinations, and the rationale for changes made to sharing percentages recommended by the field offices. In contrast, the equitable sharing determinations in the 5 high-value packages that we reviewed were fully supported by the information in the package. Component work hours: Treasury’s guidance states that equitable sharing determinations are normally determined by comparing the number of investigative hours expended by state, local, and other requesting agencies and the lead component through the completion of the forfeiture. According to TEOAF officials, work hours should be the primary indicator of agency participation in a case. Example of How Work Hours Are Used to Determine Equitable Sharing: In an HSI-led drug smuggling case, a state police department assisted an HSI field office in executing search warrants and conducting background checks and surveillance on drug-trafficking suspects, among other things, resulting in a seizure and forfeiture of $95,138 in currency.
According to documents in the equitable sharing determination package, the state police expended an estimated 671 work hours on the case and HSI expended 670, resulting in a 50 percent share of forfeiture proceeds for the state police. State and local agency work hours were included in all 40 low-value equitable sharing packages that we reviewed. HSI included its own work hours in all 26 low-value HSI packages that we reviewed, but USSS and Border Patrol did not include this information in any of their packages. Specifically, none of the 10 USSS and 4 Border Patrol packages that we reviewed included lead component work hours or the total work hours contributed by all agencies involved in the case. USSS’s asset forfeiture liaison stated that this information was not included because USSS, as the lead component, does not receive equitable sharing funds and the information is not explicitly required on the equitable sharing decision form. This official said, however, that he has directed field offices to include component work hours in equitable sharing decision forms starting in fiscal year 2014. We reviewed an additional USSS equitable sharing package that was approved by headquarters in fiscal year 2014 and found that the USSS field office included its work hours. Border Patrol’s Asset Forfeiture Liaison said that because most of the component’s seizures occur at checkpoints or while agents are on patrols—unlike HSI and USSS seizures, which result primarily from investigations—work hours can be difficult to measure and are not always used to make equitable sharing determinations. However, state and local agency work hours were included in all 4 Border Patrol equitable sharing packages that we reviewed. Border Patrol’s Asset Forfeiture Liaison stated that it would be useful to include Border Patrol work hours when possible if they are measured consistently across all participating agencies.
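The baseline work-hour calculation underlying these determinations is a proportional split of total investigative hours. The sketch below illustrates only that baseline rule, before any qualitative adjustments; the hour counts are those from the HSI drug smuggling case above.

```python
def baseline_shares(work_hours):
    """Baseline equitable sharing percentages: each agency's share of
    total investigative work hours expended on the case. Qualitative
    adjustments, if any, are applied to these percentages afterward."""
    total = sum(work_hours.values())
    return {agency: hours / total for agency, hours in work_hours.items()}

# HSI-led drug smuggling case from the report: the state police expended
# an estimated 671 work hours and HSI expended 670, yielding roughly a
# 50 percent share of forfeiture proceeds for the state police.
shares = baseline_shares({"state police": 671, "HSI": 670})
print(f"{shares['state police']:.1%}")  # prints 50.0%
```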
In the absence of documented component work hours, we could not determine what proportion of total work hours components contributed to the case and how deciding authorities verified whether equitable sharing determinations were calculated in accordance with Treasury’s guidance. Application of qualitative factors: None of the 31 low-value packages we reviewed that lacked full support for sharing determinations contained clear documentation of how qualitative factors were used to adjust sharing determinations. Treasury’s guidance directs deciding officials to consider additional factors when the work hours do not adequately reflect the degree of agency participation in the investigation. These factors could include, for example, originating the information leading to the seizure or providing unique and indispensable assistance. The decision forms in the packages that we reviewed generally contained narratives that summarized what each agency contributed to the case, which included interviewing suspects, executing search warrants, conducting surveillance, providing a drug-sniffing canine, and a range of other investigative support. However, the forms did not identify which specific agency contributions were used to adjust percentages and what adjustments were made. For example, in 1 HSI equitable sharing package we reviewed, two police departments contributed the same number of work hours, but one received a 10 percent larger share than the other, resulting in a difference of about $48,000 in forfeiture proceeds. Both departments indicated on their application forms that they provided unique or indispensable assistance and originated the information leading to the seizure.
HSI’s equitable sharing decision form for this package includes a summary of how each police department participated in the investigation, such as assisting in undercover operations, interviews, and search and surveillance, but does not explicitly state which specific contribution or contributions were used to adjust percentages or why one agency’s contribution was valued more than the other’s. HSI’s January 2013 guidance memorandum states that when work hours do not adequately reflect the degree of agency participation, it is critical that the narratives contained in the documents submitted to the Asset Forfeiture Unit in headquarters specifically detail the participation of all agencies involved. This helps to ensure that agency contributions outside of work hours are documented in the equitable sharing packages, but does not require field offices to identify which qualitative factors or contributions were used to adjust percentages and how these factors were applied. Moreover, in USSS and Border Patrol equitable sharing packages where component work hours were not documented, it was not possible to determine if adjustments were made to sharing percentages based on qualitative factors. Border Patrol’s Asset Forfeiture Liaison stated that Border Patrol could document this information if required. USSS’s Asset Forfeiture Liaison said that it would be administratively burdensome to specify which, if any, qualitative factors were used to adjust percentages and how they were applied because USSS primarily relies on field office staff to process equitable sharing requests as collateral duties, in addition to their other responsibilities. However, agency contributions are generally already included in the decision form narratives and specifying which contribution was used to adjust percentages could be done by including a short sentence or annotation. 
For example, in a USSS high-value package that we reviewed, TEOAF documented that a state agency’s percentages were adjusted upward because the agency conducted all key interviews in the investigation. Components could document the same kind of information for low-value packages with minimal additional narrative. Documenting the rationale for making adjustments to sharing percentages based on qualitative factors could improve transparency for approving authorities and officials overseeing equitable sharing regarding how and why adjustments are made when work hours alone do not fully reflect the degree of agency participation in the investigation. Such documentation could also help these officials better assess the extent to which qualitative factors were applied appropriately and consistently in determinations. Component headquarters changes to sharing percentages: USSS and HSI headquarters made changes to sharing percentages in 8 of the 31 low-value packages we reviewed that did not include full support for determinations, and in all 8 of these instances, the reasons for the changes were not documented. USSS and HSI officials noted that they contact field office staff to discuss any changes, but that the reasons for the changes are generally not included in the packages submitted to TEOAF. USSS headquarters made changes to sharing percentages recommended by field offices in 6 of the 10 USSS packages that we reviewed. For example, in 1 package, USSS headquarters decreased a state agency’s share from 60 to 40 percent—resulting in a difference of about $28,400—but the decision form did not note why the change was made. USSS’s Asset Forfeiture Liaison stated that these changes are primarily due to headquarters taking into account additional work that USSS agents perform to identify victims of financial crimes after the field offices submit the decision forms to headquarters. 
The liaison said that these additional work hours and resources are not reflected in the field office’s recommended sharing percentages or on the decision forms. According to the USSS liaison, headquarters has only one other official who reviews and approves equitable sharing determinations, and documenting the rationale for changes to sharing percentages would require additional work. However, he noted that including a short annotation would be feasible. HSI officials stated that the reasons for headquarters’ changes to field office sharing percentages are generally documented in HSI review forms and e-mail correspondence with the field and provided examples of such documentation. However, this support is not included in the packages that HSI submits to TEOAF for review and payment authorization. HSI officials said that they could include this information if requested by TEOAF. Documenting the rationale for changes to sharing percentages recommended by the field—by, for example, including a short sentence or annotation—could help enhance transparency regarding why changes were made and how final sharing percentages were determined. Standards for Internal Control in the Federal Government states that controls are to provide reasonable assurance for compliance with laws and regulations and help ensure that management’s directives are carried out, among other things. To achieve these objectives, it states that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire process or life cycle of a transaction or event from the initiation and authorization through its final classification in summary records. In addition, internal control standards state that all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination.
In the absence of consistently documenting component work hours, how qualitative factors are applied to adjust sharing percentages, and the reasons for headquarters’ changes to percentages, it is unclear how equitable sharing deciding authorities evaluated the nature and value of the contributions of each of the agencies involved in the investigation. TEOAF officials said that clearly documenting the basis for equitable sharing determinations in low-value packages would be helpful for approving officials. This is important because, according to TEOAF officials, equitable sharing determinations have grown more complex in recent years as a result of the increase in large investigations that involve multiple agencies. These officials said that it would be feasible for TEOAF—as the manager of the TFF and the equitable sharing program—to issue a memorandum to DHS components to include additional information in equitable sharing packages and work with components as needed to implement it. Establishing a mechanism to ensure that the basis for low-value equitable sharing determinations is fully documented by all DHS components responsible for making determinations could enhance the transparency of decision making and help DHS components and TEOAF better ensure that equitable sharing decisions are made in compliance with Treasury’s guidance. Treasury has established general guidance on the qualitative factors to consider if work hours do not adequately reflect the degree of agency participation in the investigation. The guidance includes three examples of these factors—whether the agency originated all of the information leading to the seizure, provided unique and indispensable assistance, or could have achieved forfeiture under state law—and a short narrative describing each of them.
For example, the guidance states that unique and indispensable assistance entails an agency providing support only it can provide, such as seizing assets in a jurisdiction hundreds of miles from where the investigation is being conducted or providing an informant who has access to documents that are essential to securing a conviction. The guidance does not provide specific information on how to apply these examples to adjust sharing percentages. Treasury is in the process of revising its guidance, and as of January 2014, the draft guidance contained a more abbreviated discussion of qualitative factors, with two examples and less detail regarding the contributions to consider. For example, the draft guidance does not provide examples of contributions that could be considered unique or indispensable assistance. TEOAF officials stated that the revised guidance is being finalized and indicated that they do not plan to include more information on qualitative factors in the guidance. The office expects to issue the guidance in 2014, but did not have a more specific time frame, in part because the guidance was undergoing an interagency review. HSI’s January 2013 memorandum includes additional guidance on qualitative factors, such as clarifying what types of activities are considered unique and indispensable assistance. USSS and Border Patrol have not issued similar guidance. TEOAF and DHS component officials stated that guidance on qualitative factors is general because equitable sharing determinations are made on a case-by-case basis and the facts and circumstances of each case must be considered in totality when making adjustments to sharing percentages. Accordingly, DHS component field office and headquarters officials said that they use their judgment and experience when determining if and how qualitative factors should be applied in making equitable sharing decisions. 
However, officials from six of the nine field offices we interviewed across all DHS components that conduct equitable sharing stated that additional guidance on qualitative factors would be useful. Specifically, these officials said that additional examples of factors, such as what constitutes extraordinary expenses; clarification of what is considered unique and indispensable assistance; guidance on how to apply factors, including more information on how to adjust percentages based on the type and significance of agency contributions; or illustrations of how factors were applied in real-world cases would be helpful for making equitable sharing determinations. Officials from the remaining three field offices said that additional guidance was not needed because sharing recommendations are made in consultation with requesting agencies or they can contact component headquarters to discuss any questions about qualitative factors. Nonetheless, headquarters officials from all three DHS components that conduct equitable sharing stated that additional guidance could help ensure a more consistent understanding of these factors among headquarters and field offices. For example, HSI officials said that adjustments related to qualitative factors are one of the reasons for headquarters’ changes to sharing percentages recommended by field offices. These officials noted that any additional guidance should continue to allow for determinations to be based on the facts and circumstances of each case. Officials from all nine state and local law enforcement agencies with whom we spoke were generally satisfied with the equitable shares that they received. These officials noted that they typically do not have visibility over the equitable sharing process after they have submitted their applications. 
In addition, the equitable sharing determinations we reviewed indicate that state and local agencies may not have a clear understanding of how some qualitative factors are defined and considered. For example, in 12 of the 15 low-value packages we reviewed where an agency indicated on its application form that it incurred extraordinary expenses during the investigation, the expenses were not clearly described in the narrative. In 1 equitable sharing package we reviewed for a currency-smuggling case, a police department checked that it had incurred extraordinary expenses and stated that its officer had conducted surveillance, assisted in a search of a suspect’s house, and participated in the interview of two individuals who were detained as part of the investigation. However, we could not determine how these activities constituted extraordinary expenses. Treasury’s guidance does not include incurring extraordinary expenses as an example of a qualitative factor, despite this factor being included in the equitable sharing application. In addition, the application includes two other factors to consider when assessing agency contributions that are not included in Treasury’s guidance. Providing guidance on qualitative factors that are listed on the application form, including what they entail and how to apply them, could help officials from state and local agencies, as well as DHS components, have a better and more consistent understanding of these factors. Standards for Internal Control in the Federal Government calls for significant events to be clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended. While we recognize the subjective nature of evaluating agency contributions based on the facts and circumstances of each case, additional guidance on qualitative factors could help ensure greater consistency in how these factors are applied across cases. 
Such guidance could also help DHS components better assess agency contributions when making equitable sharing determinations. TEOAF performs an administrative review of low-value packages to ensure that the required applications and decision forms are included, among other things. TEOAF officials said that because DHS components have decision-making authority for low-value determinations, they primarily rely on the components to ensure that these packages comply with equitable sharing requirements. The low-value packages that we reviewed did not always comply with certain requirements in Treasury’s guidance. However, HSI and USSS officials have taken steps to address the deficiencies we found in our analysis of these packages. Specifically, the guidance requires that state and local law enforcement agencies submit equitable sharing applications within 60 days after the seizure, and if this deadline is not met, agencies need to submit a written request stating the reasons for the late submission in order for components to waive the requirement. TEOAF officials said that this requirement is in place to ensure that components receive all sharing requests in a timely manner and are aware of all agency contributions before determining equitable shares. HSI and USSS headquarters officials stated that if a state or local agency submitted a request for a waiver, they would include it in the package provided to TEOAF. However, requests for waivers were not included in 8 of the 9 HSI and USSS packages we reviewed where an agency did not meet the 60-day deadline. HSI headquarters officials stated that they began enforcing the waiver requirement in January 2013. The officials said that this requirement may take some time to fully implement. Specifically, because of the potential lengthy forfeiture process, equitable sharing determinations processed after January 2013 may be from applications that were submitted to field offices over a year earlier. 
USSS’s Asset Forfeiture Liaison stated that in fiscal year 2013, USSS’s asset forfeiture system was programmed to automatically notify field offices when equitable sharing applications are due to meet the 60-day requirement and provided an example of such a notice. In addition, Treasury’s guidance specifies that final determination of sharing percentages cannot be made until after assets have been forfeited. TEOAF officials said that this requirement is in place so that state and local agencies do not expect equitable shares before forfeitures are finalized, because in some cases, funds may not be available for sharing. However, USSS headquarters officials approved sharing determinations in 5 of the 10 low-value USSS equitable sharing packages we reviewed before assets were forfeited. These 5 packages were all approved by headquarters in 2011. USSS’s Asset Forfeiture Liaison said that with the assignment of a new asset forfeiture specialist in 2012, USSS changed its review process and no longer approves determinations before forfeiture. The 5 packages we reviewed that USSS headquarters approved after 2011 complied with this requirement. We also reviewed 3 additional packages that USSS approved in September 2013 and these complied as well. DHS components and TEOAF coordinate in a variety of ways to oversee the equitable sharing program. For example, DHS and TEOAF have established roles and responsibilities for the processing, review, and approval of equitable sharing determinations, consistent with leading practices on interagency coordination. In addition, each DHS component has an asset forfeiture liaison who is responsible for overseeing interactions with TEOAF on forfeitures and equitable sharing. These liaisons are the primary points of contact between TEOAF and DHS field offices and help facilitate the processing of equitable sharing determinations. 
Further, according to TEOAF and DHS component officials, TEOAF holds meetings with component asset forfeiture liaisons once every 2 weeks to discuss TFF issues, including equitable sharing and any major forfeiture cases that are expected, among other things. HSI also coordinated with TEOAF to provide equitable sharing training to field office staff. For example, TEOAF officials stated that TEOAF staff worked with HSI to develop an agenda of the areas that needed to be covered during the training sessions. Officials from both HSI and TEOAF provided presentations during the training. USSS and TEOAF officials stated that they coordinated to provide training to USSS field office staff as well. In addition, TEOAF has collaborated with DHS components to develop equitable sharing guidance. Specifically, as part of the ongoing development of updated Treasury guidance, TEOAF provided a draft of the guidance to DHS components for their review and, according to TEOAF and DHS officials, held meetings with components to discuss revisions. Border Patrol also collaborated with TEOAF to develop additional equitable sharing guidance for its sectors, according to Border Patrol and TEOAF officials. Such actions are consistent with leading practices on interagency coordination that call for agencies to address the compatibility of standards, policies, and procedures that will be used in the collaborative effort through effective communication, among other things. However, HSI did not inform TEOAF that it was planning on issuing additional equitable sharing guidance in January 2013 or provide a draft of the guidance for TEOAF to review before issuance. 
According to HSI officials, HSI did not take these steps because HSI’s additional guidance was based on Treasury’s guidance and prior discussions in which TEOAF directed HSI to address concerns about HSI allocating large shares of forfeiture proceeds to state and local agencies that were disproportionate to their contributions in investigations and did not retain sufficient revenues to support TFF expenses. HSI officials stated that this includes overhead-related expenses that HSI incurs, such as administrative and storage costs. The officials said that the 30 percent minimum was based on an analysis of HSI investigative expenses and is intended to help ensure that HSI’s expenses do not exceed its forfeiture revenues. The guidance is also intended to help ensure that state and local agencies clearly document their contributions and include required information in the packages they submit. The authorization to share federal forfeiture proceeds with participating state and local law enforcement agencies is an important component of federal asset forfeiture activities and critical in fostering enhanced cooperation with these agencies. In fiscal year 2013, DHS components obligated about $355 million in equitable sharing payments to state and local agencies—the highest annual amount over the past decade. In addition, equitable sharing determinations have grown more complex in recent years because of the increase in large investigations that involve multiple agencies, according to TEOAF officials. Such developments underscore the need for controls to help ensure compliance with established equitable sharing guidance and federal statutes. DHS components have designed controls to help ensure compliance with Treasury’s guidance when making equitable sharing determinations. However, there are gaps in the documentation of key information that serves as the basis for making sharing decisions. 
Without a mechanism to ensure documentation of the number of work hours expended by lead components, how qualitative factors were used to adjust sharing percentages, and the reasons for headquarters’ changes to equitable sharing percentages in low-value packages, it is unclear how equitable sharing deciding authorities could fully evaluate the nature and value of the contributions of each of the agencies involved in an investigation. Further, additional guidance on the qualitative factors to consider when making equitable sharing determinations could help better ensure that they are consistently applied over time and across cases. As the manager of the TFF and equitable sharing program, TEOAF is best positioned to help ensure that DHS components consistently comply with Treasury’s equitable sharing guidance. To help improve management controls over the equitable sharing program, we recommend that the Director of TEOAF take the following two actions: Establish a mechanism to ensure that the basis for DHS’s low-value equitable sharing determinations—including component work hours, how qualitative factors are applied to adjust percentages, and the rationale for component headquarters’ changes to percentages—is documented in equitable sharing packages. Develop additional guidance on qualitative factors to be used when making adjustments to equitable sharing percentages. We provided a draft of this report to Treasury and DHS for their review and comment. Treasury provided written comments, which are reprinted in appendix II. Treasury and DHS also provided technical comments, which we incorporated in this report as appropriate. Treasury concurred with both recommendations in this report in an e-mail provided on March 20, 2014. In its written comments, the department outlined steps that it plans to take to implement them. 
Specifically, Treasury stated that it plans to implement changes in its equitable sharing forms, policy guidance, and processes to address our recommendation that the basis for DHS’s low-value equitable sharing determinations is documented in equitable sharing packages. For example, Treasury noted that it is to require all equitable sharing packages to include component work hours and emphasize that upward adjustments to a local law enforcement agency’s sharing percentage must include a coherent and compelling explanation of the unique or indispensable assistance provided. Treasury also plans to redesign the equitable sharing decision form to accommodate these and other changes. In addition, Treasury plans to discuss changes made to the equitable sharing program, including those related to qualitative factors, with components over the next 6 months to address our recommendation that Treasury develop additional guidance on qualitative factors to be used when making adjustments to equitable sharing percentages. We are sending copies of this report to the Secretary of the Treasury, the Secretary of Homeland Security, selected congressional committees, and other interested parties. In addition, this report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. This report addresses the following objectives: 1. What have been Department of Homeland Security (DHS) components’ revenues contributed to and obligations from the Treasury Forfeiture Fund (TFF) from fiscal years 2003 through 2013? 2. To what extent have DHS components designed controls to help ensure compliance with the Department of the Treasury’s (Treasury) guidance when implementing the equitable sharing program? 3. 
To what extent do DHS components coordinate with Treasury in overseeing the equitable sharing program? To determine TFF revenues from and obligations by DHS components from fiscal year 2003—the year in which DHS began operations—through fiscal year 2013, we analyzed Treasury’s reported data on revenues and obligations by fiscal year, by the four participating DHS members of the TFF—U.S. Immigration and Customs Enforcement (ICE), the U.S. Secret Service (USSS), U.S. Customs and Border Protection (CBP), and the U.S. Coast Guard (USCG)—and for the fund as a whole. We used information on revenues and obligations contained in CBP’s National Finance Center’s financial accounting systems. We interviewed officials from the four DHS participating components, DHS’s Office of the Chief Financial Officer, and the Treasury Executive Office for Asset Forfeiture (TEOAF) to discuss trends and variations in the revenues and obligations over this 11-year period. We also reviewed information on unobligated and excess unobligated balances contained in the TFF’s financial accounting systems. We interviewed TEOAF officials who are responsible for oversight of the TFF regarding processes for carrying over funds at the end of the fiscal year. Further, we analyzed Treasury’s reported data on the TFF’s excess unobligated balances from fiscal years 2003 through 2013, and interviewed DHS officials from the four participating components, DHS’s Office of the Chief Financial Officer, and TEOAF officials about how excess unobligated balances have been used since 2003. To determine the extent to which DHS components that conduct equitable sharing—ICE, USSS, and CBP—have designed controls to help ensure compliance with Treasury’s guidance when implementing the equitable sharing program, we analyzed federal statutes and Treasury guidance on making equitable sharing determinations and DHS controls designed to help ensure compliance with guidance. 
We compared these controls with the overall framework for establishing and maintaining internal control outlined in Standards for Internal Control in the Federal Government. We selected the 40 packages from equitable sharing payments made to state and local agencies from October 1, 2012, to June 30, 2013, to obtain the most recent payments given our time frames. We selected this sample based on payment amounts and to reflect a range of DHS components and field offices that conduct equitable sharing. We reviewed 26 ICE, 10 USSS, and 4 CBP low-value packages from a range of each component’s field offices across the nation. Within our sample time frame, ICE accounted for 1,902 payments to state and local agencies, USSS accounted for 347, and CBP accounted for 4. Because multiple payments can result from 1 equitable sharing package, we could not determine the number of packages processed during a given time period. We selected the 5 packages from a list of those approved by TEOAF from October 1, 2012, to June 30, 2013, based on such factors as amounts forfeited and the number of seizures involved. Of the packages approved during this time period, we reviewed 3 from ICE and 2 from USSS investigations. The results of our analysis of equitable sharing packages are not generalizable to the universe of packages paid or approved within the same time frames. However, they provided information on the extent to which the selected packages adhered to guidance and included documentation of controls, among other things. We also interviewed officials from TEOAF; ICE, USSS, and CBP in headquarters; and selected field offices of these components in California, New York, and Texas to assess controls established to help ensure compliance with guidance. These three states received the highest amounts of equitable sharing payments on average from fiscal years 2010 through 2012 and composed about 50 percent of total payments nationwide. 
We selected ICE, USSS, and CBP field offices to interview in each state to include those that processed high amounts and large numbers of equitable sharing payments. In addition, we interviewed officials from three state or local law enforcement agencies in each of these states to obtain their perspectives on the equitable sharing process. We selected these agencies based on the amount and number of payments they received in fiscal year 2012 and to cover a range of government agencies (e.g., state, county, or city). While the results of these interviews are not generalizable to all DHS component field offices and agencies, they provided valuable information and perspectives on the equitable sharing determination process and controls. To determine the extent to which DHS components coordinate with Treasury in overseeing the equitable sharing program, we analyzed guidance and other documents. For example, as part of our review of selected equitable sharing determination packages, we assessed documentation of how DHS components and TEOAF coordinate and communicate when making equitable sharing decisions. In addition, we interviewed officials from ICE, USSS, and CBP in headquarters; selected field offices of these components as discussed above; and TEOAF to obtain information about, among other things, the extent to which DHS and TEOAF coordinate on overseeing the equitable sharing program, including making sharing determinations and developing guidance. We compared DHS and TEOAF coordination mechanisms with leading practices on interagency collaboration. 
To assess the reliability of data for revenues and obligations and excess unobligated balances for the first objective and equitable sharing payments used to select our samples of packages to review for the second and third objectives, we reviewed relevant documentation, such as annual financial plans and standard operating procedures related to reporting TFF data in the fund’s financial accounting system, which is maintained by CBP. We also conducted interviews with CBP officials responsible for managing data, as well as Treasury officials who review and work with the data to understand how CBP and Treasury collect, categorize, and tabulate the information and the actions they take to ensure its consistency, accuracy, and completeness. We determined information on the financial accounting system provided by CBP to be sufficiently reliable for presenting Treasury’s reported data on total revenues, obligations such as equitable sharing payment data, and excess unobligated balances by DHS components and as a proportion of the TFF for fiscal years 2003 through 2013. We conducted this performance audit from April 2013 through March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Eric Erdman, Assistant Director; Sylvia Bascope; Kelly Krinn; and Johanna Wong made key contributions to this report. Also contributing to this report were Phyllis Anderson, Christine Broderick, Susan Hsu, Eric Hauswirth, Cynthia Saunders, and Janet Temko. 
Every year, DHS components seize millions of dollars in assets during investigations and other activities and contribute forfeited proceeds to the Treasury Forfeiture Fund. Treasury manages the fund, which held about $1.7 billion in assets in fiscal year 2013. DHS components use proceeds primarily to cover forfeiture activity costs, which include sharing proceeds with state and local agencies that participate in DHS investigations through Treasury's equitable sharing program. GAO was asked to review the management of the fund. This report addresses (1) DHS revenues contributed to and obligations from the fund and (2) the extent to which DHS components have designed controls to help ensure compliance with Treasury's guidance when implementing the equitable sharing program. GAO analyzed financial data from fiscal years 2003 through 2013 on the forfeiture fund; Treasury's equitable sharing guidance; and a sample of 40 DHS equitable sharing packages, selected based on payment amounts and other factors. Sample results are not generalizable but provided information on DHS's compliance with guidance. GAO also interviewed DHS and Treasury officials. From fiscal years 2003 through 2013, Department of Homeland Security (DHS) components that participate in the Treasury Forfeiture Fund—U.S. Immigration and Customs Enforcement (ICE), the U.S. Secret Service (USSS), U.S. Customs and Border Protection (CBP), and the U.S. Coast Guard (USCG)—contributed approximately $3.6 billion in revenues to the fund and obligated about $2.6 billion from the fund for forfeiture-related activities. These obligations included, among other things, approximately $1.2 billion that DHS components shared with state, local, federal, and foreign law enforcement agencies that participated in forfeiture efforts. 
Also, during this period, DHS components used about $348 million from the fund to support various law enforcement activities and projects, such as the construction of Border Patrol facilities along the southwest border. DHS components have designed controls to help ensure compliance with the Department of the Treasury's (Treasury) equitable sharing guidance, but controls could be enhanced through additional documentation and guidance. Documentation: Treasury's guidance directs components to base equitable sharing determinations on the work hours that all participating agencies contributed to an investigation and then consider qualitative factors regarding agency contributions, such as originating the information that led to the seizure, to adjust percentages. However, 31 of the 40 DHS component equitable sharing packages—which contain sharing determinations and other documents—that GAO reviewed did not include key information, such as component work hours expended on a case and documentation of how qualitative factors were applied to make determinations, to support the basis for final sharing percentages, consistent with federal internal control standards. For example, in 1 package GAO reviewed, two police departments contributed the same number of work hours, but one received a 10 percent larger share than the other, resulting in a difference of about $48,000 in forfeiture proceeds. However, the package did not clearly document how qualitative factors were applied to adjust the percentages. Fully documenting the basis for DHS equitable sharing determinations could help enhance the transparency of decision making and better position DHS components and Treasury to ensure that equitable sharing decisions are made in compliance with Treasury's guidance. 
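The arithmetic behind such a determination can be sketched briefly. The sketch below is illustrative only: the agency names and hour counts are hypothetical, and the roughly $480,000 in shareable proceeds is simply the base implied when a 10-percentage-point gap produces a $48,000 difference, as in the package described above.

```python
# Illustrative sketch only: equitable sharing determinations start from
# each agency's proportion of total work hours, which deciding authorities
# may then adjust for qualitative factors. All figures are hypothetical.

def baseline_shares(hours_by_agency):
    """Each agency's baseline share is its fraction of total work hours."""
    total = sum(hours_by_agency.values())
    return {agency: hrs / total for agency, hrs in hours_by_agency.items()}

hours = {"Police Dept A": 400, "Police Dept B": 400}
shares = baseline_shares(hours)  # equal hours -> both start at 0.50

# A qualitative adjustment of 10 percentage points between two otherwise
# equal contributors, applied to about $480,000 in shareable proceeds,
# yields roughly the $48,000 difference noted in the package GAO reviewed.
shareable_proceeds = 480_000
difference = 0.10 * shareable_proceeds  # about $48,000
```

If a package recorded both the hours and the reason for the 10-point adjustment, a reviewer could reproduce the final percentages; without them, the basis for the difference cannot be verified, which is the documentation gap GAO describes.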
Guidance: Treasury's guidance on qualitative factors includes three examples, but does not include three other factors listed on the equitable sharing application or provide specific information on how to apply factors to adjust sharing percentages. For example, incurring extraordinary expenses is listed as a factor on the application, but is not included as an example in the guidance. Providing guidance on qualitative factors that are listed on the application, including what they entail and how to apply them, could help participating agencies have a better and more consistent understanding of these factors. In addition, headquarters officials from the three DHS components that conduct equitable sharing stated that additional guidance could help ensure a more consistent understanding of these factors among headquarters and field offices. Developing additional guidance on qualitative factors could help ensure greater consistency in how these factors are applied across cases. GAO recommends that Treasury ensure that the basis for DHS equitable sharing determinations is fully documented and develop additional guidance on qualitative factors used to make determinations. Treasury concurred with both recommendations and outlined steps it plans to take to address them.
As agreed with your office, we limited our examination of targeting opportunities to our published work. To answer your questions about whether targeting can help the federal government downsize and to provide illustrative examples of a targeting strategy for deficit reduction, we updated information from our March 1995 report on the budgetary implications of our work. At this date, the Congress is considering several of the options described in this report. Notwithstanding pending congressional actions, we included the options because they illustrate how targeting can fit in an effective deficit reduction strategy. That report presents a deficit reduction framework consisting of three broad themes. The first focuses on reassessing the objectives of federal programs and services. Our premise is that periodically reconsidering a program’s original purpose, the conditions under which it continues to operate, and its cost-effectiveness is appropriate. The second focuses on improved targeting of federal programs and services to beneficiaries. This theme concerns how efficiently federal programs and services reach their intended recipients. The third focuses on improving the efficiency of program and service delivery. This theme suggests that focusing on the approach or delivery method can significantly reduce spending or increase collections. This letter expands on the second theme—improved targeting—as a strategy that allows for reducing the deficit while improving the design of federal government activities. We did this work in Washington, D.C., from August 1995 through October 1995. The following examples from our work illustrate potential opportunities to better target federal programs and services. Examples are detailed under one of four strategies for better targeting the intended beneficiaries: revise grant formulas, change eligibility rules, target fees and charges, and narrow tax preferences. 
At a time when federal domestic discretionary resources are constrained, better targeting of grant formulas offers a strategy to concentrate lower federal spending levels on states or localities with greater needs and lower capacity to absorb grant reductions. Through this process, federal funding reductions would fall more heavily on those communities with lesser relative needs and with greatest fiscal capacity to finance services from their own revenue base. We have issued many reports over the past decade showing that the allocation of federal grants to state and local governments is not well targeted. This work has been confirmed by many economic analyses from other sources. As a result, program recipients in areas with relatively lower needs and greater wealth received a higher level of services than those available in harder pressed areas, or wealthier areas were able to provide the same level of services at lower tax rates. Reductions in federal grants to states could be targeted by adjusting the allocation formulas to concentrate funding on those states with relatively lesser fiscal capacities and greater needs. Similarly, reductions in federal grants to local governments could be targeted by either concentrating cuts on areas with the strongest tax bases or by changing program eligibility to restrict grant funding in places with high fiscal capacity and/or few programmatic needs. For example, in 1992 we reported that Maternal and Child Health (MCH) Services block grants could be allocated more equitably. This program was designed to secure basic health care for low-income and moderate-income expectant mothers, their infants, and children with special health care needs. However, our report concluded that the allocation method for distributing MCH grants to states ran counter to the equity standards we developed. 
We found that while the number of children at risk, the costs of providing maternal and child health services, and the states’ ability to pay for these services varied from state to state, the current MCH allocation method did not consider these factors. As a result, Louisiana—with the second highest proportion of children at risk and average service costs—ranked 14th in per capita grant funding. Similarly, at the time of our analysis, Kansas and Illinois received nearly equal per capita grants, even though Illinois had about 28 percent higher health care costs. In practical terms, this meant that Illinois consumers had to spend more money than Kansans to buy the same MCH services. We concluded that a new MCH allocation method that strikes a balance between each state’s (1) need adjusted for costs and (2) ability to pay could substantially improve the overall equity of the MCH program. Federal spending for the MCH program reached a reported $687 million in fiscal year 1994. If overall funding for this program were reduced, such a new allocation method could help target the remaining MCH program funds more equitably. In another example, we found that the Medicaid program formula does not target most federal funds to states with weak tax bases and high concentrations of poor people. In 1990, we reported that while the program covered 75 percent of those below the poverty line nationwide, the coverage varied from 37 percent in Idaho to 111 percent in Michigan. We suggested that a formula using better indicators of states’ financing capacities and poverty rates coupled with a reduced minimum federal share would more equitably distribute the burden state taxpayers face in financing Medicaid benefits for low-income residents in their respective states. Federal spending for Medicaid in fiscal year 1994 reached a reported $82 billion, and the Office of Management and Budget (OMB) projects spending to reach $136.5 billion by fiscal year 2000. 
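The kind of allocation method GAO recommends, balancing cost-adjusted need against ability to pay, can be sketched as a simple weighting formula. This is not the statutory MCH formula; the state names, index values, and weighting scheme below are hypothetical, with the 1.28 cost index echoing the Illinois example and the $687 million total taken from the reported fiscal year 1994 figure.

```python
# Illustrative allocation sketch: weight each state by cost-adjusted need
# (children at risk times a service-cost index) divided by fiscal capacity,
# then distribute the appropriation in proportion to the weights.
# This is NOT the statutory MCH formula; all state data are hypothetical.

def allocation(states, total_funds):
    """Return grant amounts proportional to (need * cost) / capacity."""
    raw = {
        name: s["children_at_risk"] * s["cost_index"] / s["capacity_index"]
        for name, s in states.items()
    }
    total_weight = sum(raw.values())
    return {name: w / total_weight * total_funds for name, w in raw.items()}

states = {
    # cost_index: relative price of services; capacity_index: ability to pay
    # (values above 1.0 mean above-average costs or capacity)
    "State A": {"children_at_risk": 100_000, "cost_index": 1.28, "capacity_index": 1.2},
    "State B": {"children_at_risk": 100_000, "cost_index": 1.00, "capacity_index": 0.8},
}

grants = allocation(states, 687_000_000)  # reported FY 1994 MCH funding
```

With equal numbers of children at risk, the higher-cost state's need is scaled up, but under these assumed indices the lower-capacity state receives the larger grant because it is less able to finance services from its own revenue base.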
Should the Congress act to reduce federal Medicaid spending, a revised grant allocation system could help target the reduced funding more equitably. Along these lines, a block grant that the Congressional Budget Office (CBO) estimated would reduce federal Medicaid spending by $163 billion over the next 7 years was included in the recently passed Balanced Budget Act of 1995. Under this proposal, future Medicaid costs would be reduced and equity in the distribution of the remaining funding would be improved because the allocation formula uses new factors that more precisely measure differences in states' fiscal capacity and poverty levels. In another example, Title I grants to local educational agencies (LEAs), which fund supplementary education services for low achievers in poverty areas, could be modified to improve targeting among counties. Under these grants, formerly known as Chapter 1 grants, school districts have broad discretionary powers to determine how resources are distributed to schools, specifying the grades served and the type and extent of services, and defining which students are low achievers. In 1992, we reported that these factors resulted in considerable variation among the students receiving Title I LEA services. For example, in some school districts Title I LEA funds served only children scoring below the 20th percentile on standardized tests. In other districts, program funds served some children scoring above the national average (the 50th percentile). We found that the legislatively mandated formula for Title I LEA grants did not (1) accurately reflect the distribution of poverty-related low achievers, (2) provide extra assistance to areas with relatively less ability to fund remedial education services, or (3) adequately reflect differences in local costs of providing education services. 
We concluded that modifications to the Title I LEA allocation method could target more funds to counties with the largest numbers of poverty-related low achievers and those least able to finance remedial instruction. Federal funding for Title I grants to local educational agencies reached a reported $6.3 billion in fiscal year 1994. If the Congress decides to reduce funding for these grants, a revised formula could better target Title I LEA grants to those counties with the greatest overall need. The formula could be revised to rely on a more precise method of estimating the number of poverty-related low achievers, use an income adjustment factor to grant additional assistance to areas least capable of financing remedial instruction, and employ a uniform measure of educational services costs that recognizes differences within and between states. Changing eligibility rules to better target the intended beneficiaries of federal programs offers another strategy that can allow for deficit reduction by concentrating reductions on beneficiaries with little demonstrable need for government assistance. We have issued many reports in recent years showing that programs could be better targeted to more cost-effectively address those beneficiaries most in need. For example, we found that the Vaccines for Children (VFC) Program is not well targeted. This program was created to improve immunization rates for measles, mumps, rubella, and other childhood diseases by lowering the cost of immunization for all children. However, we found that most children had already been immunized because cost was not a significant barrier and that a disproportionate number of children in underserved areas were not immunized. We suggested that the Congress consider targeting the program. Services could be improved by directing VFC funds to children in those particular geographic areas where underimmunization has been a persistent problem. 
Fiscal year 1995 costs for the childhood vaccine program were estimated at about $450 million. Based on our examinations of the Market Promotion Program (MPP), we believe that the program’s eligibility rules could be tightened to provide support to small, generic, new-to-export companies, but not to large companies with substantial corporate advertising budgets. The MPP uses federal funds to subsidize efforts to expand export markets for U.S. agricultural products by financing such activities as advertising and consumer promotions. From 1986 through 1994, about one-third of MPP funds and those of its predecessor program (the Targeted Export Assistance (TEA) program) supported private for-profit companies’ brand-name promotions. These companies included many large for-profit businesses with substantial corporate advertising budgets, such as Sunkist Growers and E.J. Gallo Winery. In fiscal year 1995, MPP funding was reduced to $84.5 million from the budgeted level of $110 million. Eligibility rules could be revised to ensure that MPP funds are supporting additional promotional activities rather than simply replacing company or industry funds. While large firms receive MPP funds to increase exports of U.S. agricultural products, the resources otherwise available to such firms may indicate that they have no demonstrable need for government assistance. Our reviews of U.S. Department of Agriculture crop price supports show that the program’s eligibility rules allow producers to avoid payment limits and reduced program payments. These income support payments, known as deficiency payments, are the principal payments made to producers who participate in farm programs for wheat, feed grains, cotton, and rice. The payments are designed to protect producers’ incomes when crop prices fall below a legally established target price. The Food Security Act of 1985 limited the payments for those commodities to $50,000 per person annually. 
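The avoidance mechanism our reviews identified is simple arithmetic: because the $50,000 limit applies per "person," reorganizing one operation into several qualifying persons multiplies the effective cap. A minimal sketch with hypothetical dollar amounts:

```python
# Hypothetical illustration of the per-person deficiency-payment limit.
PAYMENT_LIMIT = 50_000  # per "person" annually, under the 1985 act

def capped_payment(eligible_amount, limit=PAYMENT_LIMIT):
    """Deficiency payment after applying the per-person limit."""
    return min(eligible_amount, limit)

# One farm organized as a single person: $120,000 eligible, $50,000 paid.
single_person = capped_payment(120_000)

# The same operation reorganized into three qualifying "persons", each
# claiming a $40,000 share: the full $120,000 is paid despite the limit.
reorganized = sum(capped_payment(share) for share in (40_000, 40_000, 40_000))
```

Tying the limit to individuals actively engaged in farming, one option noted in this report, would remove the gain from such reorganizations.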
For the act’s purposes, a person is broadly defined as an individual, an entity (such as a corporation, limited partnership, association, trust, or estate), or a member of a joint operation (such as a general partnership or joint venture). Despite reforms made by the Congress in 1987, producers have avoided the payment limit by reorganizing their farming operations to include additional persons. According to OMB, deficiency payments amounted to $6.4 billion in fiscal year 1994. One option to further tighten payment limits as a means to reduce program costs would be to change eligibility rules to limit payments to $50,000 per individual and only provide benefits to individuals actively engaged in farming. In another example, narrowing eligibility rules for veterans disability compensation could generate savings without affecting veterans who suffered disabilities as a result of military service. In 1994, CBO reported that about 250,000 veterans were receiving about $1.5 billion annually in Department of Veterans Affairs (VA) compensation for diseases neither caused nor aggravated by military service. Our study of five other countries’ veterans programs shows that they do not compensate veterans under these circumstances. Dollar savings could be achieved by targeting disability benefits more narrowly, as is done by other countries. Adjusting fees and charges to the beneficiaries of some business-type federal programs and services offers a third targeting strategy to reduce the deficit. Fees exist for many services provided by the federal government, including customs and other inspections, use of recreation and other facilities, and mail delivery. However, in many cases, the direct beneficiaries of these kinds of governmental activities contribute little to support the program or administrative costs of the activity. 
As a result, the programs and services are often overused and/or under-provided, and money must be found elsewhere in the budget to make up the difference between administrative costs and beneficiary charges. For example, although many beneficiaries of the Child Support Enforcement Program have higher incomes than the population originally envisioned to be served by this program, they pay relatively little to support the program's administrative costs. The Congress created the Child Support Enforcement Program in 1975 to strengthen state and local efforts to obtain child support for both families eligible for Aid to Families with Dependent Children (AFDC) and non-AFDC families. Child support enforcement services were made available to non-AFDC individuals because it was believed that many families might not have to apply for welfare if they had adequate assistance in obtaining the support due from the noncustodial parent. In 1994, the program collected a reported $7.3 billion for 8.2 million non-AFDC clients. Bureau of the Census data for 1991 showed that about 65 percent of the individuals requesting non-AFDC child support enforcement services in that year had family incomes, excluding any child support received, exceeding 150 percent of the federal poverty level. Because states have exercised their discretion to charge only minimal application and service fees, they are doing little to recover the federal government's 66-percent share of program costs. In fiscal year 1994, state fee practices returned $33 million of the reported $1.1 billion spent to provide non-AFDC services. Rising non-AFDC caseloads and new program requirements could lead to administrative costs exceeding $1.6 billion by fiscal year 2000, with very little offset from those benefiting from the services. We have reported and testified on opportunities to defray some of the costs of child support programs. 
Based on this work, we believe that mandatory application fees should be dropped and that states should charge a minimum percentage service fee on successful collections for non-AFDC families. Under this proposal, non-AFDC beneficiaries would pay an increased share of the costs of administering this program. As a second example, veterans’ long-term care costs could be reduced and comparability among retirees increased if veterans’ copayments for these services were increased. All veterans with a medical need for nursing home care are eligible to receive such care in VA and community facilities to the extent that space and resources are available. VA is required to collect a fee, commonly known as a copayment, from certain veterans with nonservice-connected problems and incomes above a designated level. Nursing home care is free for other veterans who receive care in VA or contract community nursing homes. By contrast, we found that state veterans’ homes recovered as much as 50 percent of the costs of operating their facilities through charges to veterans receiving services. Similarly, through estate recoveries during the 12 months ending June 30, 1992, Oregon recovered about 13 percent of the costs of nursing home care provided under its Medicaid program. However, in fiscal year 1990, the VA offset less than one-tenth of 1 percent of its costs through beneficiary copayments. OMB reported that in fiscal year 1994, VA’s operating expenses were about $1.7 billion to provide nursing home and domiciliary care to veterans. The Congress may wish to consider increasing cost sharing for VA nursing home care by adopting cost-sharing requirements similar to those imposed by most state veterans’ homes and by implementing an estate recovery program similar to those operated by many states under their Medicaid programs. The potential for recoveries appears to be greater within the VA system than under Medicaid. 
Home ownership is significantly higher among VA hospital users than among Medicaid nursing home recipients, and veterans living in VA nursing homes generally contribute less toward the cost of their care than do Medicaid recipients, allowing veterans to build larger estates. In another example, we found that the current ski fee system does not ensure that the Forest Service receives fair market value for the use of its land. In 1991, privately owned ski areas operating on Forest Service land—such as those in Vail, Colorado; Jackson Hole, Wyoming; and Taos, New Mexico—generated $737 million in gross sales. After making adjustments reflecting the revenues generated from federal land, these areas paid about $13.5 million, or about 2.2 percent of the total revenues generated, in fees to the government. When the Forest Service ski fee system was developed in 1965, the rates were to be adjusted periodically to reflect changes in economic conditions for these business-type operations. However, the rates by which fees are calculated have not been updated since the fee system was developed. Changing eligibility rules for tax preferences offers a fourth targeting strategy to reduce the federal budget deficit. While tax expenditures can be a valid means for achieving certain federal objectives, studies by GAO and others have raised concerns about the effectiveness, efficiency, and equity of some tax expenditures. As with poorly targeted fees, poorly targeted tax preferences often lead to overutilization by beneficiaries and reduced revenues that either add to the deficit or must be made up elsewhere in the budget. For example, tax-exempt industrial development bonds (IDBs) are poorly targeted. IDBs are issued by state and local governments to finance the creation or expansion of manufacturing facilities to create new jobs or to promote start-up companies or companies in economically distressed areas. 
However, in a review of IDB-funded projects, we found that only about one-fourth of the projects were located in economically distressed areas. We also found that the job creation benefits attributed to IDBs would likely have occurred anyway. In addition, most developers contacted said that they would have proceeded with their projects without IDBs. Moreover, few companies obtaining tax-subsidized financing were start-up companies. OMB estimated that revenue loss due to the tax-exempt status of small-issue IDBs reached $690 million in fiscal year 1994. Similarly, we found that achievement of public benefits from qualified mortgage bonds (QMBs) is questionable. We found that QMBs did little to increase home ownership, were usually provided to home buyers who did not need them to obtain a conventional (unassisted) mortgage loan, and were not cost-effective. OMB estimated that revenue loss due to the tax-exempt status of QMBs amounted to $1.76 billion in fiscal year 1994. Both IDBs and QMBs could be better targeted. For example, IDBs could be focused on economically distressed areas or start-up companies, and QMBs could be directed toward home buyers who could not reasonably qualify for unassisted conventional loans. In another example, the current tax treatment of health insurance gives few incentives to workers to economize on purchasing health insurance. Some analysts believe that the tax-preferred status of these benefits has contributed to the overuse of health care services and large increases in our nation's health care costs. Improved targeting for this subsidy could play a role in reducing the associated revenue losses and improving the efficiency of the nation's health care system. Targeting is a viable approach because higher-income employees are more likely to have health care coverage and, because they pay higher marginal tax rates than low-income earners, the tax benefits from employer-provided health benefits are greater for high-wage earners. 
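The rate-dependence of this subsidy is easy to make concrete. The premium and tax rates in the sketch below are hypothetical; the flat-rate credit shown for comparison is one of the alternatives this report discusses.

```python
# Hypothetical illustration: tax savings from excluding employer-paid
# premiums rise with the worker's marginal rate, while a flat-rate credit
# gives every worker the same saving per premium dollar.

def saving_from_exclusion(premium, marginal_rate, cap=None):
    """Tax saved by excluding the premium (up to an optional cap) from income."""
    excluded = premium if cap is None else min(premium, cap)
    return excluded * marginal_rate

def saving_from_credit(premium, credit_rate):
    """Tax saved by a flat-rate credit on the premium."""
    return premium * credit_rate

premium = 6_000.0                                    # hypothetical premium
high_bracket = saving_from_exclusion(premium, 0.36)  # about $2,160 saved
low_bracket = saving_from_exclusion(premium, 0.15)   # about $900 saved

# A flat 20% credit yields the same saving regardless of bracket.
credit_either = saving_from_credit(premium, 0.20)
```

Capping the exclusion (the `cap` argument) trims the subsidy for high-premium plans without changing its rate-dependence; replacing the exclusion with a credit changes the rate-dependence itself.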
The Department of the Treasury estimated that revenue loss due to the tax-exempt status of employer-provided health insurance amounted to $33.5 billion in fiscal year 1992. An option to better target this tax preference would be to place a cap on the dollar amount of health insurance premiums that could be excluded from income. Including in a worker’s income the dollar amount over the cap could improve the efficiency of the health care system and, to a lesser extent, tax equity. Alternatively, including health insurance premiums in income but allowing a tax credit for some percentage of the premium would improve equity since tax savings per dollar of premium would be the same for all taxpayers, irrespective of the tax brackets. As the examples from our published work show, more effective targeting is one of several available approaches that can allow for reducing spending while improving federal programs and services. Programs and services, such as grants to states to provide health care for low- and moderate-income individuals or export promotion support for emerging firms, are created due to some perception of eligibility and/or need. In these instances, individuals, organizations, or jurisdictions outside the original targeted population—that is, populations with a greater capacity to provide the program or service from their own resources or having fewer needs—have received program funds, services, or tax subsidies. This poor targeting may have occurred because grant formulas or eligibility rules were constructed too broadly or fees did not fully reflect beneficiaries’ capacity to offset program costs. In other instances, the circumstances creating a need for the program or service may have changed. The end result of poor targeting is that the federal government spends more money than needed to reach the intended beneficiaries and achieve its program or service goals. 
Moreover, in a climate of continuing large budget deficits, the inefficiencies resulting from poorly targeted programs and services have sometimes called into question the legitimacy of continuing these activities or maintaining them at their current levels. In many instances, broad support remains for the objectives of poorly targeted programs and services. In these areas, better targeting can increase the efficiency and effectiveness of the program or service while allowing for program reductions. In other cases, poor targeting raises fundamental questions about the program's or service's merit and/or feasibility. In these circumstances, decisionmakers may want to consider whether the program or service should be eliminated. We are sending copies of this report to the Ranking Minority Member of the House Committee on Government Reform and Oversight. Copies will be available to others upon request. Major contributors to this report were Margaret T. Wrightson, Assistant Director, and Timothy L. Minelli, Senior Evaluator. Please contact me at (202) 512-9573 if you or your staff have any questions concerning the report. Paul L. Posner Director, Budget Issues 
FHWA is the DOT agency responsible for federal highway programs— including distributing billions of dollars in federal highway funds to the states—and developing federal policy regarding the nation’s highways. The agency provides technical assistance to improve the quality of the transportation network, conducts transportation research, and disseminates research results throughout the country. FHWA’s program offices conduct these activities through its Research and Technology Program, which includes “research” (conducting research activities), “development” (developing practical applications or prototypes of research findings), and “technology” (communicating research and development knowledge and products to users). FHWA maintains a highway research facility in McLean, Virginia. This facility, known as the Turner-Fairbank Highway Research Center, has over 24 indoor and outdoor laboratories and support facilities. Approximately 300 federal employees, on-site contract employees, and students are currently engaged in transportation research at the center. FHWA’s research and technology program is based on the research and technology needs of each of its program offices such as the Offices of Infrastructure, Safety, or Policy. Each of the program offices is responsible for identifying research needs, formulating strategies to address transportation problems, and setting goals for research and technology activities that support the agency’s strategic goals. (See Appendix I for examples of research that these offices undertake.) One program office that is located at FHWA’s research facility provides support for administering the overall program and conducts some of the research. The agency’s leadership team, consisting of the associate administrators of the program offices and other FHWA offices, provides periodic oversight of the overall program. 
In 2002 FHWA appointed the Director of its Office of Research, Development, and Technology as the focal point for achieving the agency’s national performance objective of increasing the effectiveness of all FHWA program offices, as well as its partners and stakeholders, in determining research priorities and deploying technologies and innovation. In addition to the research activities within FHWA, the agency collaborates with other DOT agencies to conduct research and technology activities. For example, FHWA works with DOT’s Research and Special Programs Administration to coordinate efforts to support key research identified in the department’s strategic plan. Other nonfederal research and technology organizations also conduct research funded by FHWA related to highways and bridges. Among these are state research and technology programs that address technical questions associated with the planning, design, construction, rehabilitation, and maintenance of highways. In addition, the National Cooperative Highway Research Program conducts research on acute problems related to highway planning, design, construction, operation, and maintenance that are common to most states. Private organizations, including companies that design and construct highways and supply highway-related products, national associations of industry components, and engineering associations active in construction and highway transportation, also conduct or sponsor individual programs. Universities receive funding for research on surface transportation from FHWA, the states, and the private sector. Leading organizations that conduct scientific and engineering research, other federal agencies with research programs, and experts in research and technology have identified and use best practices for developing research agendas and evaluating research outcomes. 
Although the uncertain nature of research outcomes over time makes it difficult to set specific, measurable program goals and evaluate results, the best practices we identified are designed to ensure that the research objectives are related to the areas of greatest interest and concern to research users and that research is evaluated according to these objectives. These practices include (1) developing research agendas through the involvement of external stakeholders and (2) evaluating research using techniques such as expert review of the quality of research outcomes. External stakeholder involvement is particularly important for FHWA because its research is expected to improve the construction, safety, and operation of transportation systems that are primarily managed by others, such as state departments of transportation. According to the Transportation Research Board's Research and Technology Coordinating Committee, research has to be closely connected to its stakeholders to help ensure relevance and program support, and stakeholders are more likely to promote the use of research results if they are involved in the research process from the start. The committee also identified merit review of research proposals by independent technical experts based on technical criteria as being necessary to help ensure the most effective use of federal research funds. In 1999, we reported that other federal science agencies—such as the Environmental Protection Agency and the National Science Foundation—used such reviews to varying degrees to assess the merits of competitive and noncompetitive research proposals. In April 2002, the Office of Management and Budget issued investment criteria for federal research and technology program budgets that urge these agencies to put into place processes to assure the relevance, quality, and performance of their programs. 
For example, the guidance requires these programs to have agendas that are assessed prospectively and retrospectively through external review to ensure that funds are being expended on quality research efforts. The Committee on Science, Engineering, and Public Policy reported in 1999 that federal agencies that support research in science and engineering have been challenged to find the most useful and effective ways to evaluate the performance and results of the research programs they support. Nevertheless, the committee found that research programs, no matter what their character and goals, can be evaluated meaningfully on a regular basis and in accordance with the Government Performance and Results Act. Similarly, in April 2002 the Office of Management and Budget issued investment criteria for federal research and technology program budgets that require these programs to define appropriate outcome measures and milestones that can be used to track progress toward goals and assess whether funding should be enhanced or redirected. In addition, program quality should be assessed periodically in relation to these criteria through retrospective expert review. The Committee on Science, Engineering, and Public Policy also emphasized that the evaluation methods must match the type of research and its objectives, and it concluded that expert or peer review is a particularly effective means to evaluate federally funded research. Peer review is a process that includes an independent assessment of the technical and scientific merit or quality of research by peers with essential subject area expertise and perspective equal to that of the researchers. Peer review does not require that the final impact of the research be known. 
In 1999, we reported that federal agencies, such as the Department of Agriculture, the National Institutes of Health, and the Department of Energy, use peer review to help them (1) determine whether to continue or renew research projects, (2) evaluate the results of research prior to publication of those results, and (3) evaluate the performance of programs and scientists. In its 1999 report, the Committee on Science, Engineering, and Public Policy also stated that expert review is widely used to evaluate: (1) the quality of current research as compared with other work being conducted in the field, (2) the relevance of research to the agency’s goals and mission, and (3) whether the research is at the “cutting edge.” Although FHWA engages external stakeholders in elements of its research and technology program, the agency currently does not follow the best practice of engaging external stakeholders on a consistent and transparent basis in setting its research agendas. The agency expects each program office to determine how or whether to involve external stakeholders in the agenda setting process. As we reported in May 2002, FHWA acknowledges that its approach to preparing research agendas is inconsistent and that the associate administrators of FHWA’s program offices primarily use input from the agency’s program offices, resource centers, and division offices. Although agency officials told us that resource center and division office staff provide the associate administrators with input based on their interactions with external stakeholders, to the extent that external stakeholder input into developing research agendas occurs, it is usually ad hoc and provided through technical committees and professional societies. 
For example, the agency’s agenda for environmental research was developed with input from both internal sources (including DOT’s and FHWA’s strategic plans and staff) and external sources (including the Transportation Research Board’s reports on environmental research needs and clean air, environmental justice leaders, planners, civil rights advocates, and legal experts). In our May 2002 report we recommended that FHWA develop a systematic approach for obtaining input from external stakeholders in determining its research and technology program’s agendas. FHWA concurred with our recommendation and has taken steps to develop such an approach. FHWA formed a planning group consisting of internal stakeholders as well as representatives from the Research and Special Programs Administration and the Pennsylvania Department of Transportation to determine how to implement our recommendation. This planning group prepared a report analyzing the approaches that four other federal agencies are taking to involve external stakeholders in setting their research and technology program agendas. Using the lessons learned from reviewing these other agencies’ activities, FHWA has drafted a Corporate Master Plan for Research and Deployment of Technology & Innovation. Under the draft plan, the agency would be required to establish specific steps for including external stakeholders in the agenda setting process for all areas of research throughout the agency’s research and technology program by fiscal year 2004. In drafting this plan, FHWA officials obtained input from internal stakeholders as well as external stakeholders, including state departments of transportation, academia, consultants, and members of the Transportation Research Board. It appears that FHWA has committed to taking the necessary steps to adopt the best practice of developing a systematic process for involving external stakeholders in the agenda setting process. 
The draft plan invites external stakeholders to assist FHWA with such activities as providing focus and direction to the research and technology program and setting the program's agendas and priorities. However, because FHWA's plan has not been finalized, we cannot comment on its potential effectiveness in involving external stakeholders. As we reported last year, FHWA does not have an agencywide systematic process, incorporating techniques such as peer review, to evaluate whether its research projects are achieving intended results. Although the agency's program offices may use methods such as obtaining feedback from customers and evaluating outputs or outcomes versus milestones, they all use success stories as the primary method to evaluate and communicate research outcomes. According to agency officials, success stories are examples of research results adopted or implemented by such stakeholders as state departments of transportation. These officials told us that success stories can document the financial returns on investment and nonmonetary benefits of research and technology efforts. However, we raised concerns that success stories are selective and do not cover the breadth of FHWA's research and technology program. In 2001, the Transportation Research Board's Research and Technology Coordinating Committee concluded that peer or expert review is an appropriate way to evaluate FHWA's surface transportation research and technology program. Therefore, the committee recommended a variety of actions, including a systematic evaluation of outcomes by panels of external stakeholders and technical experts to help ensure the maximum return on investment in research. Agency officials told us that increased stakeholder involvement and peer review will require significant additional expenditures for the program. 
However, a Transportation Research Board official told us that the cost of obtaining expert assistance could be relatively low because the time needed to provide input would be minimal and could be provided by such inexpensive methods as electronic mail. In our May 2002 report, we recommended that FHWA develop a systematic process for evaluating significant ongoing and completed research that incorporates peer review or other best practices in use at federal agencies that conduct research. While FHWA has concurred that the agency must measure the performance of its research and technology program, it has not developed, defined, or adopted a framework for measuring performance. FHWA’s report on efforts of other federal agencies that conduct research, discussed above, analyzed the approaches that four other federal agencies are taking to evaluate their research and technology programs using these best practices. According to FHWA’s assistant director for Research, Technology, and Innovation Deployment, the agency is using the results of this report to develop its own systematic approach for evaluating its research and technology program. However, this official noted that FHWA has found it challenging to identify the most useful and effective ways to evaluate the performance and results of the agency’s research and technology program. According to FHWA’s draft Corporate Master Plan for Research and Deployment of Technology & Innovation, FHWA is committed to developing a systematic method of evaluating its research and technology program that includes the use of a merit review panel. This panel would conduct evaluations and reviews in collaboration with representatives from FHWA staff, technical experts, peers, special interest groups, senior management, and contracting officers. 
According to the draft plan, these merit reviews would be conducted on a periodic basis for program-level and agency-level evaluations, while merit reviews at the project level would depend on the project’s size and complexity. FHWA is still in the process of developing, defining, and adopting a framework for measuring performance. Therefore, we cannot yet comment on how well FHWA’s efforts to evaluate research outcomes will follow established best practices. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or Members of the Committee may have. For further information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Deena Richart made key contributions to this testimony. FHWA’s research and technology program is based on the research and technology needs of each of its program offices, such as the Offices of Infrastructure, Safety, and Policy. Each of the program offices is responsible for identifying research needs, formulating strategies to address transportation problems, and setting goals for research and technology activities that support the agency’s strategic goals. (See table 1.)

Improvement and innovation based on highway research have long been important to the highway system. The Federal Highway Administration (FHWA) is the primary federal agency involved in highway research. Throughout the past decade, FHWA received hundreds of millions of dollars for its surface transportation research program, including nearly half of the Department of Transportation's approximate $1 billion budget for research in fiscal year 2002. Given the expectations of highway research and the level of resources dedicated to it, it is important to know that FHWA is conducting high-quality research that is relevant and useful. 
In May 2002, GAO issued a report on these issues and made recommendations to FHWA, with which the agency agreed, aimed at improving its processes for setting research agendas and evaluating its research efforts. GAO was asked to testify on (1) best practices for developing research agendas and evaluating research outcomes for federal research programs; (2) how FHWA's processes for developing research agendas align with these best practices; and (3) how FHWA's processes for evaluating research outcomes align with these best practices. Leading organizations, federal agencies, and experts that conduct scientific and engineering research use best practices designed to ensure that research objectives are related to the areas of greatest interest to research users and that research is evaluated according to these objectives. Of the specific best practices recommended by experts--such as the Committee on Science, Engineering, and Public Policy and the National Science Foundation--GAO identified the following practices as particularly relevant for FHWA: (1) developing research agendas in consultation with external stakeholders to identify high-value research and (2) using a systematic approach to evaluate research through such techniques as peer review. FHWA's processes for developing its research agendas do not consistently include stakeholder involvement. External stakeholder involvement is important for FHWA because its research is to be used by others that manage and construct transportation systems. FHWA acknowledges that its approach for developing research agendas lacks a systematic process to ensure that external stakeholders are involved. In response to GAO's recommendation, FHWA has drafted plans that take the necessary steps toward developing a systematic process for involving external stakeholders. While the plans appear responsive to GAO's recommendation, GAO cannot evaluate their effectiveness until they are implemented. 
FHWA does not have a systematic process that incorporates techniques such as peer review for evaluating research outcomes. Instead, the agency primarily uses a "success story" approach to communicate about those research projects that have positive impacts. As a result, the extent to which all research projects have achieved their objectives is unclear. FHWA acknowledges that it must do more to measure the performance of its research program; however, it is still in the process of developing a framework for this purpose. While FHWA's initial plans appear responsive to GAO's recommendation, GAO cannot evaluate their effectiveness until they are implemented.
The Delaware River deepening project calls for dredging the river’s main navigation ship channel to 45 feet, from a depth of 40 feet, beginning at the mouth of the Delaware Bay through Philadelphia Harbor, and to the Beckett Street Terminal in Camden, New Jersey—a distance of 102.5 miles. The Corps plans to use nine existing federal disposal sites in Delaware (one), New Jersey (seven), and Pennsylvania (one) to dispose of the material dredged from the bottom of the river. The new dredged material is to be layered on top of the material already deposited at these sites during annual maintenance dredging in the channel to maintain its 40-foot depth. Additionally, a portion of the material to be dredged is sand from Delaware Bay, which would be used by the Corps to restore wetlands at Kelly Island, Delaware, and the shoreline at Broadkill Beach, Delaware. According to the Corps, dredged material has been used in a variety of beneficial projects over the years, including environmental restoration, landscaping, and airport runway fill material. Often, the material must be drained and dried for several months before it can be used in these ways. Figure 1 shows the area to be dredged, the nine federal disposal sites, the two Delaware restoration locations, and other features discussed in this report. In 1992, the year Congress authorized the deepening project, the Corps completed a Final Interim Feasibility Study and Environmental Impact Statement (EIS) for the project. This document was used to inform decision makers and the public of the Corps’ recommended plan for the project, potential alternatives to it, its benefits and costs, and the likely environmental effects. The Corps then prepared a design memorandum in 1996, which provided details on the final design and engineering plans for the project, and published a Supplemental Environmental Impact Statement (SEIS) in 1997. In its 1998 LRR, the Corps updated its economic analysis of the project’s benefits and costs. 
In our June 2002 report, we found that the Corps’ 1998 analysis was based on miscalculations, invalid assumptions, and outdated information, and did not consider a number of uncertainties that could affect the project’s benefits and costs. Consequently, we concluded that the Corps’ analysis did not provide a reliable basis for determining whether the project was economically justified and recommended that the Corps (1) prepare a comprehensive, new economic analysis of the project; (2) obtain the information necessary to address uncertainties that could affect benefits and costs; (3) engage an external independent party to review the new analysis; and (4) submit the new analysis to Congress. In response to our 2002 report, the Corps reanalyzed the economic benefits and costs of the deepening project and issued a Comprehensive Economic Reanalysis Report in 2002, followed by a Supplement to Comprehensive Economic Reanalysis Report in 2004. (In this report we use the term “reanalysis” to refer collectively to both the Corps’ 2002 report and 2004 supplement.) The Corps’ reanalysis concluded that the project would yield average annual benefits of $24.2 million, about $16 million less than the Corps’ 1998 annual benefit estimate of $40.1 million. According to the Corps’ reanalysis, annual benefits would result largely from transportation cost savings associated with the importation of specific commodities—crude oil; containerized cargo, such as refrigerated meat and produce; and dry bulk commodities, such as steel slabs and blast furnace slag (an additive used in the production of cement). Crude oil savings would account for about half of these benefits, with cost savings related to containerized cargo accounting for another quarter of them. See table 1 for details on the benefit estimates and share of total benefits for each benefit category in the reanalysis. 
The benefit estimates in the Corps’ reanalysis depend on a number of factors, including (1) the extent to which future growth expands the total volume transported for each of the benefiting commodities; (2) the savings associated with using less of certain economic resources, such as the Delaware River lightering fleet; and (3) the economy’s prevailing price level and discount rate. For the reanalysis, the Philadelphia district contracted with a private consulting firm to analyze project benefits. According to the Corps’ reanalysis, the Delaware River deepening project would generate benefits relating to commodities imported by the following entities: Five crude oil refining facilities with six deep-draft terminals now owned by Sunoco (four), Valero (one), and ConocoPhillips (one), with four terminals located in Pennsylvania and two in New Jersey. Other commodity terminals, including those at Beckett Street Terminal in Camden, New Jersey; Packer Avenue Marine Terminal in Philadelphia, Pennsylvania; and Delaware Terminal at the port of Wilmington, Delaware. (The nine commodity terminals appear in the figure 1 map.) With regard to project costs, the Corps’ reanalysis estimated average annual project costs of $21.0 million, almost $8 million less than the Corps’ 1998 annual cost estimate of $28.8 million. This revised cost estimate includes channel dredging, disposal site construction, and any related land costs, such as land for new disposal sites and rights of way. It also includes associated costs, which are those needed, in addition to project costs, to achieve the benefits claimed during the period of the Corps’ analysis. These costs include, for example, berth deepening and dock modifications to accommodate deeper ships at refinery facilities and container terminals. Although associated costs are the responsibility of the potentially benefiting facilities, the Corps includes these costs in its total cost estimate, in accordance with its guidance. 
See appendix II for more information about the project’s associated costs. In addition to the 2002 and 2004 reanalysis documents, the Corps prepared the following documents that provide supplemental information on the benefits and costs of the Delaware River deepening project: an economic update to the project that reaffirmed the reanalysis’s benefit and cost estimates for budgeting purposes (April 2008), an environmental assessment that included a section summarizing the project’s potential economic benefits (April 2009), and an economic update to support the Corps’ fiscal year 2011 budget request (December 2009). See figure 2 for a summary timeline of key documents related to the deepening project. As of December 2009, the Corps estimated average annual benefits of $30.1 million and average annual costs of $22.3 million for the project, yielding annual net benefits of $7.8 million. Because estimated benefits exceeded estimated costs—resulting in positive net benefits and a benefit-cost ratio greater than one—the Corps determined that the project remained economically justified. See table 2 for a summary of the benefit and cost estimates and resulting benefit-cost ratios in the Corps’ reanalysis and in its most recent economic update. As noted in table 2, the benefit and cost estimates are based on different price levels and discount rates, which accounts for some of the changes observed in the estimates between the 2002-2004 reanalysis and the 2009 economic update. This means that the estimates and resulting net benefits and benefit-cost ratios are not directly comparable between the two analyses. With regard to assessing the project’s potential environmental impacts, the Corps is required to comply with the National Environmental Policy Act (NEPA). 
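The economic justification test applied in the December 2009 update is simple arithmetic; a minimal sketch, using the figures reported above (average annual benefits of $30.1 million against average annual costs of $22.3 million), might look like:

```python
# Economic justification test: a project passes when estimated benefits
# exceed estimated costs, i.e., net benefits are positive and the
# benefit-cost ratio exceeds 1.0. Figures are the December 2009
# average annual estimates, in millions of dollars.
benefits = 30.1
costs = 22.3

net_benefits = benefits - costs   # $7.8 million per year
bcr = benefits / costs            # roughly 1.35

justified = net_benefits > 0 and bcr > 1.0
```

The two conditions are equivalent whenever costs are positive; both are typically reported because net benefits convey the project's scale while the ratio allows comparison across projects.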
In addition to summarizing the project’s potential economic benefits, as noted earlier, the 2009 environmental assessment’s primary purpose was to evaluate the impacts of changes to the project and in the project area since the 1992 EIS and 1997 SEIS, as well as to present the results of post-SEIS environmental monitoring and data collection. Although the Corps has made efforts to conduct a reanalysis of the project and provide assessments of its potential environmental impacts, the project has remained controversial. For many years the project has been criticized by regional environmental groups, among others, who have raised concerns about the project’s impact on water quality and various fish and wildlife species, as well as the accuracy of the Corps’ estimates of the project’s benefits and costs. Notwithstanding these criticisms, because of the results of the reanalysis, congressional funding, and support for the project from its local sponsor and others, the Corps continued its efforts to begin construction. Specifically, in 2008 the Philadelphia Regional Port Authority (PRPA)—an independent agency of the state of Pennsylvania—replaced the Delaware River Port Authority as the project’s local sponsor. In that same year, PRPA and the Army signed a project partnership agreement for the construction of the deepening project. As the local sponsor, PRPA is to contribute 25 percent of the project’s total costs. The Corps’ reanalysis addressed many of the limitations that we had identified in 2002 in the project’s original economic analysis by using more recent information to correct invalid assumptions and outdated data, recalculating benefits and costs to correct miscalculations, and accounting for some of the economic uncertainty associated with the project. In addition, as we recommended, the Corps had independent experts review the reanalysis before submitting it to Congress. 
Although the Corps’ efforts were responsive overall to the recommendations we made in 2002, we found several additional limitations in the reanalysis. For example, in its analysis of the economic uncertainty associated with the project, the Corps considered the effects of negative-growth scenarios only for crude oil and refined petroleum but not for the remaining benefit categories. The Corps’ reanalysis was based in large part on the information that its contractor, David Miller & Associates (DMA), an economic consulting firm, developed between 2002 and 2004. Using the updated information that DMA developed, the Corps revised its list of potential benefit categories to exclude those that would no longer benefit from the project or those for which the agency had insufficient information to calculate benefits. For example, our 2002 report noted that the Corps had assumed benefits resulting from coal and iron ore imports, as well as scrap metal exports, even though trade in these commodities had greatly declined since the Corps had last studied them. In its reanalysis, the Corps dropped these commodities from its benefit calculations because of factors, such as reduced trade volumes, that indicated that benefits related to these commodities would not be realized. In addition to identifying outdated benefit categories, our 2002 report suggested that changing import patterns could present new commodities for the Corps’ consideration. The Corps’ reanalysis subsequently identified additional benefiting commodities that were not previously considered, such as refined petroleum, steel slabs, and blast furnace slag. The Corps’ reanalysis also prepared new forecasts of growth rates for each of the benefiting commodities to correct the past overstatement of key benefit categories, using information from government and private trade databases to re-evaluate import growth rates. 
For example, in 2002 we found that the Corps’ 1998 LRR had applied a 5.8 percent growth rate to oil imports from West Africa for 1992 through 2005, when that rate should have been applied only through 2000 and a lower rate—1.4 percent— applied for 2001 through 2005. This misapplication of growth rates was significant because crude oil benefits increase as import volume increases, generating savings from reduced transportation costs per barrel. For the reanalysis, the Corps assumed a lower annual growth rate of 0.2 percent by linking the forecast to the expected growth rate for the Delaware River refineries’ relatively fixed overall capacity, which was expected to grow by 10 percent—or 0.2 percent per year—over the 50-year life of the project. For other commodities, the Corps assumed that growth would be limited to the period leading up to the base year, which is the first year that the project’s full benefits can be realized. One commodity that the Corps limited in this way was containerized cargo, which, like crude oil, was assigned growth rates in the 1998 LRR that we found in 2002 to be overstated. In addition to constraining containerized cargo growth to the period leading up to the base year, the Corps’ reanalysis also assumed that project benefits for containerized cargo would be limited to two specific trade routes and the Corps forecasted growth for only one of these two routes. These routes included one extending from the East Coast of South America northbound to the U.S. East Coast and a second reaching from Australia and New Zealand eastbound through the Panama Canal and up the U.S. East Coast—both terminating at Philadelphia’s Packer Avenue Marine Terminal. In its reanalysis the Corps also corrected several additional invalid assumptions that we had identified in our prior report concerning the estimate of crude oil benefits. 
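The effect of the growth-rate misapplication in the 1998 LRR can be illustrated with a short calculation. The starting volume below is a hypothetical index, not a figure from the Corps' analysis; only the rates (5.8 percent and 1.4 percent) and the periods over which they apply come from the discussion above.

```python
def grow(volume, annual_rate, years):
    """Compound a volume forward at a constant annual growth rate."""
    return volume * (1 + annual_rate) ** years

base = 100.0  # hypothetical 1992 import-volume index

# 1998 LRR: 5.8 percent applied across the entire 1992-2005 period (13 years).
lrr_estimate = grow(base, 0.058, 13)

# Corrected: 5.8 percent through 2000 (8 years), then 1.4 percent
# for 2001 through 2005 (5 years).
corrected_estimate = grow(grow(base, 0.058, 8), 0.014, 5)

# The misapplied rate overstates the 2005 volume by roughly a quarter.
overstatement = lrr_estimate / corrected_estimate - 1
```

Because crude oil benefits scale with import volume, an overstated end-of-period volume flows directly into overstated transportation cost savings.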
Specifically, in 2002 we reported that the Corps’ 1998 LRR (1) assumed that many more crude oil ship type and trade route combinations would benefit from a deepened channel than could be supported by its analysis, (2) relied on outdated specifications for lightering vessels in calculating benefits, and (3) incorrectly assumed that lightering reduction benefits would be realized at ports of origin. In the reanalysis these issues were addressed as follows: First, according to the Corps’ original statistical model, in 23 percent of all possible cases, ships on specific crude oil trade routes would carry enough cargo to exceed 40 feet of draft if a 45-foot channel were available, leading to transportation cost savings for that cargo in a deeper channel. However, in its 1998 LRR the Corps applied these benefits for 100 percent of the possible ship type-trade route combinations, thereby overstating benefits. In the reanalysis, DMA replaced the Corps’ statistical model with new projections based on the characteristics of the ships that actually called on Delaware River refineries in 2000, including information on each ship’s origin and destination, operating cost, crude oil tonnage, actual draft, and maximum draft for which it was designed. DMA used this information, in conjunction with refinery interviews, to determine which ships would be likely to increase their tonnage—and thus their drafts—in a deepened channel, and what level of benefits would be associated with this change. DMA ultimately based its projections on 86 percent of the crude oil tonnage reported by the refineries for the year 2000 because the remaining data were incomplete or otherwise unsuitable for analysis. Second, for those crude oil tankers that would need to be lightered less in a deepened channel, the 1998 LRR relied on outdated specifications in assuming that tankers can discharge crude oil into refineries’ dockside storage tanks twice as fast as they can transfer the oil to lightering vessels. 
The Corps’ reanalysis revised this assumption to reflect that lightering rates exceed dockside discharge rates because of, for example, shorter pumping distances and the assistance of gravity when pumping from large tankers to smaller lightering vessels. As the Corps recognized in the reanalysis, some portion of the benefits of reduced lightering would be offset by the increased time and cost of discharging more cargo at refineries’ docks. Third, the Corps’ 1998 analysis assumed that cost savings from reduced lightering would be realized at both the port of origin and port of destination. In fact, these benefits would be realized only at the destination port because that is where lightering occurs. The reanalysis assigns these benefits only to destination ports. In the reanalysis, the Corps used a lightering model based on a full year’s worth of lightering operations data to help refine its estimate of crude oil benefits. DMA initially constructed this model using assumptions about the lightering firm’s practices that were based on its review of Maritime Exchange data on tanker movements and sailing drafts for the year 2000. Following publication of the model’s assumptions and results in the Corps’ 2002 Comprehensive Economic Reanalysis Report, the lightering firm disagreed with DMA’s methodology and claimed that DMA’s assumptions resulted in an overstatement of lightering costs, which in turn would overstate crude oil benefits derived from avoiding these costs. For example, the lightering firm noted that DMA did not include the minority of its lightering activity that occurs not in the Delaware River but in the ocean offshore of Delaware Bay. Ignoring this portion of the firm’s lightering overstates cost per barrel lightered by inaccurately dividing 100 percent of costs by less than 100 percent of barrels lightered. 
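The overstatement the lightering firm identified follows from a mismatched numerator and denominator. The dollar and barrel figures below are hypothetical, chosen only to make the arithmetic visible:

```python
# Hypothetical annual lightering figures (dollars and barrels).
total_cost = 50_000_000.0        # all lightering costs, river plus offshore
river_barrels = 90_000_000.0     # barrels lightered in the Delaware River
offshore_barrels = 10_000_000.0  # barrels lightered offshore of Delaware Bay

# Original approach: 100 percent of costs divided by river barrels only,
# overstating the unit cost of lightering.
overstated_unit_cost = total_cost / river_barrels

# Corrected approach: all costs divided by all barrels lightered.
corrected_unit_cost = total_cost / (river_barrels + offshore_barrels)
```

Because crude oil benefits are derived from avoiding lightering costs, an inflated cost per barrel inflates the estimated benefit of every barrel no longer lightered in a deepened channel.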
In response, DMA revised its lightering model in the 2004 supplement by collecting and combining actual lightering operations data for the year 2000 from the lightering firm, the Corps’ Waterborne Commerce Statistics Center, and three of the five principal refinery firms operating in the Delaware River at that time. According to the Corps, this refinement allowed DMA to account for nearly 99 percent of all crude oil barrels lightered in the Delaware River and offshore during 2000, providing a more accurate estimate of crude oil benefits in the 2004 supplement. The Corps further attempted to address the lightering firm’s comments about its 2002 report by developing a more sophisticated model of lightering activities in the event of a 45-foot channel. Specifically, in its initial model, DMA had determined that the likely reduction in lightering volume in a deeper channel would be roughly equivalent to the capacity of one of the three vessels in the lightering firm’s fleet. DMA estimated lightering reduction benefits by removing that vessel and its operating costs from the fleet, as it assumed the lightering firm would choose to do in the event of a deepened channel, then recalculating total lightering costs based on the remaining two vessels. In response to the lightering firm’s criticism of this approach as unrealistic, DMA revised its approach by using updated data on operations from 2000 to simulate tanker-by-tanker lightering operations through 2058. The simulation results were matched with estimated vessel operating costs and hourly fuel consumption costs developed by the Corps’ Institute for Water Resources specifically for each of the three vessels in the lightering firm’s fleet. According to the Corps, this approach allowed the agency to more directly calculate the reduction in total economic resources—such as those devoted to each ship’s crew, fuel, and maintenance—needed to provide lightering services as lightering volumes fall. 
The Corps assumed these freed resources would be put to productive use by the lightering firm elsewhere in the economy. The revised methodology in the 2004 supplement was associated with a roughly 20 percent drop in the Corps’ crude oil benefit estimate when compared to the 2002 report. Finally, the Corps corrected miscalculations and important omissions we identified in 2002 that affected the project’s benefit and cost estimates. For example: When we attempted to replicate the Corps’ results in 2002, we identified a $4.7 million gap between the Corps’ estimate of annual project benefits and the estimate that we developed. The Corps’ economist for the project told us in 2002 that the gap resulted from a computer error that could have occurred when files were transferred from one program to another; ultimately, the Corps acknowledged the error but was unable to definitively explain it. For the reanalysis, the Corps recalculated its total benefit estimate using DMA’s new analysis of each benefiting commodity. We reviewed this calculation and found no significant errors. The Corps’ 1998 LRR was marked by inconsistent discounting of project benefits and costs to determine their net present value. Moreover, the Corps presented benefit estimates at price levels for different years—for example, coal benefits were presented at 1991 price levels and containerized cargo benefits at 1995 price levels. Both of these practices made it difficult for decision makers to understand and compare the true benefits and costs of the project. In developing the reanalysis the Corps used DMA’s analysis, which standardized the price level and discounting adjustments for project benefit estimates by benefit category, presenting each at 2002 price levels and using the prevailing discount rate at the time the reanalysis was published (5.625 percent). The Corps adjusted the reanalysis’s cost estimates using the same approach. 
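Standardizing on one price level and one discount rate matters because present values are sensitive to both. A sketch of the discounting itself, using the reanalysis's 5.625 percent rate and its $24.2 million average annual benefit estimate over the 50-year project life:

```python
# Present value of a constant annual benefit stream, discounted at the
# rate prevailing when the reanalysis was published.
rate = 0.05625
years = 50
annual_benefit = 24.2  # millions of dollars, 2002 price level

present_value = sum(
    annual_benefit / (1 + rate) ** t for t in range(1, years + 1)
)

# Discounting shrinks the undiscounted 50-year total (50 x 24.2 = 1,210)
# to roughly a third of that value.
```

Comparing benefits and costs is only meaningful when both sides are discounted the same way and stated at the same price level, which is why the 1998 LRR's inconsistent practices obscured the project's true economics.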
In 2002 we found that the Corps omitted construction costs for federal disposal sites from its summary calculations in the 1998 LRR cost estimate. These construction costs would be incurred as the Corps expands the sites to accommodate additional dredged material resulting from annual maintenance of the 45-foot channel over its 50-year project life. In its reanalysis, the Corps’ estimate of total costs included costs for these sites. In 2002 we reported that the Corps’ 1998 LRR failed to update its estimates for associated costs, such as deepening the access channels that connect the main channel to benefiting facilities’ loading docks and increasing on-site storage capacity to handle larger deliveries. For the reanalysis, DMA hired a subcontractor to survey potentially benefiting firms and determine their likely associated costs, including berth deepening, dock modifications, and additional storage, and to estimate the cost of these modifications. This work was completed in 2002, and the Corps included the updated associated costs in the reanalysis’s total cost estimate. Our 2002 report noted that the Corps’ cost estimate in the 1998 LRR assumed that annual maintenance dredging for the 45-foot channel would begin after the last year of construction and continue for 50 years. However, maintenance dredging in completed segments of the channel could be required before the end of project construction—a consideration that was not accurately incorporated into the Corps’ previous maintenance cost estimate. The Corps’ reanalysis recognized that maintaining a 45-foot channel segment is more costly than maintaining a 40-foot segment, and incorporated this higher cost into its total cost estimate. In our 2002 report, we observed that some of the errors we identified illustrated the uncertainty inherent in forecasting information, such as commodity shipments, technological changes, and industry’s economic choices. 
We suggested that a reanalysis of the project consider a more careful treatment of the uncertainty associated with estimating benefits and costs, particularly since Corps guidance requires planners to identify areas of uncertainty in their analysis and to clearly describe them so that decision makers can understand the degree of reliability in a project’s benefit and cost estimates. One way to analyze the uncertainty associated with estimating benefits and costs is to include more information than simple point estimates, which can give the illusion of precision when a range of estimates may be more appropriate. Sensitivity analysis is one analytical tool for assessing the uncertainty associated with the estimates. In the context of benefit and cost estimation, sensitivity analysis can be used to assess the degree to which a benefit or cost estimate is affected by a change in a key assumption. For example, a sensitivity analysis for a labor-intensive construction project might examine the effect on overall project cost if the estimated hourly cost of labor were varied by plus or minus 10 percent. The 1998 LRR did not employ sensitivity analysis, but both of the reports that constitute the Corps’ reanalysis used this tool to analyze some of the uncertainties associated with the project’s benefit and cost estimates. Specifically, in the 2002 Comprehensive Economic Reanalysis Report, the Corps used sensitivity analysis to assess the extent to which the benefit and cost estimates, including the net benefit estimate, would change given alternative assumptions about factors such as commodity growth rates, lightering operation costs, and future ship sizes for slag and steel imports. For example, the Corps analyzed the effect on the net benefit estimate if future crude oil imports to Delaware River refineries grew by more, or less, than the assumed 0.2 percent per year. Scenarios included higher growth, lower growth, no growth, and negative growth. 
Under the latter scenario, the Corps estimated that crude oil benefits would be reduced by about 16 percent. The Corps’ rationale for the negative-growth scenario, in part, was the possibility that one or more of the refineries could go out of business. The Corps, however, stated that this was unlikely, citing the continued expansion of demand for products refined from crude oil and noting that its 0.2 percent growth rate was conservative relative to the Department of Energy’s projection of future U.S. crude oil imports through 2020, which ranged from 0.6 percent to 1.6 percent annually. Similarly, the Corps examined the potential effect on benefits of a negative-growth scenario for refined petroleum, as well as higher-growth, lower-growth, and no-growth scenarios for refined petroleum, blast furnace slag, containerized cargo, and steel slabs. To augment its sensitivity analysis, the Corps examined the vulnerability of various benefit categories to the actions of individual firms whose business decisions could affect the project. For example, the Corps’ estimate of blast furnace slag benefits was based on slag imports by a single cement firm. Benefits related to importing blast furnace slag could be lower or could disappear if this facility were to operate at a lower production capacity than the Corps assumed, or if it were shut down and not replaced by another firm. Crude oil, on the other hand, was imported by five firms at the time of the reanalysis’s 2002 report. Given the history of the continued operation of their respective refinery facilities in the recent past, including successful transfers of ownership to new firms, the Corps considered it unlikely that any refinery would be shut down for an extended period of time. However, the Corps did note that if one or more of the refineries went out of business, the benefits related to crude oil imports could drop significantly. 
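The mechanics of a one-variable sensitivity analysis of the kind described in these paragraphs can be sketched briefly. The figures below are hypothetical, following the labor-cost illustration used earlier in this report rather than actual project data:

```python
# Vary one assumption (hourly labor cost) by plus or minus 10 percent
# and observe the effect on the total cost estimate.
def project_cost(hourly_labor_cost, labor_hours, other_costs):
    return hourly_labor_cost * labor_hours + other_costs

base_rate = 40.0       # dollars per hour (hypothetical)
hours = 2_000_000      # labor hours (hypothetical)
other = 120_000_000.0  # non-labor costs (hypothetical)

baseline = project_cost(base_rate, hours, other)
low_case = project_cost(base_rate * 0.9, hours, other)
high_case = project_cost(base_rate * 1.1, hours, other)

# Here a 10 percent swing in the labor rate moves total cost by only
# 4 percent, because labor is 40 percent of the total. The same logic
# applies to varying commodity growth rates or dredging costs.
```

Presenting the low, baseline, and high cases side by side gives decision makers a range rather than a single point estimate that can convey false precision.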
In addition to analyzing the uncertainty associated with some of its benefit estimates, the Corps conducted a sensitivity analysis of some cost assumptions in the 2002 Comprehensive Economic Reanalysis Report. These sensitivity analyses tested different assumptions about key cost factors, such as dredging efficiency and the composition of dredged material. The latter could vary from mud and silt, which is relatively more expensive to dredge, to loose sand, which is relatively cheaper. The Corps also examined associated costs—specifically, whether individual firms were likely to make the necessary infrastructure investments to benefit from a deepened channel given their expected benefits. The Corps' analysis showed that facility benefits would likely exceed facility costs for each of the project beneficiaries. The 2004 Supplement to Comprehensive Economic Reanalysis Report also contained sensitivity analyses—four related to crude oil benefits and three related to containerized cargo benefits. The crude oil analyses examined the impact of altering certain assumptions about lightering operations. These assumptions, such as the vessel capacity assigned to each lightering trip, informed the Corps' lightering simulation model, and therefore any change in them could result in a significant change in the Corps' crude oil benefit estimate. The final three sensitivity analyses examined containerized cargo assumptions. For example, the Corps calculated the effects on project benefits of increasing or decreasing containerized imports by 20 percent for the two trade routes that the reanalysis identified as benefiting from a deeper channel. As we recommended in our 2002 report, the Corps submitted its reanalysis to independent reviewers before delivering it to Congress. This process included separate reviews of project benefits and costs.
Benefits were reviewed first by a university professor with expertise in transportation systems. In addition, at the request of Corps headquarters, the Corps’ Institute for Water Resources arranged to have an external independent panel review the project’s benefit analysis. The institute contracted with a private consulting firm to convene a panel of economics and navigation experts for this review, which consisted of an iterative process of issue resolution through panel comments and the Corps’ responses. Similarly, the Corps selected an engineering firm with expertise in dredging cost analysis to review the project’s costs, including those incurred in initial construction dredging, long-term maintenance dredging, and the construction of disposal sites for dredged material. After the independent reviewers issued their final reports, the Corps’ Director of Civil Works approved the reanalysis. In at least one instance, the Corps’ external independent reviews resulted in a substantial change to the project’s benefit estimate. Specifically, the benefits review panel disagreed with an aspect of the approach DMA used to calculate the cost of crude oil lightering operations. This calculation had a direct effect on project benefits because a significant portion of crude oil benefits are derived from avoiding the cost of some lightering due to a deepened river channel. DMA defended its methodology in a series of responses to review panel comments. However, the Corps ultimately accepted the review panel’s revision of DMA’s calculation and used the resulting lower benefit estimate in its 2004 supplement. This $2.8 million adjustment represented a 19 percent reduction in annual crude oil benefits and a 10 percent reduction in the project’s total benefit estimate. 
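As a back-of-envelope check, the reported percentages imply approximate magnitudes for the underlying estimates. This is our arithmetic on the stated figures, not numbers taken directly from the reanalysis:

```python
# A $2.8 million adjustment that equals 19 percent of annual crude oil
# benefits and 10 percent of total annual benefits implies these
# approximate underlying totals (derived, not reported, figures).
adjustment = 2.8e6  # dollars

implied_crude_benefits = adjustment / 0.19   # roughly $14.7 million
implied_total_benefits = adjustment / 0.10   # roughly $28 million

print(f"implied annual crude oil benefits: ${implied_crude_benefits/1e6:.1f} million")
print(f"implied total annual benefits:     ${implied_total_benefits/1e6:.1f} million")
```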
Overall, while the Corps’ efforts have been responsive to the recommendations we made in 2002, we identified several limitations in the economic reanalysis that introduce additional uncertainty into the project’s benefit estimates. First, the external independent panel convened to review the reanalysis’s benefit estimates raised concerns about the benefit analysis for containerized cargo that may not have been fully resolved. Specifically, in its January 2004 final report, the independent panel concluded that the Corps had not eliminated significant uncertainties associated with the estimation of containerized cargo benefits. The review panel had been concerned that the Corps based its benefit estimate on transportation cost savings that would accrue to the project through more direct delivery of goods to Philadelphia-area destinations on just the two trade routes in the Corps’ analysis—one originating from South America and the other from Australia/New Zealand—and a weekly shipping service operating on each. According to the Corps, savings would result because some containers on the South America route were being shipped to the deeper port of New York/New Jersey to bypass the 40-foot Delaware River channel, and then trucked south to Philadelphia-area destinations. With the deeper channel, the Corps projected that these containers—as well as others resulting from growth on the Australia/New Zealand route—would instead be shipped directly to the port of Philadelphia through the 45-foot channel, avoiding the costly trucking from New York/New Jersey to Philadelphia. The review panel noted that for one of the two trade routes—Australia/New Zealand to the U.S. East Coast, accounting for 85 percent of containerized cargo benefits—the Corps’ benefit estimate relies on trucking that (1) does not yet occur and (2) depends on future revisions to the existing shipping service prompted by growth. 
The panel also noted that the prospective benefits rely on the future business decisions of only a few shipping services. For these and other reasons, the review panel stated that significant uncertainties remained in the containerized cargo benefit estimate and that the estimation of benefits accruing to the Australia/New Zealand trade route was the greatest source of residual uncertainty for this benefit category. The Philadelphia district responded to the panel’s comments in a document defending its analysis and also revised its discussion of containerized cargo benefits in the final version of the February 2004 supplement. The district’s response was reviewed by Corps headquarters, which acknowledged that not all uncertainties had been resolved, but concluded that the findings as a whole were reasonable and defensible. However, the Corps did not provide the final version of the 2004 supplement to the external review panel for resolution as the contract for its services had expired. Second, as noted earlier, the Corps’ 2002 Comprehensive Economic Reanalysis Report employed sensitivity analysis to examine the effect of negative-growth scenarios on the annual benefit estimates for crude oil and refined petroleum. However, negative-growth scenarios were not considered for the remaining benefiting commodities, which were analyzed under only higher-growth, lower-growth, and no-growth scenarios. The possibility of a contraction in the market for blast furnace slag, containerized cargo, and steel slabs was not insignificant, given the relatively few importers for certain commodities and the sensitivity of these markets to changes in world economic conditions. Indeed, as noted earlier, estimated benefits for slag rely on the future business decisions of a single firm. 
Considering that even a no-growth scenario for each benefit category would collectively result in the project's total annual costs slightly exceeding its total annual benefits, as shown in the Corps' reanalysis, an analysis of the cumulative effect of negative growth for all commodities could have provided additional context to decision makers. In addition, the alternative-growth scenarios for crude oil from the 2002 report's sensitivity analysis were not reanalyzed in the 2004 supplement, even though the methodology used to develop the estimate of crude oil benefits changed substantively from the 2002 report and the estimate itself declined by about 20 percent. Finally, the lightering firm disagrees with the reanalysis's assumption that significant savings will result from the firm reducing its service levels proportionally in response to reduced demand for lightering in a deepened channel. To the extent that lightering service levels in a 45-foot channel are higher than the Corps assumes, project benefits could be reduced. In practice, the Corps' assumption would mean that the lightering firm's three vessels would spend less time in operation, or perhaps that two vessels would maintain similar service levels but the third vessel would be put to other uses. This reduction in service would save crew, fuel, and other resource costs that are the basis for the Corps' estimate of lightering cost savings in its crude oil benefit model. However, the lightering firm contends that tanker arrivals into Delaware Bay can be unpredictable, with multiple arrivals possible on short notice, which requires the firm to retain three vessels in order to maintain the flexibility needed to provide prompt service. For the importing refineries that pay for tankers to ferry crude oil across the ocean to their facilities, lightering delays in the bay are costly. Moreover, refinery facilities typically do not maintain much on-site storage and instead rely on timely deliveries to continue operating.
For these reasons, the lightering firm told us that the reanalysis’s assumption of service levels falling in proportion to reduced lightering demand is unrealistic. Instead, the lightering firm believes service levels would likely remain higher than the Corps’ modeling predicts because, for example, the firm would continue to provide service with three vessels instead of two. In fact, the lightering firm’s position on the feasibility of reduced service levels resembles an observation that the Corps made in discussing the undesirability of delivery delays for containerized cargo in the reanalysis’s 2002 report: “The issue is customer satisfaction and the potential loss of customers who are not receiving their desired service.” In interviews with us, Corps officials characterized their assumption of reduced lightering service levels as consistent with an economically rational firm’s most efficient allocation of its resources. In the 6 years that have elapsed since the Corps completed its reanalysis, current and anticipated future market and industry conditions have changed significantly. Several of the assumptions that underlie the Corps’ estimates of the project’s benefits are inconsistent with these changes. For example, the Department of Energy has lowered its long-term forecasts for growth in East Coast refinery capacity and U.S. imports of crude oil. These developments raise questions about the extent to which the reanalysis’s findings could be affected by these changed conditions. The Corps’ 2008 and 2009 economic updates did not analyze the potential effect of these changes on the project’s benefit estimates. Consequently, decision makers do not have the updated information necessary to indicate whether the market and industry changes that have occurred would affect the project’s net benefits. 
Benefits related to crude oil, containerized cargo, and steel slabs make up 89 percent of the project's total annual benefits, accounting for 49 percent, 25 percent, and 15 percent, respectively. Current market and industry conditions and the future outlook for these key benefit categories have changed since the reanalysis was completed in early 2004. These changes indicate that the assumptions underlying the Corps' benefit estimates may need to be revised, but their net effect is unclear without additional information and analysis. The following summarizes our findings related to these benefit categories, in descending order of importance to the project's overall benefit estimate. The reanalysis's crude oil benefit assumptions are not consistent with current market and industry conditions and future outlook, which raises questions about the reliability of the reanalysis's crude oil benefit estimate. Relevant changes that could affect crude oil benefits include a projected decline in refinery capacity, a current and projected decline in crude oil imports, and changes in the Delaware River crude oil refining and lightering industries.

Projected Decline in Refinery Capacity

In the reanalysis, the Corps chose a 0.2 percent annual growth rate as the basis for its long-term forecast for crude oil imports into Delaware River ports. The Corps based its growth rate on the expected growth in long-term capacity for refineries in the East Coast region. This forecast came from the Department of Energy's Energy Information Administration (EIA) as part of its Annual Energy Outlook. However, EIA's long-term outlook for East Coast refinery capacity has declined from 0.2 percent annual growth in its 2002 outlook to a 0.1 percent annual decline in its 2009 outlook, and the early-release version of EIA's 2010 outlook has predicted a steeper decline of 2.0 percent annually.
Current and Projected Decline in Imports

The Corps observed in its reanalysis that its 0.2 percent annual growth rate for crude oil imports was a conservative projection compared to a Department of Energy forecast of future U.S. crude oil imports through 2020, which ranged from 0.6 percent to 1.6 percent annual growth; in its 2002 annual energy outlook, EIA identified 1.1 percent annual growth in imports as the most likely rate for this period. By 2009, this outlook had changed considerably from the earlier part of the decade: instead of the 1.1 percent annual growth for crude oil imports forecasted in EIA's 2002 long-term outlook or the 0.2 percent annual growth assumed by the Corps, EIA's 2009 and 2010 long-term outlooks forecasted annual declines of 1.6 and 0.4 percent, respectively. Moreover, to date, available data indicate that even the Corps' marginal growth rate of 0.2 percent overstated crude oil imports through at least 2008. According to EIA data, the volume of crude oil imports into Delaware River ports declined from about 415 million barrels in 2000 to about 381 million barrels in 2008, for an annual decline of 1.1 percent and an overall decline of 8.1 percent since 2000. Imports were about 332 million barrels in 2009. We identified several reasons for the decline in crude oil imports into the Delaware River and changes to their long-term outlook. First, EIA officials pointed to several factors that have reduced the demand for crude oil in the United States overall and thus contributed to changes in the long-term forecast. These include the requirements of new regulations and legislation, such as the Energy Independence and Security Act of 2007—which includes mandates to increase domestic use of nonpetroleum liquid fuels such as ethanol and more stringent fuel efficiency standards—and competition from gasoline produced in Europe.
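The decline figures cited above can be reproduced from the approximate import volumes with a simple compound-rate calculation. Because the barrel figures in the text are rounded, the results differ slightly from the cited 8.1 percent overall decline:

```python
# Reproduce the cited decline rates from the approximate import volumes:
# about 415 million barrels in 2000 and about 381 million barrels in 2008.
imports_2000 = 415.0  # million barrels
imports_2008 = 381.0  # million barrels
years = 8

overall_decline = 1 - imports_2008 / imports_2000                  # total change, 2000-2008
annual_decline = 1 - (imports_2008 / imports_2000) ** (1 / years)  # compound annual rate

print(f"overall decline: {overall_decline:.1%}")  # close to the cited 8.1 percent
print(f"annual decline:  {annual_decline:.1%}")   # matches the cited 1.1 percent
```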
EIA officials explained that crude oil imports are sensitive to changes in the market for gasoline, a product that accounts for about half of the refined output from a typical barrel of crude oil. East Coast refineries are especially vulnerable to competition from refineries in Europe (as well as U.S. Gulf Coast states) because the East Coast refineries have relatively high production costs. On the other hand, the officials noted that by lowering lightering costs, a deeper channel could reduce the cost of production and could potentially improve the refineries' position in a highly competitive market. Second, EIA officials explained that the nation's current economic recession has been associated with declines in demand for products refined from crude oil and shrinking profit margins for Delaware River refineries. These conditions are reflected in relatively low utilizations—that is, how much of a refinery's total productive capacity is being used—which EIA officials said had fallen below 80 percent by late 2009. The officials noted that the following two Delaware River refinery firms have recently reduced their respective refinery capacities by halting production at major facilities:

In October 2009 Sunoco announced that it was indefinitely idling its Eagle Point refinery facility in Westville, New Jersey. Subsequently, the firm announced in February 2010 that the closure was permanent.

In November 2009 Valero announced that it would permanently shut down one of its two Delaware River refinery facilities—the former Motiva facility in Delaware City, Delaware.

According to EIA officials, the remaining Delaware River facilities are likely to continue to operate because most of the excess refinery capacity has already been squeezed out of the Delaware River region.
Looking ahead, EIA officials said that according to many observers, demand is not expected to return to its former levels even after the economy recovers because of the policy and structural changes noted earlier, resulting in less need for gasoline from crude oil. However, they said that the Northeast will remain a major consumer of home heating oil, which is made from crude oil, and that demand will likely grow for diesel fuel, a crude oil product that is used heavily in the trucking industry—especially in the Northeast. Third, according to an independent economic expert with experience analyzing the Delaware River crude oil market, demand for crude oil imports has declined in the Northeast because of high oil prices, changing consumer preferences, and gasoline imports from Europe. He predicted that, in general, U.S. energy demand will rely less heavily on crude oil in the future. In his assessment, the Corps' crude oil forecasts are therefore likely outdated, and while the Corps' assumptions about projected crude oil growth may have been reasonable in the early 2000s, they do not reflect current and expected future conditions.

Changes in Delaware River Refining Industry

Changes in the Delaware River crude oil refining industry affect the reanalysis's crude oil benefit assumptions in ways that raise questions about the Corps' crude oil benefit estimate. For example, the reanalysis's lightering simulation model predicted that in the first year of a 45-foot channel the recently closed Eagle Point facility's lightering requirement would be reduced by 41 percent. This amount represented 22 percent of the total expected decline in the need for Delaware River lightering in the model's initial year. This reduction in lightering represents resource cost savings that are a key part of the Corps' crude oil benefit estimate.
If the facility is not reopened, it is unclear to what extent its share of crude oil benefits would instead be realized by Sunoco's remaining Delaware River facilities. In comparison with Sunoco's Eagle Point closure, Valero's closure of its Delaware City facility would likely affect the crude oil benefit estimate less because this facility was not considered a potential beneficiary in the Corps' reanalysis. However, the Corps' lightering simulation model assumed that the facility would account for nearly a quarter of the lightering firm's volume in the first year of a 45-foot channel. While this consideration does not affect the crude oil benefit estimate directly—because the volume of lightering for this facility was expected to remain the same in a 45-foot channel, thus precluding lightering reduction benefits—it does alter the assumptions about the lightering firm's day-to-day operations that the Corps used to build the model, which could affect the benefit estimates for other facilities. In addition, because the Corps' crude oil benefit estimate includes time savings from fewer tidal delays as tankers proceed upriver, a reduction in future oil imports could decrease these savings in a 45-foot channel. Overall, the net effect of these and other industry changes on the Corps' crude oil benefit estimate is unclear.

Changes in Delaware River Lightering Operations

Changes experienced by the lightering firm whose operations were modeled in the Corps' reanalysis also could influence the Corps' estimate of crude oil benefits. According to Overseas Shipholding Group (OSG), the firm lightered about 98 million barrels in 2000, the year that the Corps used to build the reanalysis's crude oil benefit model and that served as the basis for its crude oil projections.
As recently as 2007, OSG officials told us, the firm lightered about 95 million barrels; however, OSG lightered only about 88 million barrels in 2008 and 77 million barrels in 2009, down almost 22 percent from the 2000 total. Despite the drop in lightering demand, OSG officials said they have maintained three ships in their lightering fleet to keep service levels consistent for their customers. As discussed earlier, the Corps’ Institute for Water Resources estimated the vessel operating costs, including factors such as hourly fuel consumption costs, for each of the three vessels in the lightering fleet at the time of the reanalysis—avoiding these costs through reduced lightering provided the basis for lightering resource cost savings in a deepened channel. However, according to OSG officials, two of the three ships in the firm’s current Delaware River lightering fleet are different from those the Corps modeled in its reanalysis, which suggests that fleet operating costs and other characteristics, such as pumping efficiency, may now be different. In 2010 the composition of OSG’s lightering fleet is expected to change even more from the composition of the fleet used in the Corps’ model, which could further influence the Corps’ estimate of crude oil benefits. Fleet composition would change because of a 10-year contract with Sunoco—OSG’s largest Delaware River customer—that led OSG to order two new tug-barges slated for delivery in 2010. An OSG official explained that these vessels were specially designed to take into account customer requirements, desired cargo volumes, increased operational efficiencies, and anticipated future environmental requirements. By adding these vessels to its fleet, OSG expects that greater lightering volumes will be realized. OSG officials said that by fall 2010, they expect to have the two new tug-barges operating as part of the firm’s fleet, along with a third vessel that was not modeled in the Corps’ reanalysis. 
OSG officials expect the new lightering fleet to have lower operating costs than the fleet that was modeled by the Corps, primarily because the new vessels will burn a less expensive fuel, coupled with increased operational efficiency. This would tend to reduce lightering resource costs and thus reduce the Corps' estimated crude oil benefits, all else the same. Finally, the delivery of the first new tug-barge would activate the 10-year contract with Sunoco, which OSG officials said includes guaranteed minimum lightering volumes. If this contract causes lightering volumes to be higher than the Corps' model predicts for whatever portion of the 10 years overlaps with the deepened channel's 50-year operation period, then lightering reduction benefits could be lower as a result. It is possible that increased lightering under the contract, if any, for Sunoco's remaining facilities could mitigate the drop in potential lightering cost savings resulting from the closure of Sunoco's Eagle Point facility. Still, without an updated analysis of these changes, their net effect on the Corps' estimate of crude oil benefits remains unclear.

Potential Effect of Crude Oil Changes

The Corps has acknowledged that changes since the reanalysis could affect its crude oil benefit model but has not analyzed this potential effect. In the reanalysis's 2002 sensitivity analysis, the Corps showed that benefits related to crude oil could drop significantly in a negative-growth scenario where, for example, refineries go out of business (though, as we mentioned earlier, this analysis was not revised in the 2004 supplement despite substantive changes in the crude oil analysis). Further, according to the Corps, future import growth is responsible for about 9 percent of annual crude oil benefits.
The Corps’ primary economic consultant for the reanalysis agreed that a decline in crude oil imports into the Delaware River would reduce crude oil benefits, although he noted that the percentage decline for benefits would be less than the decline for imports—that is, it would not be a one-for-one decline. The consultant also said that changes to vessel operating costs in the lightering firm’s fleet could have a significant effect on the crude oil benefit model. The reanalysis’s containerized cargo benefit assumptions may not fully reflect current conditions and cannot be adequately assessed without additional information. In the reanalysis’s 2004 supplement, the Corps revised its containerized cargo analysis to focus on specific growth assumptions for the two trade routes in its analysis—one from the East Coast of South America and a second from Australia/New Zealand passing through the Panama Canal. At the time of the Corps’ reanalysis, the two routes were served by a primary shipping firm and several partners operating one weekly service on each route that called at Philadelphia. The reanalysis’s containerized cargo benefits depended entirely on changes in shipping practices prompted by a 45-foot ship channel. Specifically, the reanalysis derived transportation cost savings from avoiding inefficient and costly trucking from the port of New York/New Jersey to Philadelphia—whether already occurring (on the South America service) or assumed to begin at some future time (on the Australia/New Zealand service). This trucking was an adaptation resulting from constraints on cargo capacity because of the need to maintain ship drafts that did not exceed the Delaware River’s 40-foot depth, which meant that some ships and cargo destined for Philadelphia would offload first at the relatively deeper port of New York/New Jersey. We were unable to verify the Corps’ key assumptions underlying the reanalysis’s expected containerized cargo benefits. 
Specifically, we could not confirm whether trucking is occurring at all, is occurring at a stable rate, or is growing on the South America service, and whether trucking has begun as a result of growth on the Australia/New Zealand service. According to the logistics provider for the firm that operates the Packer Avenue Marine Terminal, the South America weekly service still exists, is still operated by the same primary shipping firm, and still includes time-sensitive refrigerated cargo that could be trucked from New York/New Jersey to hasten its arrival in Philadelphia, thus preserving its retail value. The logistics provider's weekly delivery data from January through November 2009 indicate overall growth on this service. However, we cannot fully assess the reanalysis's benefit assumptions for this trade route without information about the number of containers still being offloaded in the port of New York/New Jersey and trucked to Philadelphia, which is the basis for containerized cargo benefits. We also asked the logistics provider for information about the weekly Australia/New Zealand service, which represents 85 percent of containerized cargo benefits in the Corps' reanalysis. The provider said the weekly shipping service on that trade route is now handled in part by a firm that acquired the former primary shipper. In addition, a competing biweekly service that carries refrigerated cargo from the same countries began in early 2006. The logistics provider's weekly delivery data from January through November 2009 indicate that the reanalysis may have understated the number of containers that could be shipped directly into Philadelphia on the weekly service without being rerouted to New York/New Jersey with subsequent trucking back to Philadelphia.
It is also possible that additional imports that otherwise would have arrived on the weekly service are instead being accommodated at current channel depth, without trucking, by the competing biweekly service that did not exist at the time of the reanalysis. The assumption that trucking could be avoided only through a deeper channel was the basis for containerized cargo benefits in the reanalysis and was a key source of uncertainty identified by the reanalysis's independent review panel. Ultimately, as in the case of the South America trade route, we cannot fully assess the reanalysis's benefit assumptions for this trade route without additional information about the extent to which trucking is occurring on the weekly service, if at all. The reanalysis's steel slabs benefit assumptions are not consistent with current market conditions. The Corps assumed that (1) transportation cost savings would be realized by a shift toward deeper-drafted vessels that can load more fully in a deepened channel and (2) these savings would grow as steel import volumes increased. From a 2001 base, the reanalysis forecasted a 1.1 percent annual growth rate for steel slab imports into the Packer Avenue Marine Terminal over the life of the project, which the Corps estimated would result in approximately 1 million tons imported in 2009—the reanalysis's project base year—and 1.6 million tons imported in 2059. According to the Packer Avenue logistics provider, 1 million tons was exceeded in 2002 (1.1 million tons) and again in 2006 (1.2 million tons). However, worsening economic conditions affecting construction and other steel-intensive industries were reflected in import volumes for steel products in 2008 (261,000 tons) and 2009 (63,000 tons). In the reanalysis's 2004 supplement, the Corps notes that the domestic market for steel is cyclical and exhibits a certain level of expected volatility.
Still, import volumes would need to recover to at least 1 million tons by 2015—the revised project base year—before steel slab benefits could reach the Corps' forecasted levels. For commodities such as steel slabs, the downturn in imports may be directly related to the recession, and imports may recover as the economy recovers. It is possible that, over the length of the project, the growth rate for this benefit category may reach or exceed the Corps' expected growth rate. For example, the current construction schedule means that benefits would not begin to be realized until at least 2015. Certain market and industry trends that have the potential to reduce project benefits—especially those tied to current economic conditions—could change over the next 5 years and have little or no negative effect on the benefit estimates or could even increase them. On the other hand, trends that result in part from policy and structural changes in the economy, such as legislation requiring increased fuel efficiency and the adoption of alternative fuels, are more likely to persist. Despite policy changes, competition from other sources, the recent downturn in the crude oil market, and other changes in the industry, officials from Delaware River crude oil refineries continue to be strong supporters of the deepening project. They agree that as long as they are importing crude oil, they would have an incentive to maximize efficiency on large vessels with drafts that exceed 40 and often 45 feet. For example, according to an official from a refinery facility that receives crude oil from Canada, being able to more fully load its supply tankers would save one out of every seven tanker deliveries to the facility. The Corps' benefit model correctly presumes that transportation cost savings could be generated from these efficiencies, but given the market and industry changes since the modeling was performed, the benefit estimates may not be reliable.
In addition, the Corps, the Philadelphia Regional Port Authority (PRPA), and others contend that the project has additional benefits that are not included in the Corps' reanalysis. In its reanalysis, the Corps based its benefit estimate for the project on existing ships, commodities, and trade routes, with no commodity growth or new routes occurring as a direct result of the deepening. However, others have suggested that a 45-foot channel would actually increase the amount of trade in the Delaware River by making its ports more marketable globally. Moreover, a Corps Institute for Water Resources study expects the expansion and deepening of the Panama Canal, which would accommodate 50-foot ship drafts by 2014, to significantly affect shipping routes, port development, and cargo distribution among ports. According to the study, one of the expansion's greatest impacts will be seen in the containerized cargo trade. We heard from industry representatives that this trade is moving toward ever-larger container ships in order to realize greater economies of scale, including many ships that draft in excess of 40 feet. Furthermore, according to one of the economic experts we spoke with, significant growth in the chilled meat market could attract trade to Philadelphia and its extensive refrigerated warehouse infrastructure. To the extent that new cargoes and trade routes appear during the project's 50-year operation period, the Corps' analysis may understate project benefits for those commodities carried on vessels large enough to benefit from a 45-foot channel. However, these potential benefits would need to be analyzed by the Corps before they could be used to support the project's economic justification. This analysis would also need to assess the potential effect of an expansion of Delaware River trade in relation to other East Coast ports to ensure that any Delaware River benefits claimed are not merely transfers from those ports.
The Corps’ 2008 and 2009 economic updates do not account for the market and industry changes that have occurred since the completion of the reanalysis or verify certain benefit categories that were expected to develop by 2009. The two economic updates affirmed the level of expected benefits for each commodity and adjusted these estimates to reflect the current price level and discount rate. However, neither update analyzed the extent to which changes in, for example, the market for crude oil might affect the net benefits of the project. Such information would be useful to establish whether the changes have affected the Corps’ estimate of the project’s economic justification. Corps policy requires planners to report and maintain current estimates of project benefits and costs for all active funded projects in order to provide reasonable estimates of economic justification to Congress, federal decision makers, and local project sponsors. This policy requires economic updates for ongoing projects when more than 3 fiscal years have passed since the project’s last economic analysis. According to Corps guidance, economic updates do not require any major new analysis. Instead, they are limited to reviewing and updating previous assumptions, as well as limited surveying, sampling, and other techniques to develop a reasonable estimate of project benefits. The Corps’ 2008 economic update did not account for changed conditions and uncertainties related to the Corps’ commodity benefit estimates. According to Corps officials, the April 2008 economic update was developed internally for budgetary purposes and for establishing current project costs in preparation for the Army’s June 2008 project partnership agreement with PRPA. The update recapped the discussion of major benefit categories from the two documents that constitute the reanalysis and presented an additional few years of data on the volume of commodity imports. 
We believe that some of these updates would be useful to decision makers seeking to understand how the reanalysis’s forecasts had performed to date, but others would be less relevant. For example, the Corps validated its assumption of growth in blast furnace slag imports (and thus slag benefits) by using Waterborne Commerce Statistics Center data through 2005 to show that slag imports had exceeded the reanalysis’s growth forecast. However, the Corps also used the center’s data to show that crude oil imports had remained stable through 2005, but did not update the true constraint on long-term growth identified in the reanalysis—the Corps’ assumption of 0.2 percent annual growth in the area’s refinery capacity. For example, EIA’s 2006 Annual Energy Outlook forecasted a 0.4 percent long-term annual decline in East Coast refinery capacity, and its 2007 outlook forecasted no long-term change, but the Corps did not discuss either of these forecasts in its 2008 economic update or assess their potential effect on its crude oil benefit estimate. Neither did the Corps contact OSG to discuss the potential benefit-estimate implications of (1) the firm’s long-term contract with Sunoco and the new lightering vessels it ordered (both of which were reported publicly in 2005), or (2) OSG’s 2006 acquisition of the lightering firm whose operations were modeled by the Corps, which could have led to changes in the lightering operations that serve as the basis for the Corps’ model. The Corps’ 2008 update also did not resolve uncertainties related to some other benefit categories. For example, the Corps noted the healthy growth rate of container volumes overall for the Packer Avenue Marine Terminal for 2005 and 2006 but did not update the status of the weekly shipping services on the two trade routes that account for all containerized cargo benefits. 
Specifically, the Corps did not confirm that (1) containers were still being trucked from New York/New Jersey to Philadelphia on the South America trade route and (2) the expected rate of growth was occurring on the Australia/New Zealand trade route, which was projected to cause trucking to begin by 2009—both of which are necessary to realize any containerized cargo benefits. This information is especially vital given that the future status of the Australia/New Zealand trade route was identified by the reanalysis’s external independent review panel as the primary source of uncertainty in the Corps’ estimate of containerized cargo benefits. Furthermore, the Corps’ estimate of refined petroleum benefits depends in part on the benefiting petroleum firm’s construction of a new ship berth on the Delaware River that was due to be completed in 2007. The 2008 economic update did not discuss the status of this berth; according to a firm official, these improvements have not been made. Like the 2008 update, the Corps’ 2009 economic update reviewed commodity growth rates and adjusted benefit estimates to reflect new price levels and a lower discount rate. In addition, the update—completed by the Philadelphia district in December 2009, reviewed by the New England and New York districts, and approved by the North Atlantic division in January 2010—reduced the project’s construction cost estimate to reflect the latest engineering surveys of the amount of material needing to be dredged from the river channel. 
However, the 2009 update did not present any revised modeling, sensitivity analysis, or related adjustments to the benefit estimates to reflect changes to market and industry conditions and outlook for the Delaware River region—for example, by incorporating the lost refinery capacity at the Delaware City and Eagle Point facilities into its forecasts, or by revisiting the sensitivity analysis from the Corps’ 2002 report that analyzed the effect of negative growth for crude oil, both of which could have provided additional context for decision makers. Like the 2008 update, the 2009 update provided no updated information about the current status of the weekly shipping services on the two trade routes that account for all containerized cargo benefits. Moreover, the 2009 update reprinted the same steel slab import volumes from 2005 and 2006 that appeared in the 2008 update, which captured the 2006 peak in steel slab imports but ignored the precipitous decline from 2007 through 2009. In addition, the 2009 update presented 2 additional years of blast furnace slag import data (2006 and 2007), but did not discuss the 38 percent decline in slag imports from 2005 to 2007. The 2007 import total (529,000 tons) was just more than half of the 1 million tons that the reanalysis forecasted would occur by 2009; according to a U.S. Geological Survey official who studies the slag industry, a private trade database indicates that the 2009 import total was about 125,000 tons. Finally, like the 2008 update, the 2009 update did not revisit the Corps’ expectation that the benefiting petroleum firm’s new ship berth would be in place by 2007. The Corps’ 2009 update did reduce the project’s overall benefit estimate by 2.6 percent to remove benefits that were expected to be achieved prior to the completion of all segments of the deeper channel. 
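The slag import figures cited above can be cross-checked with simple arithmetic. Note that the 2005 baseline in this sketch is derived from the report’s own 38 percent decline figure and was not separately reported:

```python
# Cross-checking the blast furnace slag import figures cited above (tons).
FORECAST_2009 = 1_000_000   # volume the reanalysis forecast by 2009
ACTUAL_2007 = 529_000       # reported 2007 import total
ACTUAL_2009 = 125_000       # 2009 total per a private trade database

share_2007 = ACTUAL_2007 / FORECAST_2009   # "just more than half" of forecast
share_2009 = ACTUAL_2009 / FORECAST_2009   # one-eighth of forecast

# Implied 2005 baseline if 2007 represents a 38 percent decline from 2005
# (derived from the report's own figures, not separately reported):
implied_2005 = ACTUAL_2007 / (1 - 0.38)

print(f"{share_2007:.1%} / {share_2009:.1%} / ~{implied_2005:,.0f} tons in 2005")
```

The 2007 total works out to 52.9 percent of the forecast and the 2009 total to 12.5 percent, consistent with the magnitudes described in the text.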
In the reanalysis, the Corps stated that its construction schedule would allow benefits to be achieved at downriver facilities where deepening had already occurred before all upriver segments had been deepened. However, we observed—and Corps officials agreed—that the Corps’ revised construction schedule makes it impossible to achieve these benefits. After we shared our preliminary findings with the Corps in February 2010, the agency asked David Miller & Associates (DMA) to prepare a document that would provide us with additional information about the current status of Delaware River commerce to consider as we finalized our report. The resulting memorandum, reviewed by the Philadelphia district, discussed current trends in Delaware River commerce and identified changes in operations for relevant industries since the reanalysis was completed in 2004. DMA’s memorandum generally agreed with our findings regarding declines in crude oil, steel slab, and blast furnace slag imports. However, the memorandum concluded that other than short-term impacts of the recession, Delaware River import trends and industry changes have the potential to increase project benefits. According to DMA, this is because changes that would likely have a negative impact on project benefits, such as the reduction in crude oil imports, would likely be offset by increases in containerized cargo, refined petroleum, and steel imports. However, although the memorandum asserts that additional benefits and beneficiaries may be present, it does not include sufficient quantitative analysis to show how the changed conditions and outlook would likely affect the reanalysis’s commodity benefit estimates. 
For example, DMA’s memorandum acknowledges that (1) crude oil imports have declined in part because of competition from imports of refined petroleum products, such as gasoline, to East Coast ports; and (2) refined petroleum vessels typically do not lighter their cargo and therefore tend to arrive at the Delaware River with shallower drafts than crude oil vessels, which often engage in lightering. DMA suggested that a deeper channel could result in a shift to larger refined petroleum vessels that could make fewer trips to deliver the same volume of cargo. If so, DMA states that partial replacement of crude oil imports by refined petroleum imports may increase project benefits if the transportation cost savings of avoided refined petroleum vessel trips are greater than the cost savings associated with reduced crude oil lightering over the life of the project. Nonetheless, this partial revision of the reanalysis’s assumptions indicates that its crude oil and refined petroleum benefit estimates may no longer be reliable. Changed assumptions related to these benefit estimates—and those related to the estimates for containerized cargo, steel, and slag that also were presented in DMA’s memorandum—could affect each benefit estimate as well as the project’s overall net benefit estimate. We identified three key outstanding policy issues that could impact the construction of the Delaware River deepening project as it moves forward. Specifically, the Corps (1) lowered its estimate of the volume of dredged material, which eliminated the need for new disposal sites, but it continues to face resistance to its disposal plan; (2) was sued by Delaware and New Jersey in October and November 2009, respectively, which charged that the Corps lacks the environmental approvals needed to proceed with the project; and (3) has an ongoing dispute with New Jersey and several environmental groups over the project’s National Environmental Policy Act (NEPA) process. 
In the 2009 environmental assessment, the Corps lowered its 2002 estimate for the amount of material that would be dredged during the project’s 5-year initial construction period by 38 percent, from 26 million cubic yards of material dredged during initial construction to 16 million cubic yards. The estimate was lower because improved hydrographic survey technology showed less need for dredging in some portions of the river channel, nonfederal interests had conducted dredging in some portions of the channel, and higher sea levels have naturally deepened some portions of the channel. Unlike the estimate of dredged material for initial construction, the estimate for additional annual dredging to maintain a 45-foot channel, over the amount of dredging that would be required to maintain the 40-foot channel, remained unchanged—860,000 cubic yards per year, or 43 million cubic yards over the 50-year life of the project. The Corps’ lower estimate of dredged material for initial construction was independently validated in January 2009 by an engineering firm hired by PRPA, which, as the project’s local sponsor, is responsible for 25 percent of the cost of dredging and other aspects of construction. We found the firm’s approach to validating the dredged material estimate to be reasonable. The lower estimate for dredged material allowed the Corps to eliminate the three additional disposal sites in New Jersey that it had planned to add according to the reanalysis. In its 2009 environmental assessment, the Corps stated that it can account for all project-related dredged material at its existing disposal sites. The disposal sites are to receive the material dredged during initial construction as well as the material dredged during annual maintenance of the 45-foot channel. As we mentioned earlier, the Corps already uses the existing sites in Delaware and New Jersey to dispose of dredged material during annual maintenance cycles for the current 40-foot channel. 
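The dredging volumes above can be verified with straightforward arithmetic, using only the figures reported in this section:

```python
# Verifying the dredged material arithmetic reported for the project.
INITIAL_EST_2002 = 26_000_000   # cubic yards, 2002 initial-construction estimate
INITIAL_EST_2009 = 16_000_000   # cubic yards, 2009 revised estimate
ANNUAL_EXTRA = 860_000          # extra cubic yards/year to maintain 45 vs. 40 feet
PROJECT_LIFE_YEARS = 50

reduction = (INITIAL_EST_2002 - INITIAL_EST_2009) / INITIAL_EST_2002
lifetime_maintenance = ANNUAL_EXTRA * PROJECT_LIFE_YEARS

print(f"{reduction:.0%} reduction; {lifetime_maintenance:,} cubic yards of "
      f"added maintenance dredging over {PROJECT_LIFE_YEARS} years")
# 38% reduction; 43,000,000 cubic yards of added maintenance dredging over 50 years
```

Both the 38 percent reduction in the initial-construction estimate and the 43 million cubic yards of lifetime incremental maintenance dredging follow directly from the reported figures.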
By using only its existing disposal sites, the Corps expects to reduce project costs by forgoing land expenditures and construction costs related to the new sites. The Corps has accounted for these plans in a revised disposal cost estimate in its 2009 economic update. When it revised its dredged material estimate for the deepening project’s initial construction in the 2009 environmental assessment, the Corps also reduced the beneficial uses of Delaware Bay dredged sand from three projects to two. A third beneficial use project included in the reanalysis would have restored wetlands at Egg Island Point, New Jersey. However, Corps officials told us that the agency decided to defer the project in part because the Corps no longer expects to dredge enough sand in the bay portion of the deepening project to supply all three sites. Despite reductions in the dredged material estimate and the number of disposal sites needed, the Corps’ disposal plan remains a point of contention. Specifically, New Jersey is opposed to receiving any dredged material from the deepening project because it believes that the Corps has not adequately sampled and analyzed the material. Furthermore, New Jersey officials believe that the material could contain polychlorinated biphenyls (PCBs) and other toxins that could contaminate the state’s water supply, harm marine life, and pose a risk to disposal site employees. The Corps disagrees with this assertion, maintaining that the incremental additional dredged material, from 40 feet to 45 feet, is similar to the material dredged during annual maintenance of the 40-foot channel, which is deposited each year at the same disposal sites in New Jersey. The Corps contends that based on its sediment testing, the dredged material contains no harmful levels of contamination and will have no impact on water quality. 
New Jersey officials question the sufficiency of this sediment testing, asserting that the Corps’ testing is outdated and did not include sediment in the project’s new work areas—channel bends, channel widenings, and the channel bottom below 40 feet—which are not dredged during the Corps’ annual maintenance of the channel. A 2007 agreement between the governors of New Jersey and Pennsylvania has also added to the controversy over the placement of the project’s dredged material. According to a letter from the governor of New Jersey to the Corps, the agreement specified that dredged material resulting from any deepening would be deposited entirely in Pennsylvania, not in New Jersey. Conversely, in separate letters from the governor of Pennsylvania to the Corps and to the governor of New Jersey, Pennsylvania interpreted the agreement to mean that Pennsylvania would be the final repository for all of the material unwanted by New Jersey or Delaware that could be used for beneficial purposes in Pennsylvania, but that the material could be initially deposited and drained in federal disposal sites in New Jersey and Delaware before being moved to Pennsylvania. Additionally, while both New Jersey and Pennsylvania agreed in 2007 to the formation of a committee to identify sites for the disposal of the material, they have not yet formed this committee. Although the Corps and PRPA were not involved in the governors’ agreement, Corps officials told us that while they are open to an alternative disposal plan in general, any new disposal plan would have to be at least as safe as the current plan and result in no additional costs to the agency. Moreover, Corps officials stated that the project’s benefits and costs would need to be reassessed to ensure economic justification if under an alternative disposal plan (1) the dredged material were first placed in New Jersey and later moved to Pennsylvania or (2) all the material went directly to Pennsylvania. 
They also noted that an alternative plan could result in another needed round of project approvals by Congress. However, Corps officials also told us that if Pennsylvania agreed with New Jersey to remove the dredged material from New Jersey sites at a later date, then the Corps would not consider this agreement to be part of the deepening project. Further, Corps officials said the later activity would have to be a “100 percent nonfederal expense” and would not affect the overall cost of the project. The Corps and the states of Delaware and New Jersey disagree on the need for additional environmental approvals related to the deepening project, and this is currently the subject of litigation. In 1997 the Corps obtained letters from both states concurring that the project is consistent with each state’s coastal resource management policies. Under the Coastal Zone Management Act, a federal agency must carry out its activities consistent to the maximum extent practicable with the enforceable policies of approved state management programs. In states with federally approved coastal zone management programs—such as Delaware and New Jersey—a federal agency that undertakes a project in the coastal zone must provide a certification to that state that the project is consistent with the state’s program. If a state deems the project consistent with the state’s policies, the state issues a consistency “concurrence.” However, in 2002, New Jersey informed the Corps that the state was revoking its consistency determination, citing substantial changes in the project’s economic analyses and unresolved environmental issues. According to New Jersey officials, these issues include state requests for updated sediment sampling and analyses, as well as surface and groundwater monitoring reports, as described in a memorandum of understanding that accompanied the state’s 1997 consistency concurrence. 
Additionally, in a 2009 letter to the Corps, Delaware asked for additional coordination on its consistency concurrence issued in 1997, citing substantial project modifications over the previous 10 years. The Corps disagrees with the states’ positions on the consistency concurrences. First, Corps officials told us that they have the necessary concurrence letter on file from New Jersey. While New Jersey asserted that it “revoked” this concurrence, the National Oceanic and Atmospheric Administration, which administers the coastal zone management program, advised New Jersey that a state may not revoke a concurrence, noting an exception where the project has not begun and the effects are substantially different than previously reviewed. Similarly, with respect to Delaware, the Corps’ position is that the state already concurred with the Corps’ consistency determination. In November 2009 the Corps determined that supplemental coordination was not required for either state’s concurrence, because it found that the project changes were not substantial and that the changed circumstances were not significant. In addition, in 2001 the Corps applied for a subaqueous lands and wetlands permit from the state of Delaware. Under that state’s law, dredging in subaqueous lands or wetlands requires a permit. In comments on our 2002 report, the Under Secretary of the Army stated that the Corps “could not, and would not, proceed to construction without Subaqueous Lands/Wetlands Permit,” a position that the Under Secretary noted was a provision of the project cooperation agreement with the project’s original local sponsor (Delaware River Port Authority). In 2003 a hearing officer for Delaware’s Department of Natural Resources and Environmental Control recommended that the department deny the permit, citing the need for additional information. 
According to the Corps, it made several attempts to provide additional information to Delaware in the years following the hearing officer’s recommendation. However, a senior Delaware official told us that this information could have been accepted only as part of a new application because the record on which the department’s decision would be based had been closed. When the Army entered into a new project partnership agreement with PRPA in 2008, it reserved the right to determine whether the Delaware state permit was required as a matter of federal law, and presumably to move forward with the project if it determined the permit not to be required. In July 2009 Delaware’s Department of Natural Resources and Environmental Control denied the Corps’ request for the permit—finding that the Corps failed in its 2001 application to demonstrate that adverse environmental effects resulting from the project had been minimized, and that the record was outdated given the significant changes to the project as well as additional information developed since 2001. Subsequently, the Corps has argued that, under a provision of the Clean Water Act, the agency can assert federal supremacy and avoid compliance with the relevant state law because the Assistant Secretary of the Army for Civil Works found that regulation under such law impaired the Corps’ authority to maintain navigation. In summer 2009 the Corps solicited construction bids for dredging the first segment of the project. In response to the Corps’ statements and actions, in fall 2009, Delaware, New Jersey, and several environmental groups filed separate lawsuits against the Corps in U.S. district courts in Delaware and New Jersey. Among other things, the states and environmental groups are seeking a halt to the project until the Corps complies with all legal requirements, including obtaining relevant concurrences and permits. However, a U.S. 
district court recently allowed the Corps to proceed with deepening of the first river segment, denying in part Delaware’s motion for preliminary injunction. The judge also granted Delaware’s motion in part, ruling that the Corps cannot proceed with the rest of the project pending resolution of the lawsuit or further order of the court. The judge stated her opinion that, notwithstanding the ruling, the project “should be completed, consistent with congressional intent.” In reaching the decision, the court did not make a final ruling on Delaware’s claims, but concluded that the state was unlikely to prevail on a majority of its claims, while finding the Corps’ record lacking with respect to one claim. According to the court, its decision “gives the parties the opportunity to satisfy their respective obligations to govern responsibly.” The environmental groups who intervened in the case have appealed the ruling. On February 23, 2010, the Corps announced it had awarded a contract to deepen the first segment of the project, and on March 1 this work began. In the meantime, the district court case, as well as the pending New Jersey and environmental groups’ cases, is proceeding. The Corps’ 2009 environmental assessment for the Delaware River deepening project was controversial and has been challenged in court on several grounds. Specifically, New Jersey officials and several environmental groups have separately claimed that the assessment is not the appropriate mechanism for updating the last major environmental analysis of the project—the 1997 Supplemental Environmental Impact Statement (SEIS)—because, in their view, applicable regulations require the Corps to prepare another SEIS to account for project and environmental conditions that they contend have changed significantly since 1997. Generally, an environmental assessment involves a less detailed analytical process than other NEPA documents, such as an Environmental Impact Statement (EIS) or SEIS. 
Instead, it is intended to be a concise document that provides sufficient evidence and analysis for determining whether to prepare an EIS or SEIS. In commenting on a draft of this report, the Department of Defense noted that it has followed the regulations concerning the NEPA documents. Specifically, the stated purpose of the environmental assessment included evaluating the impacts of changes to the deepening project, as well as changes to the existing conditions in the project area from those described in the 1992 EIS and 1997 SEIS. On this basis, the Corps concluded that none of the changes to the proposed project were substantial and there were no new circumstances or information that could be considered significant, and therefore determined that an SEIS was not required. According to New Jersey and the environmental groups, the environmental assessment overlooked certain elements of the project, relied on outdated information, and did not sufficiently explore all of the potential adverse impacts from the project. For example, they believe additional and updated sediment sampling and analyses are needed to fully characterize the materials to be dredged in the deepening project. As a result of these concerns, New Jersey and the environmental groups are now asking a U.S. district court, as part of the lawsuits they filed in fall 2009, to order the Corps to issue a new SEIS before proceeding with the project. In this regard, the Corps’ process for public comments on the deepening project has also been criticized. On December 17, 2008, the Philadelphia district, via a public notice, solicited comments from stakeholders concerning environmental changes as well as project changes since the 1997 SEIS, such as changes to the amount of estimated dredged material and the elimination of new disposal sites. The Corps’ notice indicated that all comments should be made by December 31, 2008. 
Among other things, environmental groups criticized the Corps for not giving stakeholders sufficient time for commenting on these changes and for scheduling the comment period over a major holiday period. Following these criticisms, the Corps extended the public comment period by 2 weeks. The public notice also did not explicitly inform the public that their comments would be used to prepare an environmental assessment. Instead, the notice asked the public for comments related to a summary of project changes and to identify any applicable existing and new information generated subsequent to the 1997 SEIS, to be used to update the environmental record and to determine whether further environmental work and analyses would be needed. Owing to both the abbreviated response period and the confusion over the public notice’s purpose, the environmental groups we spoke to stated that some potential respondents may not have commented, and comments the Corps did receive may not have been comprehensive. The environmental groups also contend that the Corps should have circulated a draft of the environmental assessment for public comment. There was professional disagreement between the Corps and the Army concerning whether a comment period for the draft environmental assessment was necessary. Specifically, in March 2009 the Corps’ Director of Civil Works asked permission from the Assistant Secretary of the Army for Civil Works to circulate the draft environmental assessment for public comment before it was issued in final form. In his request, the director identified several reasons why circulation of the draft assessment was advisable. 
The Assistant Secretary of the Army, however, denied the request, disagreeing with the rationale and focusing on the finding that circulation was not legally required—maintaining that the initial notice and comment period constituted a sufficient amount of public participation and that there was no legal requirement for additional public involvement. While Corps officials in the Philadelphia district told us that Corps guidance does not direct the agency to provide a public comment period for draft environmental assessments, they could not identify other environmental assessments that the district had issued without first circulating the draft for public comment. NEPA regulations emphasize public involvement through mechanisms such as public comment because the law’s purpose, in part, is “to require disclosure of relevant environmental considerations that were given a ‘hard look’ by the agency, and thereby to permit informed public comment on proposed action and any choices or alternatives that might be pursued with less environmental harm.” The Corps has had the difficult task of developing benefit and cost estimates for the Delaware River deepening project that are based on what may occur over a 50-year period of analysis—a period that begins only after 5 years of channel dredging have been completed. For such a project, economic uncertainties associated with making projections about future conditions are important to consider because expectations about future market conditions and benefits often may not be realized. As the Corps’ policies recognize, analyzing uncertainties can help decision makers judge whether a project would be warranted under a range of economic conditions. The Corps’ reanalysis has provided a more solid foundation for estimating the project’s benefits and costs and has used sensitivity analysis to analyze the uncertainties associated with several key assumptions. 
However, since the reanalysis was completed, market and industry conditions have changed significantly in ways that raise questions about the Corps’ project benefit estimates going forward. While some of these changes could be short-term trends, others could have longer-lasting impacts. Such changes create additional uncertainties about the deepening project. In some cases, such as blast furnace slag, the changes affect a small portion of the project’s estimated benefits, but in other cases, such as crude oil, containerized cargo, and steel slabs, the changes are associated with commodities that make up most of the project’s estimated benefits. A key purpose of the Corps’ periodic economic updates is to analyze these uncertainties by collecting enough additional information to ensure that decision makers are presented with reasonable and timely estimates and that the project is warranted under a range of economic conditions. Because the Corps’ economic updates have not accounted for the potentially significant impact that some market and industry trends could have on the project’s estimated benefits, federal decision makers do not have the most current information about the project, including whether adjustments to the assumptions in the Corps’ benefit models are necessary. Such information would help decision makers more fully assess the project’s economic justification. Noneconomic aspects of project implementation can also add to uncertainties about the project. A key area of such uncertainty is the outcome of the legal challenges to the project’s environmental approvals and compliance. 
In particular, the Corps has made several decisions—such as soliciting information from the public over the winter holiday and then, following Army direction, not seeking public comment on the draft environmental assessment—that have exacerbated public concerns over environmental issues. As a result, its communications with the public regarding its actions have not been as open as might have been advisable for such a controversial project. To better ensure that decision makers have the most current information about changes that could affect the benefits of the Delaware River deepening project, we recommend that the Secretary of Defense direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to provide an updated assessment to the Assistant Secretary of the Army for Civil Works, and to Congress, of relevant market and industry trends and outlook that specifies the extent to which the data and assumptions underlying each benefit category have changed, and the effect of any changes on each benefit estimate and the project’s net benefit estimate. This assessment should be issued as a public document and become part of the project’s official record. To improve consistency and transparency in how the Corps handles public participation in the development of environmental documents that are related to controversial projects and that the Corps believes have no applicable NEPA requirement, we recommend that the Chief of Engineers develop guidance on the appropriate timing and approaches for public notice and comment on such documents. We provided a draft of this report to the Department of Defense for review and comment. The department generally agreed with the recommendations in our report. 
Specifically, the department concurred with our recommendation that the Corps provide an updated assessment of relevant market and industry trends and outlook that specifies the extent to which data and assumptions underlying each benefit category have changed and the effect of these changes on project benefit estimates. The department agreed to have the Corps prepare an updated quantitative assessment that would incorporate the long-term trend in the economy over the project’s 50-year planning period. In addition, the department partially concurred with our recommendation that the Corps develop guidance on the appropriate timing and approaches for public notice and comment on environmental documents that are related to controversial projects and that the Corps believes have no applicable National Environmental Policy Act requirement. The department agreed that the Army will review and evaluate the need for clarifying guidance regarding whether or when a draft Corps Civil Works environmental assessment (EA) and finding of no significant impact (FONSI) should be circulated for public comment before they are finalized. The department noted, however, that it has no reason to believe that its existing regulations and guidance regarding this subject are defective or in need of modification. While there are regulations addressing the typical scenario where an environmental assessment is the first NEPA document developed (e.g., there is no EIS previously prepared), we believe that no Corps guidance exists for the less common scenario where a relatively old EIS or supplemental EIS already exists for a project that has not yet been constructed, as was the situation in 2009 when the Corps prepared its EA for the deepening project. The department acknowledged that it would be beneficial to issue clarifying guidance for conducting an EA in such a scenario. The department’s official comments are presented in appendix III. 
We also received technical comments from the department, which we have incorporated as appropriate throughout the report. In addition, we invited Delaware, New Jersey, and Pennsylvania to comment on draft report excerpts discussing issues relevant to each state. We received comment letters from New Jersey and Pennsylvania, which we present in appendixes IV and V, respectively. We also received technical comments from all three states, which we incorporated as appropriate throughout the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Defense, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our objectives were to determine (1) the extent to which the U.S. Army Corps of Engineers’ (Corps) reanalysis addressed the economic analysis limitations we identified in 2002; (2) the extent to which the benefit projections the Corps included in its reanalysis of the project, as updated, are consistent with current and anticipated future market and industry conditions; and (3) what other key issues, if any, could affect the project, and the extent to which the Corps has accounted for these issues and their potential impacts. 
To determine the extent to which the Corps’ reanalysis addressed the economic analysis limitations we identified in 2002, we reviewed our 2002 report to identify the key limitations we had found in the Corps’ 1998 analysis. These limitations ranged from errors in benefit and cost estimation—such as the misapplication of commodity growth rates and the omission of disposal site construction costs for future channel maintenance dredging—to concerns about the Corps’ treatment of economic uncertainty and the lack of internal quality control in the Corps’ report review process. We then reviewed the Corps’ reanalysis—the 2002 Comprehensive Economic Reanalysis Report and the 2004 Supplement to Comprehensive Economic Reanalysis Report—and supporting documents, and assessed the extent to which the reanalysis generally addressed the limitations we had identified earlier consistent with standard economic principles for conducting a benefit-cost analysis. The supporting documents we reviewed included detailed quantitative analyses for each of the five benefiting commodities included in the reanalysis—crude oil, containerized cargo, steel slabs, blast furnace slag, and refined petroleum—as well as the Corps’ calculation of benefits for reuse of dredged sand at Broadkill Beach. We also reviewed documents that provided greater context for the Corps’ reanalysis, such as the agency’s official comments in response to the findings and recommendations in our 2002 report, and two subsequent internal documents that updated and further explained the reanalysis’s benefit and cost assumptions—the Corps’ 2008 and 2009 economic updates. We interviewed Corps officials at the Philadelphia district with primary responsibility for the reanalysis’s benefit-cost analysis to gain further understanding of the steps taken by the Corps to address the limitations. We also discussed the reanalysis with officials from the Corps’ North Atlantic division and its headquarters in Washington, D.C. 
For further information about the reanalysis, we interviewed the primary economic consultant for the reanalysis from David Miller & Associates (DMA), the firm that the Corps hired to prepare key parts of the reanalysis, including an updated analysis of project benefits and associated costs. Finally, we presented our findings related to each limitation in a table for the Corps to review, at which time we requested additional information and documentation for certain items, as appropriate. In addition, we discussed the Corps’ analyses with an academic expert who has analyzed the lightering and crude oil industries in the Delaware River. To determine the extent to which the benefit projections the Corps included in its reanalysis of the project, as updated, are consistent with current and anticipated future market and industry conditions, we attempted to verify key data and assumptions underlying the key benefit categories in the reanalysis’s 2002 report and 2004 supplement, as well as the Corps’ 2008 and 2009 economic updates, using data on the general trends since the Corps conducted its reanalysis, current conditions, and the expected outlook for relevant Delaware River imports and industries. For crude oil, we used data on imports to Delaware River ports collected by the Department of Energy’s Energy Information Administration (EIA). Importers of crude oil and petroleum products are required to report on a monthly basis to EIA. To assess the reliability of these data, we reviewed existing agency information about the data and the agency’s data quality procedures, and we interviewed agency officials knowledgeable about the data. We used information from the Department of Commerce and industry sources to corroborate the general historical trend exhibited in the EIA import data. We note that EIA’s import data for 2009 are preliminary and may be revised. We determined that the EIA data are sufficiently reliable for the purposes of this report. 
We also reviewed EIA’s Annual Energy Outlook forecasts for U.S. crude oil imports and refinery capacity on the East Coast; EIA forecasts were a primary source for the Corps in developing the reanalysis’s crude oil benefit estimate. To assess the reasonableness of these forecasts, we reviewed supporting documentation on the approach and key assumptions and we interviewed knowledgeable EIA officials to discuss possible reasons for observed declines in historical imports and changes in the agency’s forecast for crude oil refinery capacity and imports. We note that EIA may revise its forecasts over time as new information becomes available. To further assess trends and outlook in the Delaware River crude oil industry, we interviewed officials from the three refinery firms that own the six Delaware River refinery facilities included in the Corps’ reanalysis, as well as representatives of Overseas Shipholding Group, which conducts the lightering operations for those firms, to discuss their past and present crude oil-related operations. We also interviewed EIA officials and an academic expert knowledgeable about oil markets. To assess current conditions and outlook for containerized cargo and steel slab imports, we reviewed Corps data in the reanalysis and subsequent economic updates and we interviewed the logistics provider for the Packer Avenue Marine Terminal. Although we obtained information on containerized cargo import trends, we were unable to obtain data with which to verify key assumptions that the Corps used to support its containerized cargo benefit estimate. 
Specifically, we could not confirm that (1) containers were still being trucked from the port of New York/New Jersey to Philadelphia for the weekly service on the South America trade route and (2) the expected rate of growth was occurring for the weekly service on the Australia/New Zealand trade route, which was supposed to cause trucking to begin by 2009—both of which are necessary to realize any containerized cargo benefits. For information on blast furnace slag imports, we reviewed annual reports by the U.S. Geological Survey on the slag industry in the United States and we interviewed a U.S. Geological Survey official who is knowledgeable about the slag industry. We believe that the information is sufficiently reliable for the purposes of this report. For details about the operational status of the reanalysis’s sole refined petroleum beneficiary, we interviewed a representative of Magellan LP, the firm that acquired the benefiting petroleum terminal identified in the reanalysis. For additional background on all commodities, we reviewed historical import data from several additional sources, including the Corps’ Waterborne Commerce Statistics Center, the U.S. Department of Agriculture, and the U.S. Department of Commerce. To determine what other key issues, if any, could affect the project, and the extent to which the Corps has accounted for these issues and their potential impacts, we reviewed the limitations that we had identified in our 2002 report to develop a list of key noneconomic concerns for further examination. This included the Corps’ handling of environmental policy issues, such as its pursuit of a subaqueous lands permit from Delaware for dredging in that state’s waters. 
Similar to the methodology described in our first objective, we used these previously identified concerns as criteria for reviewing the reanalysis and other key documents, as well as the Corps’ 2009 environmental assessment, to determine whether and how each issue was addressed by the Corps. We also requested from the Corps all comment letters received in response to its public request for information in advance of its 2009 environmental assessment. These letters—from federal, state, and local agencies; environmental groups; and private citizens—detailed concerns about the project’s potential impacts and changes in the project area since the Corps’ 1997 supplemental environmental impact statement (SEIS). We reviewed these letters, and the content analysis that the Corps prepared to summarize them, in order to gain an understanding of prominent issues and controversies associated with the project. Throughout our review we also read local media accounts of these issues and controversies. Further, we reviewed correspondence and legal filings related to Delaware’s, New Jersey’s, and regional environmental groups’ ongoing disputes with the Corps over environmental approvals for the deepening project. We discussed the project with several regional environmental groups, including some that were involved in lawsuits to stop the project. Finally, once we had determined key policy and legal issues affecting the project, we discussed these issues with the Corps and requested more information and documentation of the Corps’ plans where necessary. We also asked representatives of the three states likely to be most affected by the project—Delaware, New Jersey, and Pennsylvania—to review our interpretation of these issues to the extent that it was relevant to each state. 
For all three objectives, we consulted experts in the fields of economics and lightering, environmental groups with an interest in the project, representatives of firms likely to be affected both positively and negatively by the project, and the Philadelphia Regional Port Authority (PRPA), the project’s local sponsor. Where we obtained other analyses or external studies, we considered the contents of these studies but conducted our own independent review. For example, the Corps’ reduced estimate of dredged material from initial construction was independently validated in January 2009 by an engineering firm hired by PRPA, which, as the project’s local sponsor, is responsible for 25 percent of the cost of dredging and other aspects of construction. We reviewed the firm’s approach to validating the revised dredged material estimate and found it to be reasonable. We conducted this performance audit from March 2009 through March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. It has now been more than 7 years since the Corps asked the refineries about changes to their facilities. Since the reanalysis, some refinery facilities have undergone significant structural and operational changes that could affect the associated costs of the project, which are the private costs that would need to be incurred, in addition to project costs, to achieve the project’s full benefits. Associated costs account for about 10 percent of the project’s total economic first costs. Specifically: Associated costs could be lower. 
According to one refinery, a tanker dock at one of its facilities was completely rebuilt in 2008 to address structural problems. In anticipation of a deepened channel, the new dock was constructed to accommodate tankers needing a depth of 45 feet. Since this work was undertaken after the Corps’ reanalysis, the project’s associated costs could be lower than the reanalysis initially predicted. In 2002 the Corps estimated that these modifications would cost $3.6 million. In addition, the recent closure of Sunoco’s Eagle Point facility, if permanent, could decrease associated costs because no modifications would need to be made at this facility. In 2002 the Corps estimated that it would cost $362,000 to modify this facility. Associated costs could be higher. Refinery officials expressed concerns about the availability of private disposal space for dredged material, which could be costly. According to PRPA, private dredging and disposal costs have risen since the time of the reanalysis due to higher fuel costs, among other factors. If the disposal cost for dredged material is significantly more now than it was in 2002, the project’s associated costs could increase. Officials from all three refinery firms told us that they supported the deepening project. However, they also told us that they would need to analyze the project’s benefits and costs for their firms to determine whether they would commit to making the improvements necessary to take advantage of the project. These improvements could be substantial: deepening their ship berths, retrofitting their docks, or expanding their storage capacity. As our discussions with refinery officials suggest, firms are not likely to commit to the modifications needed to realize project benefits until they have conducted their own financial analysis of the benefits they would gain. If the firms decided against making the necessary modifications, then the project’s benefits could be lower than initially estimated. 
These decisions are particularly important in light of the Army’s project partnership agreement with PRPA. The agreement specifies that PRPA is responsible for ensuring that the local facilities undertake the modifications necessary to take advantage of the deepening project. However, this agreement does not require PRPA to produce third-party agreements with these potential beneficiaries as evidence of their commitment before project construction could proceed. In contrast, under the agreement with the project’s original local sponsor (the Delaware River Port Authority), the local sponsor had to provide copies of third-party agreements as evidence of local facilities’ commitment to make the modifications necessary to realize project benefits. Corps officials said that in the time between the signed agreements with the Delaware River Port Authority and PRPA, the model project partnership agreement, developed by Corps headquarters, changed so that provisions for third-party agreements are no longer included. Nevertheless, according to Corps officials with whom we spoke, the agency expects benefiting firms will modify their facilities once project construction begins. The Corps assumes that the beneficiaries will make separate arrangements with the Corps’ dredging contractor while the contractor is working in each beneficiary’s section of the river. By using the Corps’ contractor, the beneficiaries could save certain dredging costs, such as those related to the transfer of equipment to and from the site and the installation and removal of pipelines. In addition to the individual listed above, Vondalee R. Hunt (Assistant Director), Elizabeth Beardsley, David Brown, Laurie Ellington, and Timothy Guinane made significant contributions to this report. Michael Armes, Sara Daleski, Terrance Horner, Richard Johnson, Armetha Liles, Christopher Murray, Lauren Nunnally, Katherine Raheb, Carol Shulman, Vasiliki Theodoropoulos, and Eugene Wisnoski also made key contributions. 
In 1992 Congress authorized the U.S. Army Corps of Engineers (Corps) to implement the Delaware River deepening project, which would deepen the river's shipping channel from 40 to 45 feet. In 2002 GAO reviewed the Corps' economic analysis of the project, concluding that it contained significant limitations. GAO recommended that the Corps prepare a comprehensive economic reanalysis, which the Corps completed in 2004. GAO was asked to determine the extent to which (1) the reanalysis addressed the limitations GAO identified; (2) the reanalysis's benefit projections, as updated, reflect current and anticipated market and industry conditions; and (3) the Corps has accounted for other key issues that could affect the project. GAO reviewed Corps project documentation and interviewed federal officials along with representatives of affected states, firms, and environmental groups. The Corps' reanalysis addressed many of the limitations GAO had identified in 2002 in the Delaware River deepening project's original economic analysis by using updated information to correct invalid assumptions and outdated data, recalculating benefits and costs to correct miscalculations, and accounting for some of the economic uncertainty associated with the project. For example, the Corps revised its benefit estimates for transportation cost savings related to such commodities as crude oil, containerized cargo, and steel slabs. In addition, as GAO recommended, the Corps had independent experts review the reanalysis. Although the Corps' efforts were responsive overall to GAO's 2002 recommendations, GAO identified several additional limitations in the reanalysis. For example, in its analysis of economic uncertainty, the Corps considered the effects of negative-growth scenarios only for crude oil and refined petroleum, but not for the remaining commodities. 
In the 6 years that have elapsed since the Corps completed its reanalysis, current and anticipated future market and industry conditions have changed significantly. Several of the assumptions that underlie the Corps' estimates of the project's benefits are inconsistent with these changes. For example, the Department of Energy has lowered its long-term forecasts for growth in East Coast refinery capacity and U.S. imports of crude oil. Also, in the fall of 2009, Delaware River refinery firms closed two major facilities. Further, steel imports have declined since 2006 according to the benefiting facility identified in the reanalysis, and were well below the reanalysis's growth projection for 2009. However, the Corps' 2008 and 2009 economic updates for the project did not analyze the potential effect of these changes on the project's benefit estimates. The updates also did not determine the current status of shipping services on two trade routes that provide all of the benefits related to containerized cargo. Because of these and other omissions, decision makers do not have sufficient updated information to judge the extent to which market and industry changes would affect the project's net benefits. GAO identified three key outstanding issues that could affect the Delaware River deepening project. First, the Corps lowered its estimate of the volume of dredged material, which eliminated the need for new disposal sites in New Jersey, but its disposal plan continues to face resistance from that state. Second, Delaware, New Jersey, and several environmental groups filed separate lawsuits against the Corps in the fall of 2009, charging that the Corps lacks the environmental approvals needed to proceed with the project, among other concerns. Finally, New Jersey and several environmental groups have challenged in court the Corps' National Environmental Policy Act (NEPA) process for the project. 
Although the Corps completed an environmental assessment (EA) in April 2009, stakeholders believe that the process for soliciting public comment on its scope was unclear and did not allow enough time for comment, and that a new supplemental environmental impact statement is needed. Also, at the Army's direction, the Corps did not provide a public comment period for the draft EA as it had proposed to do.
In an effort to strengthen homeland security following the September 11, 2001, terrorist attacks on the United States, President Bush issued the National Strategy for Homeland Security in July 2002 and signed legislation creating DHS in November 2002. The strategy set forth the overall objectives, mission areas, and initiatives to prevent terrorist attacks within the United States; reduce America’s vulnerability to terrorism; and minimize the damage and assist in the recovery from attacks that may occur. DHS, which began operations in March 2003, represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. Although the National Strategy for Homeland Security identified that many other federal departments (and other nonfederal stakeholders) are involved in homeland security activities, DHS has the dominant role in implementing the strategy. The strategy identified 6 mission areas and 43 initiatives. DHS was designated as the lead federal agency for 37 of the 43 initiatives, and has activities under way in 40 of the 43 initiatives. The Homeland Security Act of 2002, which created DHS, represented a historic moment of almost unprecedented action by the federal government to fundamentally transform how the nation thinks of homeland security, including how it protects itself from terrorism. Also significant was the fact that many of the 22 agencies brought together under DHS were not focused on homeland security missions prior to September 11, 2001. Rarely in the country’s past had such a large and complex reorganization of government occurred or been developed with such a singular and urgent purpose. The creation of DHS represented a unique opportunity to transform a disparate group of agencies with multiple missions, values, and cultures into a strong and effective cabinet department whose goals are to, among other things, protect U.S. 
borders and infrastructure, improve intelligence and information sharing, and prevent and respond to potential terrorist attacks. Together with this unique opportunity, however, came a significant risk to the nation that could occur if the department’s implementation and transformation efforts were not successful. Mission areas designated as high risk have national significance, while other areas designated as high risk represent management functions that are important for agency performance and accountability. The identified areas can have a qualitative risk that may be detrimental to public health or safety, national security, and economic growth, or a fiscal risk due to the size of the program in question. Examples of high-risk areas include federal governmentwide problems, like human capital management; large programs, like Social Security, Medicaid, and Medicare; and more narrow issues, such as contracting at a specific agency. The DHS transformation is unique in that it involves reorganization, management, and program challenges simultaneously. We first designated DHS’s transformation as high risk in January 2003 based on three factors. First, DHS faced enormous challenges in implementing an effective transformation process, developing partnerships, and building needed management capacity because it had to effectively combine 22 agencies with an estimated 170,000 employees into one department. Second, DHS faced a broad array of operational and management challenges that it inherited from its component legacy agencies. For example, many of the major components that were merged into the department, including the Immigration and Naturalization Service, the Transportation Security Administration, the Customs Service, the Federal Emergency Management Agency, and the Coast Guard, brought with them existing challenges in areas such as strategic human capital, information technology, and financial management. 
Finally, DHS’s national security mission was of such importance that the failure to effectively address its management challenges and program risks could have serious consequences for our intergovernmental system, the health and safety of our citizens, and our economy. Our prior work on mergers and acquisitions, undertaken before the creation of DHS, found that successful transformations of large organizations, even those faced with less strenuous reorganizations than DHS, can take years to achieve. Because more progress was needed in its transformation efforts, DHS’s implementation and transformation remained on our high-risk list in our 2005 update and again in 2007. Further, in November of 2006, we provided the congressional leadership a listing of government programs, functions, and activities that warrant further congressional oversight. Among the issues included were DHS integration and transformation efforts. Managing the transformation of an organization of the size and complexity of DHS requires comprehensive planning, integration of key management functions across the department, and partnering with stakeholders across the public and private sectors. DHS has made some progress in each of these areas, but much additional work is required to help ensure sustainable success. Apart from these integration efforts, however, a successful transformation will also require DHS to follow through on its initial actions of building capacity to improve the management of its financial and information technology systems, as well as its human capital and acquisition efforts. Thorough planning is important for DHS to successfully transform and integrate the management functions of 22 disparate agencies into a common framework that supports the organization as a whole. Our past work has identified progress DHS has made in its planning efforts. 
For example, the DHS strategic plan addresses five of six Government Performance and Results Act required elements and takes into account its non-homeland security missions, such as responding to natural disasters. Furthermore, several DHS components have developed their own strategic plans or strategic plans for missions within their areas of responsibility. For example, U.S. Immigration and Customs Enforcement (ICE) has produced an interim strategic plan that identifies its goals and objectives, and U.S. Customs and Border Protection (CBP) developed a border patrol strategy and an anti-terrorism trade strategic plan. However, deficiencies in DHS’s planning efforts remain. A DHS-wide transformation strategy should include a strategic plan that identifies specific budgetary, human capital, and other resources needed to achieve stated goals. The strategy should also involve key stakeholders to create a shared understanding of goals and priorities. DHS’s existing strategic plan lacks these linkages, and DHS has not effectively involved stakeholders in the development of the plan. DHS has also not completed other important planning-related activities. For example, some of DHS’s components have not developed adequate outcome-based performance measures or comprehensive plans to monitor, assess, and independently evaluate the effectiveness of their plans and performance. Integrating core management functions like financial, information technology, human capital, and procurement is also important if DHS is to transform itself into a cohesive, high-performing organization. However, DHS lacks a comprehensive management integration strategy with overall goals, a timeline, appropriate responsibility and accountability determinations, and a dedicated team to support its management integration efforts. 
In 2005, we recommended that DHS establish implementation goals and a timeline for its management integration efforts as part of a comprehensive integration strategy, a key practice to help ensure success for a merger or transformation. Although DHS has issued guidance and plans to assist management integration on a function-by-function basis, it has not developed a plan that clearly identifies the critical links that should occur across these functions, the necessary timing to make these links occur, how these interrelationships will occur, and who will drive and manage them. In March 2007 testimony before the House Homeland Security Committee, DHS’s Under Secretary for Management supported our recommendation on the need for a comprehensive management integration strategy for the department. The Under Secretary stated that he was reviewing DHS’s progress against its individual plans and guidance for its management functions that would be part of such a comprehensive strategy. In addition, although DHS had established a Business Transformation Office that reported to the Under Secretary for Management to help monitor and look for interdependencies among the individual functional management integration efforts, that office was not responsible for leading and managing the coordination and integration itself. We understand that the Business Transformation Office has been recently eliminated due to a lack of funding. In addition to the Business Transformation Office, we have recommended that Congress continue to monitor whether it needs to provide additional leadership authorities to the DHS Under Secretary for Management or create a Chief Operating Officer/Chief Management Officer (COO/CMO) position that could help elevate, integrate, and institutionalize DHS’s management initiatives. Legislation was introduced in this session and passed by the Senate to create a Deputy Secretary of Homeland Security for Management, a CMO position. 
On April 24, 2007, I sponsored a forum on implementing COO/CMO positions in select federal departments and agencies, as part of a broader study examining issues associated with implementing these positions in response to a bipartisan request from this subcommittee. Forum participants included former and current government executives, and officials from private businesses and nonprofit organizations. The forum discussion focused on criteria for determining the type of COO/CMO position that should be established in selected entities and how to implement the position, including qualifications, appointment processes, roles and responsibilities, and reporting relationships. In addition to the forum, we have also learned about the experiences of organizations that have positions similar to a COO/CMO through several case study reviews. We expect to issue our full report to the subcommittee in early September 2007. Finally, DHS cannot successfully achieve its homeland security mission without working with other entities that share responsibility for securing the homeland. Partnering for progress with other governmental agencies and private sector entities is central to achieving its missions. Since 2005, DHS has continued to form necessary partnerships and has undertaken a number of coordination efforts with private sector entities. These include, for example, partnering with (1) airlines to improve aviation passenger and cargo screening, (2) the maritime shipping industry to facilitate containerized cargo inspection, (3) financial institutions to follow the money trail in immigration and customs investigations, and (4) the chemical industry to enhance critical infrastructure protection at such facilities. In addition, FEMA has worked with other federal, state, and local entities to improve planning for disaster response and recovery. 
However, partnering challenges continue as DHS seeks to form more effective partnerships to leverage resources and more effectively carry out its homeland security responsibilities. For example, because DHS has only limited authority to address security at chemical facilities, it must continue to work with the chemical industry to ensure that it is assessing vulnerabilities and implementing security measures. Also, while the Transportation Security Administration (TSA) has taken steps to collaborate with federal and private sector stakeholders in the implementation of its Secure Flight program, these stakeholders stated that TSA has not provided them with the information they would need to support TSA’s efforts as they move forward with the program. DHS has made modest progress in addressing financial management and internal control weaknesses and continues to face significant challenges in these areas. For example, since its creation, DHS has been unable to obtain an unqualified or “clean” audit opinion on its financial statements. The independent auditor’s report cited 10 material weaknesses—i.e., significant deficiencies in DHS’s internal controls—showing no decrease from fiscal year 2005. These weaknesses included financial management oversight, financial reporting, financial systems security, and budgetary accounting. Furthermore, the report found two other reportable conditions and instances of non-compliance with eight laws and regulations, including the Federal Managers’ Financial Integrity Act of 1982, the Federal Financial Management Improvement Act of 1996, and the Federal Information Security Management Act of 2002. While there continue to be material weaknesses in its financial management systems, DHS has made some progress in this area. For example, the independent auditor’s fiscal year 2006 report noted that DHS had made improvements at the component level to strengthen financial reporting during fiscal year 2006, although many challenges remained.
Also, DHS and its components have reported developing corrective action plans to address the specific material internal control weaknesses identified. In addition to the independent audits, we have done work to assess DHS’s financial management and internal controls. For example, in 2004, we reviewed DHS’s progress in addressing financial management weaknesses and integrating its financial systems. Specifically, we identified weaknesses in the financial management systems DHS inherited from the 22 component agencies, assessed DHS’s progress in addressing these weaknesses, identified DHS’s plans to integrate its financial management systems, and reviewed whether the planned systems DHS was developing would meet the requirements of relevant financial management improvement legislation. On the basis of our work, we recommended that DHS (1) give sustained attention to addressing previously reported material weaknesses, reportable conditions, and observations and recommendations; (2) complete development of corrective action plans for all material weaknesses, reportable conditions, and observations and recommendations; (3) ensure that internal control weaknesses are addressed at the component level if they were combined or reclassified at the departmentwide level; and (4) maintain a tracking system of all auditor-identified and management-identified control weaknesses. These recommendations are still relevant today. A departmentwide information technology (IT) governance framework—including controls (disciplines) aimed at effectively managing IT-related people, processes, and tools—is vital to DHS’s transformation efforts.
These controls and disciplines include:

- having and using an enterprise architecture, or corporate blueprint, as an authoritative frame of reference to guide and constrain IT investments;
- defining and following a corporate process for informed decision making by senior leadership about competing IT investment options;
- applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems;
- establishing a comprehensive information security program to protect its information and systems;
- having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; and
- centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer.

DHS has made progress in each of these areas, but additional work is needed to further enhance its IT governance framework and implement our related recommendations. For example, the June 2006 version of DHS’s enterprise architecture, while an improvement over prior versions, still lacks important architecture content and limits DHS’s ability to guide and constrain IT investments, among other things. With respect to IT investment management, DHS has established management structures but has not, for example, fully implemented key practices needed to effectively oversee and control department investments—putting the department at increased risk of its programs not delivering promised mission capabilities and benefits. DHS stated it is working on improving its investment management process.
DHS has taken other measures to enhance IT governance as well, such as completing a comprehensive inventory of its major information systems (though a comprehensive information security program is still needed), organizing IT leadership roles and responsibilities under the CIO, and initiating strategic planning for IT human capital (an area where we have ongoing work to assess related strategic planning efforts and progress made). In addition to efforts undertaken in these areas, our reviews of key nonfinancial systems show that DHS has not consistently employed a range of system acquisition management disciplines, such as reliable cost-estimating practices and meaningful performance measurements. We have made a number of recommendations in this and other areas, including work related to deploying and operating IT systems and infrastructure in support of DHS’s core mission and operations. Implementation of many of our recommendations has been slow. Until DHS fully establishes and consistently implements the full range of IT management disciplines embodied in its framework and in related federal guidance and best practices, it will be challenged in its ability to effectively manage and deliver programs. DHS has made some progress in transforming its human capital systems, but more work remains. Some of the most pressing human capital challenges at DHS include (1) successfully completing its ongoing transformation; (2) forging a unified results-oriented culture across the department (line of sight); (3) linking daily operations to strategic outcomes; (4) rewarding individuals based on individual, team, unit, and organizational results; (5) obtaining, developing, providing incentives to, and retaining needed talent; and, most importantly, (6) ensuring leadership at the top, to include a chief operating officer or chief management officer.
Moreover, employee morale is low, as measured by results of the 2006 Federal Human Capital Survey, and low morale can impede the progress of DHS’s transformation and integration. DHS scored at the bottom or near the bottom of all federal agencies in the four areas that provide the standards of success for agencies to measure their progress and achievements in managing their workforces. These four areas are (1) leadership and knowledge management, (2) results-oriented performance culture, (3) talent management, and (4) job satisfaction. As we have reported, people are at the center of any serious change management initiative, and addressing the “people” element and employee morale issues is the key to a successful merger and transformation. Strategic human capital management is the centerpiece of any transformation effort. In 2005, we reported that DHS had initiated strategic human capital planning efforts and published proposed regulations for a modern human capital management system. We also reported that DHS’s leadership was committed to the human capital system design process and had formed teams to implement the resulting regulations. Since our report, DHS has finalized its human capital regulations, and it is vital that DHS implement its human capital system effectively. In April 2007, DHS issued its fiscal year 2007 and 2008 Human Capital Operational Plan, which identifies five department priorities: hiring and retaining a talented and diverse workforce, creating a DHS-wide culture of performance, creating high-quality learning and development programs for DHS employees, implementing a DHS-wide integrated leadership system, and being a model of human capital service excellence. DHS officials explained that the Human Capital Operational Plan encompasses the initiatives of the previous human capital management system, MAXHR, but also outlines a more comprehensive human resources program.
We have not yet reviewed DHS’s new Human Capital Operational Plan to see whether it addresses our prior recommendations, but we expect to examine this plan. Further, since our 2005 update, DHS has taken some actions to integrate the legacy agency workforces that make up its components. For example, it standardized pay grades for criminal investigators at ICE and developed promotion criteria for investigators and CBP officers that equally recognize the value of the experience brought to ICE and CBP by employees of each legacy agency. DHS also made progress in establishing human capital capabilities for the US-VISIT program, which should help ensure that it has sufficient staff with the necessary skills and abilities to implement the program effectively. CBP also developed training plans that link its officer training to CBP strategic goals. Despite these efforts, however, DHS must still (1) create a clearer crosswalk between departmental training goals and objectives and DHS’s broader organizational and human capital goals, and (2) develop appropriate training performance measures and targets for goals and strategies identified in its departmentwide strategic training plan. We have also made recommendations to specific program offices and organizational entities to help ensure that human capital resources are provided to improve the effectiveness of management capabilities, and that human capital plans are developed that clearly describe how these components will recruit, train, and retain staff to meet their growing demands as they expand and implement new program elements. We are completing a review of selected human capital issues and plan to report on our results soon. This report will discuss: attrition rates at DHS; senior-level vacancies at DHS; DHS’s use of human capital flexibilities, including the Intergovernmental Personnel Act, and personal services contracts; and DHS’s compliance with the Federal Vacancies Reform Act of 1998.
DHS has made some progress but continues to face challenges in creating an effective, integrated acquisition organization. Since its inception in March 2003, DHS made early progress in implementing a strategic sourcing program to increase the effectiveness of its buying power and in creating a small business program. These programs have promoted an environment in which there is a collaborative effort toward the common goal of an efficient, unified organization. Strategic sourcing allows DHS components to formulate purchasing strategies to leverage buying power and increase savings for a variety of products like office supplies, boats, energy, and weapons, while its small business program works to ensure small businesses can compete effectively for the agency’s contract dollars. However, DHS’s progress toward creating a unified acquisition organization has been hampered by policy decisions. In March 2005, we reported that an October 2004 management directive, Acquisition Line of Business Integration and Management, while emphasizing the need for a unified, integrated acquisition organization, relies on a system of dual accountability between the chief procurement officer and the heads of the department’s components to make this happen. This situation has created ambiguity about who is accountable for acquisition decisions. We also found that the various acquisition organizations within DHS are still operating in a disparate manner, with oversight of acquisition activities left primarily up to each individual component. Specifically, we reported that (1) there were components exempted from the unified acquisition organization, (2) the chief procurement officer had insufficient staff for departmentwide oversight, and (3) staffing shortages led the Office of Procurement Operations to rely extensively on outside agencies for contracting support.
In December 2005, DHS established an acquisition oversight program to provide comprehensive insight into each component’s acquisition programs. This oversight program involves a series of reviews that are currently being implemented. However, accountability concerns remain. In March 2005, we recommended that, among other things, the Secretary of Homeland Security provide the Office of the Chief Procurement Officer with sufficient resources and enforcement authority to enable effective departmentwide oversight of acquisition policies and procedures, and revise the October 2004 management directive to eliminate the reference to the Coast Guard and Secret Service as being exempt from complying with the directive. In September 2006, DHS reported on planned increases in staffing for the Office of the Chief Procurement Officer, but we expressed concern that the authority of the Chief Procurement Officer had not been addressed. Unless DHS addresses these challenges, it is at risk of continuing to exist as a fragmented acquisition organization. Because some of DHS’s components have major, complex acquisition programs—for example, the Coast Guard’s Deepwater program (designed to replace or upgrade its cutters and aircraft) and CBP’s Secure Border Initiative—DHS needs to improve the oversight of contractors and should adhere to a rigorous management review process. DHS continues to face challenges, many of which were inherited from its component legacy agencies, in carrying out its programmatic activities. These challenges include enhancing transportation security, strengthening the management of U.S. Coast Guard acquisitions and meeting the Coast Guard’s new homeland security missions, improving the regulation of commercial trade while ensuring protection against the entry of illegal goods and dangerous visitors at U.S. borders and ports of entry, and improving enforcement of immigration laws, including worksite immigration laws, and the provision of immigration services.
DHS must also effectively coordinate the mitigation of and response to all hazards, including natural disaster planning, response, and recovery. DHS has taken actions to address these challenges, for example, by strengthening passenger and baggage screening, increasing the oversight of Coast Guard acquisitions, more thoroughly screening visitors and cargo, dedicating more resources to immigration enforcement, becoming more efficient in the delivery of immigration services, and conducting better planning for disaster preparation. However, challenges remain in each of these major mission areas. Despite progress, DHS continues to face challenges in effectively executing transportation security efforts. We have recommended that the Transportation Security Administration (TSA) more fully integrate a risk management approach—including assessments of threat, vulnerability, and criticality—in prioritizing security efforts within and across all transportation modes; strengthen stakeholder coordination; and implement needed technological upgrades to secure commercial airports. DHS has made progress in all of these areas, particularly in aviation, but must expand its security focus more toward surface modes of transportation and continue to seek best practices and coordinated security efforts with the international community. DHS and TSA have taken numerous actions to strengthen commercial aviation security, including strengthening passenger and baggage screening, improving aspects of air cargo security, and strengthening the security of international flights and passengers bound for the United States. For example, TSA increased efforts to measure the effectiveness of airport screening systems through covert testing and other means and has worked to enhance passenger and baggage screener training. TSA also improved its processes for identifying and responding to threats onboard commercial aircraft and has modified airport screening procedures based on risk.
Despite this progress, however, TSA continues to face challenges in implementing a program to match domestic airline passenger information against terrorist watch lists, fielding needed technologies to screen airline passengers for explosives, and strengthening aspects of passenger rail security. In addition, TSA has not developed a strategy, as required, for securing the various modes of transportation. As a result, rail and other surface transportation stakeholders are unclear regarding what TSA’s role will ultimately be in establishing and enforcing security requirements within their transportation modes. We have recommended that TSA more fully integrate risk-based decision making within aviation and across all transportation modes, strengthen passenger prescreening, and enhance rail security efforts. We have also recommended that TSA work to develop sustained and effective partnerships with other government agencies, the private sector, and international partners to coordinate security efforts and seek potential best practices, among other efforts. While DHS has made significant strides in strengthening aviation security, it still is in the early stages of developing a comprehensive approach to ensuring inbound air cargo security. The Coast Guard needs to improve the management of its acquisitions and continue to enhance its security mission while meeting other mission responsibilities. In 2004, we recommended that the Coast Guard improve its management of the Deepwater program by strengthening key management and oversight activities, implementing procedures to better ensure contractor accountability, and controlling future costs by promoting competition. In April 2006, we reported the Coast Guard had made some progress in addressing these recommendations. For example, the Coast Guard has addressed our recommendation to ensure better contractor accountability by providing for better input from U.S. Coast Guard performance monitors. 
However, even with these improvements, acquisition and contract management issues that we reported on previously continue to be challenges to the Coast Guard. For example, within the Deepwater program, an updated class of patrol boats has been removed from service and its replacement, a new cutter class, has been delayed due to design concerns. While the Coast Guard recently announced that it will be taking a more active role in Deepwater acquisitions and noted that many of the issues that led to these acquisition problems are being addressed, it is too soon to tell how effective these changes will be. Further, the Coast Guard has acquisition challenges other than just the Deepwater program. For example, the Coast Guard's timeline for achieving full operating capability for its search and rescue communications system, Rescue 21, was delayed from 2006 to 2011, and the estimated total acquisition cost increased. The Coast Guard has made progress in balancing its homeland security and traditional missions. The Coast Guard is unlike many other DHS components because it has substantial missions not related to homeland security. These missions include maritime navigation, icebreaking, protecting the marine environment, marine safety, and search and rescue for mariners in distress. Furthermore, unpredictable natural disasters, such as Hurricane Katrina, can place intense demands on all Coast Guard resources. The Coast Guard must continue executing these traditional missions and balance those responsibilities with its homeland security obligations, which have increased significantly since September 11. DHS has made some progress but still faces an array of challenges in securing the border while improving the regulation of commercial trade. 
Since 2005, DHS agencies have made some progress in implementing our recommendations to refine the screening of foreign visitors to the United States, target potentially dangerous cargo, and provide the personnel necessary to effectively fulfill border security and trade agency missions. As of January 2006, DHS had a pre-entry screening capability in place in overseas visa issuance offices, and an entry identification capability at 115 airports, 14 seaports, and 154 land ports of entry. Furthermore, the Secretary of Homeland Security has made risk management at ports and all critical infrastructure facilities a key priority for DHS. In addition, DHS developed performance goals and measures for its trade processing system and implemented a testing and certification process for its officers to provide better assurance of effective cargo examination targeting practices. However, efforts to assess and mitigate risks of DHS’s and the Department of State’s implementation of the Visa Waiver Program remain incomplete, increasing the risk that the program could be exploited by someone who intends harm to the United States. Further, many of DHS’s border-related performance goals and measures are not fully defined or adequately aligned with one another, and some performance targets are not realistic. CBP is not systematically incorporating inspection results into its cargo screening system because it has not yet fully implemented a system that will report details on its security inspections nationwide to allow management to analyze those inspections. Other trade and visitor screening systems have weaknesses that must be overcome to better ensure border and trade security. For example, deficiencies in the identification of counterfeit documentation at land border crossings into the United States create vulnerabilities that terrorists or others involved in criminal activity could exploit. 
We also reported that DHS’s Container Security Initiative to target and inspect high-risk cargo containers at foreign ports before they leave for the United States has been challenged by staffing imbalances, the lack of minimum technical requirements for inspection equipment used at foreign ports, and insufficient performance measures to assess the effectiveness of targeting and inspection activities. We are currently reviewing this program to ascertain what progress CBP has made in addressing these challenges. DHS has taken some actions to improve enforcement of immigration laws, including worksite immigration laws, but the resources devoted to enforcing immigration laws are limited given that an estimated 12 million illegal aliens reside in the United States. DHS has strengthened some aspects of immigration enforcement, including allocating more investigative work years to immigration functions than the Immigration and Naturalization Service did prior to the creation of DHS. Nevertheless, effective enforcement will require more attention to efficient resource use and updating outmoded management systems. In April 2006, ICE announced an interior enforcement strategy to bring criminal charges against employers who knowingly hire unauthorized workers. ICE has also reported increases in the number of criminal arrests and indictments for these violations since fiscal year 2004. In addition, ICE has plans to shift responsibility for identifying incarcerated criminal aliens eligible for removal from the United States from the Office of Investigations to its Office of Detention and Removal, freeing those investigative resources for other immigration and customs investigations. ICE has also begun to introduce principles of risk management into the allocation of its investigative resources. However, enforcement of immigration laws needs to be strengthened and significant management challenges remain.
DHS’s ability to locate and remove millions of aliens who entered the country illegally or overstayed the terms of their visas is questionable, and implementing an effective worksite enforcement program remains an elusive goal. ICE’s Office of Investigations has not conducted a comprehensive risk assessment of the customs and immigration systems to determine the greatest risks for exploitation by criminals and terrorists. This office also lacks outcome-based performance goals that relate to its objective of preventing the exploitation of systemic vulnerabilities in customs and immigration systems, and it does not have sufficient systems in place to help ensure systematic monitoring and communication of vulnerabilities discovered during its investigations. Moreover, the current employment verification process used to identify workers ineligible for employment in the United States has not fundamentally changed since its establishment in 1986, and ongoing weaknesses have undermined its effectiveness. We have recommended that DHS take actions to help address these weaknesses and to strengthen the current process by issuing final regulations on changes to the employment verification process which will reduce the number of documents suitable for proving eligibility to work in the United States. Some other countries require foreign workers to present work authorization documents at the time of hire and require employers to review these documents and report workers’ information to government agencies for collecting taxes and social insurance contributions, and conducting worksite enforcement actions. Although DHS has made progress in reducing its backlog of immigration benefit applications, improvements are still needed in the provision of immigration services, particularly by strengthening internal controls to prevent fraud and inaccuracy. Since 2005, DHS has enhanced the efficiency of certain immigration services. For example, U.S. 
Citizenship and Immigration Services (USCIS) estimated that it had reduced its backlog of immigration benefits applications from a peak of 3.8 million cases to 1.2 million cases from January 2004 to June 2005. USCIS has also established a focal point for immigration fraud, outlined a fraud control strategy that relies on the use of automation to detect fraud, and is performing fraud assessments to identify the extent and nature of fraud for certain benefits. However, DHS still faces significant challenges in its ability to effectively provide immigration services while at the same time protecting the immigration system from fraud and mismanagement. USCIS may have adjudicated tens of thousands of naturalization applications without alien files, and adjudicators were not required to record whether the alien file was available when they adjudicated an application. Without these files, DHS may not be able to take enforcement action against an applicant and could also approve an application for an ineligible applicant. In response to our report, USCIS recently instituted a policy that requires adjudicators to record whether the alien file was available at the time of adjudication. In addition, USCIS has not implemented important aspects of our internal control standards or fraud control best practices identified by leading audit organizations. Such best practices include (1) a comprehensive risk management approach, (2) mechanisms for ongoing monitoring during the course of normal activities, (3) clear communication agencywide regarding how to balance production-related goals with fraud-prevention activities, and (4) performance goals for fraud prevention. We have reported that DHS needs to more effectively coordinate disaster preparedness, response, and recovery efforts.
Between FEMA’s incorporation into DHS in March 2003 and Hurricane Katrina’s landfall in late August 2005, FEMA’s responsibilities were dispersed and its role within DHS continued to evolve. Hurricane Katrina severely tested disaster management at the federal, state, and local levels and revealed weaknesses in the basic elements of preparing for, responding to, and recovering from any catastrophic disaster. Our analysis showed the need for (1) clearly defined and understood leadership roles and responsibilities; (2) the development of the necessary disaster capabilities; and (3) accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. In September 2006, we recommended that Congress give federal agencies explicit authority to take actions to prepare for all types of catastrophic disasters when there is warning. We also recommended that DHS (1) rigorously re-test, train, and exercise its recent clarification of the roles, responsibilities, and lines of authority for all levels of leadership, implementing changes needed to remedy identified coordination problems; (2) direct that the National Response Plan (NRP) base plan and its supporting Catastrophic Incident Annex be supported by more robust and detailed operational implementation plans; (3) provide guidance and direction for federal, state, and local planning, training, and exercises to ensure such activities fully support preparedness, response, and recovery responsibilities on a jurisdictional and regional basis; (4) take the lead in monitoring federal agencies’ efforts to prepare to meet their responsibilities under the NRP and the interim National Preparedness Goal; and (5) use a risk management approach in deciding whether and how to invest finite resources in specific capabilities for a catastrophic disaster.
DHS has made revisions to the NRP and released its Supplement to the Catastrophic Incident Annex—both designed to further clarify federal roles and responsibilities and relationships among federal, state, and local governments and responders. However, these revisions have not been rigorously tested. DHS is working on additional revisions to the NRP and the National Incident Management System and recently informed Congress that the revisions to the NRP may not be complete by the scheduled June 1, 2007, target date. Thus, it is unlikely that any changes will be clearly communicated, understood, and effectively tested prior to the 2007 hurricane season, which begins in June. DHS has also announced a number of actions intended to improve readiness and response based on our work and the work of congressional committees and the Administration. For example, DHS is currently reorganizing FEMA as required by the fiscal year 2007 DHS appropriations act. One major objective of this reorganization is to integrate responsibility and accountability for disaster preparedness and response within DHS by placing the responsibility for both within FEMA. DHS has also announced a number of other actions to improve readiness and response, such as mass care and shelter, in which FEMA, rather than the Red Cross, will now have the lead. However, there is little information available on the extent to which these changes have been tested and are operational. Finally, in its desire to provide assistance quickly following Hurricane Katrina, DHS was unable to keep up with the magnitude of the need to confirm the eligibility of victims for disaster assistance, or to ensure that there were provisions in contracts for response and recovery services to ensure fair and reasonable prices in all cases. We recommended that DHS create accountability systems that effectively balance the need for fast and flexible response against the need to prevent waste, fraud, and abuse. 
We also recommended that DHS provide guidance on advance procurement practices (pre-contracting) and procedures for those federal agencies with roles and responsibilities under the NRP so that these agencies can better manage disaster-related procurement, and establish an assessment process to monitor agencies’ continuous planning efforts for their disaster-related procurement needs and the maintenance of capabilities. For example, we identified a number of emergency response practices in the public and private sectors that provide insight into how the federal government can better manage its disaster-related procurements. These include developing knowledge of contractor capabilities and prices and establishing vendor relationships before a disaster occurs, as well as establishing a scalable operations plan to adjust the level of capacity to match the response with the need. FEMA had taken some action on these recommendations by entering into advance contracts for various goods, supplies, and services, such as debris removal. However, DHS has not implemented our recommendation to develop guidance on advance procurement practices and procedures for those federal agencies and other partners, such as the Red Cross, with roles and responsibilities under the NRP. To be removed from our high-risk list, agencies need to develop a corrective action plan that defines the root causes of identified problems, identifies effective solutions to those problems, and provides for substantially completing corrective measures in the near term. Such a plan should include performance measures, metrics, and milestones to track progress. Agencies should also demonstrate significant progress in addressing the problems identified in their corrective action plans. This should include a program to monitor and independently validate progress. Finally, agencies, and in particular top leadership, must demonstrate a commitment to sustain initial improvements. 
This would include a strong commitment to address the risk(s) that put the program or function on the high-risk list and provide for the allocation of sufficient people and resources (capacity) to resolve the risk(s) and ensure that improvements are sustainable over the long term. In the spring of 2006, DHS provided us a draft corrective action plan for addressing its transformation challenges. This plan addressed major management areas we had previously identified as key to DHS’s transformation—management integration through the DHS management directorate and financial, information, acquisition, and human capital management. The plan identified an overall goal to develop and implement key departmentwide processes and systems to support DHS’s transformation into a department capable of planning, operating, and managing as one effective department. The plan sought to produce significant improvements over the next 7 years that would further DHS’s ability to operate as one department. Although the plan listed accomplishments and general goals for the management functions, it did not contain (1) objectives linked to those goals that are clear, concise, and measurable; (2) specific actions to implement those objectives; (3) information linking sufficient people and resources to implement the plan; or (4) an evaluation program to monitor and independently validate progress toward meeting the goals and measuring the effectiveness of the plan. As of May 2007, DHS had not submitted a corrective action plan to OMB. According to an OMB official, this is one of the few high-risk areas that has not produced a final corrective action plan. In addition to developing an effective corrective action plan, agencies must show significant progress in improving performance in the areas identified in their corrective action plans. 
While our work has noted progress at DHS, for us to remove DHS implementation and transformation from our high-risk list, we need to be able to independently assure ourselves and Congress that DHS has implemented many of our past recommendations, or has taken other corrective actions to address the challenges we identified. However, DHS has not made its management or operational decisions transparent enough for Congress to be sure it is economically, efficiently, effectively, ethically, and equitably using the billions of dollars in funding it receives annually and is providing the levels of security called for in numerous legislative requirements and presidential directives. Our work for Congress assessing DHS’s operations has been significantly hampered by long delays in granting us access to program documents and officials, or by questioning our access to information needed to conduct our reviews. We have processes for obtaining information from departments and agencies across the federal government that work well. DHS’s process—involving multiple layers of review by department- and component-level liaisons and attorneys regarding whether to provide us the requested information—does not work as smoothly. DHS’s processes have impeded our efforts to carry out our mission by delaying access to documents that we require to assess the department’s operations. We have occasionally worked with DHS management to establish a cooperative process—for example, reviewing sensitive documents at a particular agency location—in an effort not only to maintain a productive working relationship with the department but also to meet the needs of our congressional requesters in a timely manner. I have spoken to Secretary Chertoff, who pledged to make access a higher priority, and have met with Undersecretary Schneider, who also assured us of his cooperation. We are encouraged by these statements and look forward to better relations with the department. 
We recognize that the department has legitimate interests in protecting certain types of sensitive information from public disclosure. We share that interest as well and follow strict security guidelines in handling such information. We similarly recognize that agency officials will need to make judgments with respect to the manner and the processes they use in response to our information requests. However, to date, because of the processes adopted to make these judgments, GAO has often not been able to do its work in a timely manner. We have been able to eventually obtain information and to answer audit questions, but the delays we have experienced at DHS have impeded our ability to conduct audit work efficiently and to provide timely information to congressional clients. Finally, to be removed from our high-risk list, any progress that occurs must be sustainable over the long term. DHS’s leaders need to make and demonstrate a commitment to implementing a transformed organization. The Secretary has stated such a commitment, most prominently as part of his “second stage review” in the summer of 2005, and more recently in remarks made at George Washington University’s Homeland Security Policy Institute. However, appropriate follow-up is required to assure that transformation plans are effectively implemented and sustained, to include the allocation of adequate resources to support transformation efforts. In this regard, we were pleased when DHS established a Business Transformation Office, but we believe that the office’s effectiveness was limited because the department did not give it the authority and responsibility needed to be successful. We understand that this office has recently been eliminated. Further, department leaders can show their commitment to transforming DHS by acting on recommendations made by the Congress, study groups, and accountability organizations such as its Office of the IG and GAO. 
Although we have also seen some progress in this area, it is not enough for us to conclude that DHS is committed to and capable of quickly incorporating corrective actions into its operations. Therefore, until DHS produces an acceptable corrective action plan, demonstrates progress reforming its key management functions, and dedicates the resources necessary to sustain this progress, it will likely remain on our high-risk list. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time. For information about this testimony, please contact Norman Rabkin, Managing Director, Homeland Security and Justice Issues, at (202) 512-8777, or [email protected] or Bernice Steinhardt, Director, Strategic Issues at 202-512-6806 or [email protected]. Other individuals making key contributions to this testimony include Christopher Conrad, Anthony DeFrank, and Sarah Veale. High-Risk Series: An Update, GAO-07-310 (Washington, D.C.: Jan. 31, 2007). Suggested Areas for Oversight for the 110th Congress, GAO-07-235R (Washington, D.C.: Nov. 17, 2006). Homeland Security: DHS Is Addressing Security at Chemical Facilities, but Additional Authority Is Needed, GAO-06-899T (Washington, D.C.: June 21, 2006). Homeland Security: Guidance and Standards Are Needed for Measuring the Effectiveness of Agencies’ Facility Protection Efforts, GAO-06-612 (Washington, D.C.: May 31, 2006). Homeland Security: DHS Needs to Improve Ethics-Related Management Controls for the Science and Technology Directorate, GAO-06-206 (Washington, D.C.: Dec. 22, 2005). Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities, GAO-05-434 (Washington, D.C.: May 26, 2005). Homeland Security: Overview of Department of Homeland Security Management Challenges, GAO-05-573T (Washington, D.C.: April 20, 2005). 
Results-Oriented Government: Improvements to DHS’s Planning Process Would Enhance Usefulness and Accountability, GAO-05-300 (Washington, D.C.: March 31, 2005). Department of Homeland Security: A Comprehensive and Sustained Approach Needed to Achieve Management Integration, GAO-05-139 (Washington, D.C.: March 16, 2005). Homeland Security: Further Actions Needed to Coordinate Federal Agencies’ Facility Protection Efforts and Promote Key Practices, GAO-05-49 (Washington, D.C.: Nov. 30, 2004). Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies, GAO-03-293SP (Washington, D.C.: Nov. 14, 2002). Determining Performance and Accountability Challenges and High Risks, GAO-01-159SP (Washington, D.C.: Aug. 2000). Financial Management Systems: DHS Has an Opportunity to Incorporate Best Practices in Modernization Efforts, GAO-06-553T (Washington, D.C.: March 29, 2006). Financial Management: Department of Homeland Security Faces Significant Financial Management Challenges, GAO-04-774 (Washington, D.C.: July 19, 2004). Homeland Security: DHS Enterprise Architecture Continues to Evolve but Improvements Needed, GAO-07-564 (Washington, D.C.: May 9, 2007). Information Technology: DHS Needs to Fully Define and Implement Policies and Procedures for Effectively Managing Investments, GAO-07-424 (Washington, D.C.: April 27, 2007). Information Technology: Customs Has Made Progress on Automated Commercial Environment System, but It Faces Long-Standing Management Challenges and New Risks, GAO-06-580 (Washington, D.C.: May 31, 2006). Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information, GAO-06-383 (Washington, D.C.: April 17, 2006). Homeland Security: Progress Continues, but Challenges Remain on Department’s Management of Information Technology, GAO-06-598T (Washington, D.C.: March 29, 2006). 
Information Technology: Management Improvements Needed on Immigration and Customs Enforcement’s Infrastructure Modernization Program, GAO-05-805 (Washington, D.C.: Sept. 7, 2005). Information Security: Department of Homeland Security Needs to Fully Implement Its Security Program, GAO-05-700 (Washington, D.C.: June 17, 2005). Information Technology: Federal Agencies Face Challenges in Implementing Initiatives to Improve Public Health Infrastructure, GAO-05-308 (Washington, D.C.: June 10, 2005). Information Technology: Customs Automated Commercial Environment Program Progressing, but Need for Management Improvements Continues, GAO-05-267 (Washington, D.C.: March 14, 2005). Department of Homeland Security: Formidable Information and Technology Management Challenge Requires Institutional Approach, GAO-04-702 (Washington, D.C.: Aug. 27, 2004). Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations, GAO-07-139 (Washington, D.C.: Jan. 19, 2007). Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program, GAO-06-854 (Washington, D.C.: July 28, 2006). Information on Immigration Enforcement and Supervisory Promotions in the Department of Homeland Security’s Immigration and Customs Enforcement and Customs and Border Protection, GAO-06-751R (Washington, D.C.: June 13, 2006). Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed, GAO-06-318T (Washington, D.C.: Jan. 25, 2006). Department of Homeland Security: Strategic Management of Training Important for Successful Transformation, GAO-05-888 (Washington, D.C.: Sept. 23, 2005). Interagency Contracting: Improved Guidance, Planning, and Oversight Would Enable the Department of Homeland Security to Address Risks, GAO-06-996 (Washington, D.C.: Sept. 27, 2006). 
Homeland Security: Challenges in Creating an Effective Acquisition Organization, GAO-06-1012T (Washington, D.C.: July 27, 2006). Homeland Security: Successes and Challenges in DHS’s Efforts to Create an Effective Acquisition Organization, GAO-05-179 (Washington, D.C.: March 29, 2005). Homeland Security: Further Action Needed to Promote Successful Use of Special DHS Acquisition Authority, GAO-05-136 (Washington, D.C.: Dec. 15, 2004). Aviation Security: Federal Efforts to Secure U.S.-Bound Air Cargo Are in the Early Stages and Could Be Strengthened, GAO-07-660 (Washington, D.C.: April 30, 2007). Aviation Security: TSA's Change to Its Prohibited Items List Has Not Resulted in Any Reported Public Safety Incidents, but the Impact of the Change on Screening Operations Is Inconclusive, GAO-07-623R (Washington, D.C.: April 25, 2007). Aviation Security: Risk, Experience, and Customer Service Drive Changes to Airline Passenger Screening Procedures, but Evaluation and Documentation of Proposed Changes Could Be Improved, GAO-07-634 (Washington, D.C.: April 16, 2007). Aviation Security: TSA’s Staffing Allocation Model Is Useful for Allocating Staff among Airports, but Its Assumptions Should Be Systematically Reassessed, GAO-07-299 (Washington, D.C.: Feb. 28, 2007). Aviation Security: Progress Made in Systematic Planning to Guide Key Investment Decisions, but More Work Remains, GAO-07-448T (Washington, D.C.: Feb. 13, 2007). Transportation Security Administration: Oversight of Explosive Detection Systems Maintenance Contracts Can Be Strengthened, GAO-06-795 (Washington, D.C.: July 31, 2006). Aviation Security: TSA Oversight of Checked Baggage Screening Procedures Could Be Strengthened, GAO-06-869 (Washington, D.C.: July 28, 2006). Rail Transit: Additional Federal Leadership Would Enhance FTA’s State Safety Oversight Program, GAO-06-821 (Washington, D.C.: July 26, 2006). 
Aviation Security: Management Challenges Remain for the Transportation Security Administration’s Secure Flight Program, GAO-06-864T (Washington, D.C.: June 14, 2006). Aviation Security: Enhancements Made in Passenger and Checked Baggage Screening, but Challenges Remain, GAO-06-371T (Washington, D.C.: April 4, 2006). Aviation Security: Progress Made to Set Up Program Using Private-Sector Airport Screeners, but More Work Remains, GAO-06-166 (Washington, D.C.: March 31, 2006). Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration’s Secure Flight Program, GAO-06-374T (Washington, D.C.: Feb. 9, 2006). Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls, GAO-06-203 (Washington, D.C.: Nov. 28, 2005). Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security, GAO-06-76 (Washington, D.C.: Oct. 17, 2005). Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts, GAO-05-851 (Washington, D.C.: Sept. 9, 2005). Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed, GAO-05-781 (Washington, D.C.: Sept. 6, 2005). Aviation Safety: Oversight of Foreign Code-Share Safety Program Should Be Strengthened, GAO-05-930 (Washington, D.C.: Aug. 5, 2005). Homeland Security: Agency Resources Address Violations of Restricted Airspace, but Management Improvements Are Needed, GAO-05-928T (Washington, D.C.: July 21, 2005). Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed, GAO-05-356 (Washington, D.C.: March 28, 2005). Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems, GAO-05-365 (Washington, D.C.: March 15, 2005). 
Coast Guard: Observations on the Fiscal Year 2008 Budget, Performance, Reorganization, and Related Challenges, GAO-07-489T (Washington, D.C.: April 18, 2007). Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges, GAO-07-575T (Washington, D.C.: March 8, 2007). Coast Guard: Preliminary Observations on Deepwater Program Assets and Management Challenges, GAO-07-446T (Washington, D.C.: Feb. 15, 2007). Coast Guard: Efforts to Improve Management and Address Operational Challenges in the Deepwater Program, GAO-07-460T (Washington, D.C.: Feb. 14, 2007). Homeland Security: Observations on the Department of Homeland Security's Acquisition Organization and on the Coast Guard's Deepwater Program, GAO-07-453T (Washington, D.C.: Feb. 8, 2007). United States Coast Guard: Improvements Needed in Management and Oversight of Rescue System Acquisition, GAO-06-623 (Washington, D.C.: May 31, 2006). Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted, GAO-06-546 (Washington, D.C.: April 28, 2006). Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure, GAO-06-91 (Washington, D.C.: Dec. 15, 2005). Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges, GAO-05-448T (Washington, D.C.: May 17, 2005). Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security, GAO-05-404 (Washington, D.C.: March 11, 2005). Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain, GAO-05-161 (Washington, D.C.: Jan. 31, 2005). Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight, GAO-04-380 (Washington, D.C.: March 9, 2004). 
Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry, GAO-07-248 (Washington, D.C.: Dec. 6, 2006). Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program, GAO-06-854 (Washington, D.C.: July 28, 2006). Information Technology: Customs Has Made Progress on Automated Commercial Environment System, but It Faces Long-Standing Management Challenges and New Risks, GAO-06-580 (Washington, D.C.: May 31, 2006). Border Security: Key Unresolved Issues Justify Reevaluation of Border Surveillance Technology Program, GAO-06-295 (Washington, D.C.: Feb. 22, 2006). Homeland Security: Recommendations to Improve Management of Key Border Security Program Need to Be Implemented, GAO-06-296 (Washington, D.C.: Feb. 14, 2006). Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing, GAO-05-859 (Washington, D.C.: Sept. 13, 2005). Border Security: Opportunities to Increase Coordination of Air and Marine Assets, GAO-05-543 (Washington, D.C.: Aug. 12, 2005). Border Security: Actions Needed to Strengthen Management of Department of Homeland Security’s Visa Security Program, GAO-05-801 (Washington, D.C.: July 29, 2005). Border Patrol: Available Data on Interior Checkpoints Suggest Differences in Sector Performance, GAO-05-435 (Washington, D.C.: July 22, 2005). Immigration Enforcement: Weaknesses Hinder Employment Verification and Worksite Enforcement Efforts, GAO-06-895T (Washington, D.C.: June 19, 2006). Information on Immigration Enforcement and Supervisory Promotions in the Department of Homeland Security’s Immigration and Customs Enforcement and Customs and Border Protection, GAO-06-751R (Washington, D.C.: June 13, 2006). Homeland Security: Contract Management and Oversight for Visitor and Immigrant Status Program Need to Be Strengthened, GAO-06-404 (Washington, D.C.: June 9, 2006). 
Homeland Security: Better Management Practices Could Enhance DHS’s Ability to Allocate Investigative Resources, GAO-06-462T (Washington, D.C.: March 28, 2006). Immigration Enforcement: Weaknesses Hinder Employment Verification and Worksite Enforcement Efforts, GAO-05-813 (Washington, D.C.: Aug. 31, 2005). Immigration Benefits: Additional Efforts Needed to Help Ensure Alien Files Are Located when Needed, GAO-07-85 (Washington, D.C.: Oct. 27, 2006). Immigration Benefits: Additional Controls and a Sanctions Strategy Could Enhance DHS’s Ability to Control Benefit Fraud, GAO-06-259 (Washington, D.C.: March 10, 2006). Immigration Benefits: Improvements Needed to Address Backlogs and Ensure Quality of Adjudications, GAO-06-20 (Washington, D.C.: Nov. 21, 2005). Immigration Services: Better Contracting Practices Needed at Call Centers, GAO-05-526 (Washington, D.C.: June 30, 2005). Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System, GAO-06-618 (Washington, D.C.: Sept. 6, 2006). Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes, GAO-06-834 (Washington, D.C.: Sept. 6, 2006). Disaster Preparedness: Limitations in Federal Evacuation Assistance for Health Facilities Should be Addressed, GAO-06-826 (Washington, D.C.: July 20, 2006). Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse, GAO-06-655 (Washington, D.C.: June 16, 2006). Hurricane Katrina: Comprehensive Policies and Procedures Are Needed to Ensure Appropriate Use of and Accountability for International Assistance, GAO-06-460 (Washington, D.C.: April 6, 2006). 
Continuity of Operations: Agency Plans Have Improved, but Better Oversight Could Assist Agencies in Preparing for Emergencies, GAO-05-577 (Washington, D.C.: April 28, 2005). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Homeland Security (DHS) plays a key role in leading and coordinating--with stakeholders in the federal, state, local, and private sectors--the nation's homeland security efforts. GAO has conducted numerous reviews of DHS management functions as well as programs including transportation and border security, immigration enforcement and service delivery, and disaster preparation and response. This testimony addresses: (1) why GAO designated DHS's implementation and transformation as a high-risk area, (2) specific management challenges that DHS continues to face, (3) examples of the program challenges that DHS faces, and (4) actions DHS should take to strengthen its implementation and transformation efforts. GAO designated implementing and transforming DHS as high risk in 2003 because DHS had to transform and integrate 22 agencies--several with existing program and management challenges--into one department, and failure to effectively address its challenges could have serious consequences for our homeland security. Despite some progress, this transformation remains high risk. Managing the transformation of an organization of the size and complexity of DHS requires comprehensive planning and integration of key management functions that will likely span a number of years. DHS has made some progress in these areas, but much additional work is required to help ensure sustainable success. 
DHS has also issued guidance and plans to assist management integration on a function-by-function basis, but lacks a comprehensive integration strategy with overall goals, a timeline, appropriate responsibility and accountability determinations, and a dedicated team to support its efforts. The latest independent audit of DHS's financial statements showed that its financial management systems still do not conform to federal requirements. DHS has also not institutionalized an effective strategic framework for information management, and its human capital and acquisition systems require further attention to ensure that DHS allocates resources economically, effectively, ethically, and equitably. Since GAO's 2007 high-risk update, DHS has continued to strengthen program activities but still faces a range of programmatic and partnering challenges. To help ensure its missions are achieved, DHS must overcome continued challenges related to such issues as cargo, transportation, and border security; systematic visitor tracking; efforts to combat the employment of illegal aliens; and outdated Coast Guard asset capabilities. Further, DHS and the Federal Emergency Management Agency need to continue to develop clearly defined leadership roles and responsibilities; necessary disaster response capabilities; accountability systems to provide effective services while protecting against waste, fraud, and abuse; and the ability to conduct advance contracting for emergency response goods, supplies, and services. DHS has not produced a final corrective action plan specifying how it will address its many management challenges. Such a plan should define the root causes of known problems, identify effective solutions, have management support, and provide for substantially completing corrective measures in the near term. It should also include performance metrics and milestones, as well as mechanisms to monitor progress. 
It will also be important for DHS to become more transparent and minimize recurring delays in providing access to information on its programs and operations so that Congress, GAO, and others can independently assess its efforts. DHS may require a chief management official, with sufficient authority, dedicated to the overall transformation process to help ensure sustainable success over time.
For decades Cubans have fled Cuba, often by raft, seeking freedom in the United States. For example, during the first 6 months of 1993, the U.S. Coast Guard picked up about 1,300 rafters and brought them to the United States. This number increased to about 4,700 during the same period in 1994. At that time, Cuba was maintaining its strict policy of forbidding its citizens from illegally exiting the country. In June 1994, violence by both the Cuban authorities and would-be asylum seekers escalated when, for example, Cuban authorities shot and killed a Cuban who was attempting to escape the island. From July 13 through August 8, 1994, at least 37 asylum seekers and 2 Cuban officials were killed in a series of boat hijackings. In addition, a riot erupted in Havana on August 5 when police attempted to disperse a crowd that had gathered when a false rumor circulated that a flotilla of boats was on its way to pick up people seeking to leave. On August 13, Fidel Castro gave a televised speech blaming the United States for the riots and violence and threatened to remove restrictions on Cubans exiting the country if the United States did not take steps to deter boat departures and return those hijackers who had reached the United States. Not receiving the response he anticipated from the United States, Castro indicated he would not prevent Cubans from leaving. Over the next week, Cubans flocked to the beaches, where they constructed makeshift vessels and set out to sea. As the flow of rafters increased, President Clinton announced on August 19, 1994, that the Coast Guard would no longer bring interdicted Cubans to the United States but would hold them at Guantanamo Bay. The President and the Attorney General indicated at that time that those Cubans taken to Guantanamo Bay would have no opportunity for eventual entry into the United States. This announcement reversed a 3-decade policy of welcoming Cubans seeking refuge into the United States. 
Many Cubans did not believe that the United States would actually enforce the new policy and consequently continued to leave Cuba. About 33,000 Cubans were picked up at sea and taken to Guantanamo Bay. Concerned about the continuing exodus, on September 9, 1994, the United States and Cuba signed an accord under which the United States agreed to admit at least 20,000 Cubans per year directly from Cuba through legal channels. The U.S. Interests Section in Havana estimated that this number would comprise approximately 7,000 refugees and family members, 8,000 immigrant visa recipients and their families, and 5,000 paroled through the Special Cuban Migration Program—a special lottery. The Cuban government agreed “to prevent unsafe departures using mainly persuasive methods.” Within days the Cuban police again were patrolling the roads leading to the beaches, under orders to arrest persons carrying rafts or the materials to build them, and Cubans stopped departing by raft. The United States later began granting parole to certain categories of Cubans in the safe haven camps at Guantanamo Bay. On October 14, 1994, President Clinton announced that parole would be granted to those over age 70, unaccompanied minors, or those with serious medical conditions and their caregivers. On December 2, 1994, the Attorney General announced that parole would be considered on a case-by-case basis for children and their immediate families who would be adversely affected by long-term presence in safe havens. These four categories became known as the “four protocols.” On May 2, 1995, the White House Press Secretary announced that Cubans interdicted at sea would no longer be taken to safe haven at Guantanamo Bay but would be returned to Cuba where they could apply for entry into the United States through legal channels at the U.S. Interests Section. 
In discussing this announcement, the Attorney General stated that measures would be taken to ensure that persons who claimed a genuine need for protection, which they believed could not be satisfied by applying at the U.S. Interests Section, would be examined before their return to Cuba. She also announced at that time that remaining Cubans at Guantanamo Bay—about 18,500 as of June 7, 1995—would be considered for parole into the United States, excluding those found to be ineligible for parole due to criminal activity in Cuba, in the United States, or while in safe haven and those with certain serious medical conditions. Within the executive branch, an interagency working group is responsible for developing strategies for implementing the Cuban migration policy. The working group is chaired by the National Security Council and includes representatives from the State Department’s Bureaus for Inter-American Affairs and Population, Refugees, and Migration and the Legal Advisor’s Office; the Department of Justice’s INS and CRS; the Defense Department’s Offices of the Secretary of Defense (Humanitarian and Refugee Affairs) and Joint Chiefs of Staff; and the Coast Guard. The U.S. Interests Section in Havana is responsible for processing the more than 20,000 Cuban applicants for U.S. entry expected annually. As of August 1995, the Interests Section had increased its processing staff to 6 full-time consular officers and about 3 temporary-duty consular officers, 4 INS officers, about 40 local nationals, and 4 U.S. and third-country contract hires. Consular officers at the Interests Section process immigrant visa applications and prescreen parole applicants; the Refugee Coordinator prescreens refugee applicants. INS adjudicates refugee and parole applications in Havana and parole applications at Guantanamo Bay. The Defense Department is responsible for carrying out the safe haven program at Guantanamo Bay. 
The Office of the Secretary of Defense and the Joint Chiefs of Staff oversee safe haven operations, and the U.S. Atlantic Command has operational responsibility. Joint Task Force (JTF)-160 executes the safe haven mission at Guantanamo—caring for the inhabitants, providing for their security and protection, and preparing them for travel to the United States. JTF-160 is also charged with the safety and security of U.S. personnel at Guantanamo Bay and the security of the station itself. The U.S. Coast Guard interdicts rafters at sea and, until May 2, 1995, it took them to safe haven at Guantanamo Bay. Since May 2, 1995, most Cubans interdicted at sea have been returned by the Coast Guard to Cuba. Civilian agencies implement various components of the safe haven program. The Department of State’s Bureau for Population, Refugees, and Migration provides assistance to the safe haven population at Guantanamo Bay through a grant to the World Relief Corporation. At Guantanamo Bay, CRS assists in parole processing and provides human resource services, such as family reunification, conciliation and mediation assistance and training, and recreation and education. CRS also provides resettlement assistance to parolees when they arrive in the United States. The State Department also maintains an officer in Guantanamo Bay as a liaison with the military and civilian agencies. Other organizations are also involved in Cuban migration operations at Guantanamo Bay. The World Relief Corporation, a nongovernmental organization, provides public health and social services, vocational training, mail services, and coordination of private donations. The International Organization for Migration (IOM), an intergovernmental organization based in Geneva, Switzerland, arranges resettlement for Cubans wishing to migrate to countries other than the United States. 
Pursuant to an agreement with the Cuban government to allow some voluntary repatriation over land rather than by air to Havana, IOM also arranges voluntary repatriation through the station’s Northeast Gate. IOM was also working with the remaining Haitians in camps at Guantanamo Bay. Considerable military and civilian personnel resources are deployed at Guantanamo Bay to support the safe haven operation. As shown in table 1, more than 5,000 personnel were providing security and services to Cubans in the safe haven camp at the time of our visit in June 1995. INS planned to increase its personnel to at least 18 by the end of summer to expedite parole eligibility determinations. IOM, on the other hand, expects to decrease its presence to six as the remaining Haitians are repatriated or allowed entry into the United States. We estimate that the total cost of the U.S. response to the Cuban exodus from August 1994 through fiscal year 1995 will exceed $497 million (see table 2). This represents incremental costs, which are costs that agencies would not have incurred had there been no Cuban migration crisis. Defense costs include procuring construction materials, food, medical supplies, and miscellaneous items for camps at Guantanamo Bay and in Panama; shipping food and supplies; transporting military personnel to the camps and about 500 to 550 parolees to Homestead Air Force Base, Florida, each week; and moving 8,763 Cubans from Guantanamo Bay to Panama in September 1994 and 7,291 back again in February 1995. Defense does not budget for such migrant operations, and it requested a $370-million supplemental appropriation for fiscal year 1995 to minimize the impact of these activities on military operations. Coast Guard expenses cover the costs of patrolling the waters between Cuba and Florida and bringing people to Guantanamo Bay and Cuba. CRS’ costs primarily cover resettlement assistance to parolees arriving in the United States (about $31.3 million). 
State Department costs include expanding consular processing in Havana and providing a liaison officer at Guantanamo Bay and a grant to the World Relief Corporation to provide services at the safe haven camps. Our review of the processing workload at the Interests Section indicates that it will process 20,000 applicants for U.S. entry and the additional 6,700 applicants on the waiting list by September 8, 1995—the end of the first year under the agreement. As of June 9, 1995, the Interests Section had approved 16,305 for entry into the United States. This number included 7,693 refugees, 40 paroled family members of refugees, 3,601 immigrant visas, 3,073 paroled family members of immigrant visa recipients, and 1,898 parolees selected through a lottery. An additional 4,451 applicants for immigrant visas who were on the noncurrent preference lists had been approved for parole and 1,269 of their immediate relatives had been issued immigrant visas, pursuant to the September 1994 agreement. From 1996 through 1998, the workload will be somewhat reduced because the 20,000-person requirement will be offset each year for 3 years by up to 5,000 as a result of the May 2 announcement that all eligible Guantanamo Bay camp applicants would be paroled into the United States. Resettlement processing continues, as about 500 to 550 Cubans enter the United States from Guantanamo Bay each week. As of June 27, 1995, 14,746 had been paroled into the United States under the four humanitarian protocols, including 1,270 paroled from the temporary Howard Air Force Base safe haven in Panama from October 1994 through February 1995. Another 622 had returned to Cuba through diplomatic channels, 139 had resettled in third countries, and 1,000 had returned to Cuba on their own, either over land or by water. Sixty Cuban rafters had been interdicted and repatriated to Cuba as of that date, pursuant to the May 2 announcement that such individuals would be returned to Cuba. 
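The entry-approval figures above can be cross-checked against the reported total; a minimal Python sketch (all counts are the report's own, as of June 9, 1995):

```python
# Totals the categories of Cubans approved for U.S. entry as of June 9, 1995.
# Every count below is taken directly from the report; the script only sums them.
approvals = {
    "refugees": 7_693,
    "paroled family members of refugees": 40,
    "immigrant visas": 3_601,
    "paroled family members of visa recipients": 3_073,
    "lottery parolees": 1_898,
}

total = sum(approvals.values())
print(f"Total approved for U.S. entry: {total:,}")  # prints 16,305, matching the reported total
```

The 4,451 noncurrent-preference parole approvals and 1,269 related immigrant visas are tracked separately in the report and so are deliberately left out of this total.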
At the time of our visit to Guantanamo Bay, 18,802 Cubans remained in the camps. Of these, 5,856 qualified for parole under the four protocols, and 12,946 were eligible to apply for parole consideration under the May 2 announcement. The INS officer-in-charge noted that JTF-160 had compiled about 4,500 camp incident reports involving camp infractions that INS staff will review for impact on individual parole eligibility. However, INS estimates that only a small number of those involved will be ineligible for parole. While detention in safe haven camps is undoubtedly difficult, our review at the Guantanamo Bay camps indicated that living conditions were adequate. Although we found no internationally accepted criteria for minimal refugee living standards, we noted that the U.S. Atlantic Command had developed standards for safe haven conditions based on inspection guidelines of the United Nations High Commissioner for Refugees (UNHCR) and on requirements in standard military regulations and manuals. The Command developed a camp construction model for migrant operations based on a population of 10,000 that could be adapted for population changes and issued corresponding operational guidelines, including camp organization, services, construction, and logistics. We found that conditions generally met or exceeded Atlantic Command standards and UNHCR inspection guidelines. For example, minimal UNHCR inspection guidelines include 3.5 square meters of living space per migrant. Using this as guidance, the Command recommended using medium-sized tents to house up to 15 Cubans. We found no indication that these tents housed more than 15 persons. Camp conditions have improved since the influx of Cubans in the summer of 1994, due to decreasing population density and a Defense Department “Quality of Life” facilities upgrade. In late August 1994, thousands of people were arriving daily at the Guantanamo Bay camps. 
Together with about 12,000 Haitians, the Cubans brought the camps’ population to about 45,000 in September 1994. At that time, living conditions were marginal, according to Atlantic Command officials, as JTF-160 was erecting tents and installing portable toilets as quickly as people arrived. Crowded conditions began easing as most Haitians were repatriated to Haiti following President Aristide’s return in October 1994, and more than 8,000 Cubans were relocated to safe haven camps in Panama for 6 months. Also, Cubans began leaving via parole following the October and December protocol announcements. The Defense Department had intended to spend almost $35 million to upgrade facilities to accommodate a longer-term camp operation. However, the May 2 announcement that most camp inhabitants would be eligible for parole lessened the urgency to improve conditions. As a result, the Defense Department spent about $25.3 million for its upgrade program. Not all camps were upgraded; some camps were scheduled to be disassembled as populations decreased. Upgrades included elevated hardback tents, plumbing, tension fabric structures as multipurpose buildings, and electricity. (See figs. 1 through 3.) In general, those who are expected to be paroled in late 1995 and early 1996 are located in the newer camps. Those eligible for parole under the first four protocols are scheduled to leave by the end of summer 1995 and, for the most part, are located in the camps that have not been upgraded (see fig. 4). In addition to adequate shelter, camp residents receive breakfast, a hot dinner prepared by Cuban cooks, and Meals Ready to Eat (MRE) for lunch. Cubans with whom we spoke said that the food was better than when they first arrived, when they mostly received rice. They also receive medical treatment at camp clinics and in military medical and surgical units as necessary. Recreational activities include baseball, basketball, pool, ping-pong, movies, music, arts and crafts, and libraries. 
In addition, adults can attend English and vocational classes coordinated by World Relief. Most children have left the camps, but the few remaining receive basic schooling organized by CRS. Many of these services are provided by camp residents with special skills. Security is professional but not overtly oppressive. Camp residents are relatively free to move around within camp areas. When they first arrived, the Cubans were restricted to smaller areas behind razor concertina wire. According to military personnel, tensions have eased since the May 2 announcement that the Cubans would not be detained indefinitely but could apply for parole. Although by September 1995 the Interests Section will likely have processed for U.S. entry the 20,000 Cubans called for in the September 9, 1994, agreement as well as the 6,700 on the noncurrent immigrant visa preference list, it is unlikely that this number will travel to the United States by that date. Of the 7,693 refugees approved for travel, only 1,494 had left as of June 9, 1995. While this partly reflects the normal lag in obtaining sponsorship for approved refugees, the relatively small number who have left also reflects the adverse impact of steep Cuban government-imposed airfare increases and fees for migration-related services. In February 1995, the Cuban government raised the one-way fare from Havana to Miami from $150 to $990. When the rate was increased, the Interests Section refused to pay the higher amount and negotiated rates with commercial airlines for regularly scheduled flights to Miami through Mexico and Costa Rica. The number of such seats was limited, resulting in 5,267 refugees waiting to travel at the time of our visit. The remaining 932 refugees had been adjudicated but had not yet obtained all documents required for travel. Unlike refugees, immigrant visa holders and parolees must arrange and pay for their own transportation to the United States. 
Because these travelers arrange their own transportation, the Interests Section does not track how many have departed from Cuba. Although INS will report in 1997 on numbers of Cubans coming through U.S. ports of entry in 1995, these numbers will reflect country of nationality, not country of departure. The U.S. government repeatedly voiced its concern to the Cuban government about the exorbitant airfare. Cuba agreed to lower the fare; however, it also imposed additional fees in June 1995, including $400 for the medical examination required for all people seeking U.S. entry ($250 for children), $250 for an exit permit and related documents, and $50 for a passport. U.S. officials told us that they believe that some fees for these previously free services may be reasonable, but the fees imposed (even with reduced air charters) will pose serious obstacles for Cubans seeking to emigrate. At the time of our visit to Guantanamo Bay, INS estimated the backlog of those approved for travel from there at about 1,200. Parolees leave Guantanamo Bay on three charter flights each week; depending on the size of the aircraft, 500 to 550 parolees depart weekly. At this rate, the camps should be empty by March 15, 1996. However, the availability of transportation is not the limiting factor in more rapidly reducing the camps’ population. Despite the backlog and the continuing cost to operate the camps, the weekly departure rates are not expected to increase. According to Defense, State, and Justice officials, state of Florida officials maintain that the state can accommodate no more than 550 parolees per week. Defense, State, and Justice officials said that senior Clinton administration officials have agreed not to exceed that figure. According to Defense Department officials, if departures could be accelerated to 690 per week, they could empty the camps by December 15, 1995, and save about $22.2 million in operating expenses. 
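As a rough check on the drawdown arithmetic above, the sketch below projects camp-emptying dates at the two weekly departure rates. The camp population and rates are the report's figures; the start date is an assumption (late June 1995, about the time of the visit), so the projected dates only approximate the report's March 15, 1996, and December 15, 1995, estimates, which presumably account for parolees already in the travel pipeline.

```python
# Projects roughly when the Guantanamo Bay camps would empty at a given
# weekly parole departure rate. Population and rates come from the report;
# the start date is an assumed value, so results are approximations.
from datetime import date, timedelta

CAMP_POPULATION = 18_802        # Cubans remaining at the time of GAO's visit
START = date(1995, 6, 27)       # assumed start of steady weekly departures

def camps_empty_by(weekly_rate: int) -> date:
    """Approximate date the camps empty at the given weekly departure rate."""
    weeks = -(-CAMP_POPULATION // weekly_rate)   # ceiling division
    return START + timedelta(weeks=weeks)

print("At 550 per week:", camps_empty_by(550))
print("At 690 per week:", camps_empty_by(690))
```

Even under these assumed inputs, the gap between the two rates spans several weeks of camp operations, which is the source of the roughly $22.2 million difference in operating expenses cited by Defense officials.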
The Departments of Defense, Justice, and State provided oral comments on this report. Their technical comments have been incorporated where appropriate. State Department officials suggested that it would have been useful to have compared costs incurred with those that might have been incurred by both the federal and state of Florida governments had no action been taken to stem the flow of Cubans to the United States. Such an analysis may be interesting, but it was not within the scope of work we were requested to perform. Furthermore, such analysis would be highly subjective because the cost would depend on many unknown factors, such as the number of Cubans who would have fled to the United States had no action been taken to stem the flow and what benefits and services would have been provided. Also, we found no evidence that the decision to reverse a 30-year policy of welcoming fleeing Cubans to the United States was based on cost considerations. We identified U.S. policies toward Cubans seeking U.S. entry through discussions with State, INS, and CRS officials and by reviewing documentation such as agreements with the Cuban government, joint communiques, administration announcements of parole and safe haven positions, and pertinent legislation. To determine the processing capabilities of the Interests Section, we interviewed INS officials in Washington, D.C., and visited the Interests Section in Havana. In Havana, we discussed with consular, INS, and senior post officials the various screening and adjudication processes for refugees, immigrants, and parolees; reviewed sample case files; and observed ongoing screenings. To determine living conditions at Guantanamo Bay, we visited the U.S. Atlantic Command in Norfolk, Virginia, to discuss its oversight of migrant operations and how it developed criteria for living standards. 
We also visited Guantanamo Bay, where we observed camp conditions, examined the parole processing procedures, and monitored the weekly meeting with the JTF Commander and the Cuban representatives from each camp. In addition, we met with JTF-160 operations and logistics officers and officials from CRS, INS, State, World Relief Corporation, IOM, and UNHCR to discuss their activities at Guantanamo Bay. To determine program costs, we obtained estimated actual and projected Cuban migrant program incremental cost data for fiscal years 1994 and 1995 from the Departments of Defense and State, INS, CRS, and the Coast Guard. We did not verify the accuracy of the agencies’ estimates. We conducted our review between April and August 1995 in accordance with generally accepted government auditing standards. Unless you announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to the Departments of State, Defense, and Justice and to interested congressional committees, and to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-4128. Major contributors to this report were David R. Martin, Assistant Director, and Audrey E. Solis, Senior Evaluator.

Harold J. Johnson, Director, International Affairs Issues

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. 
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the U.S. government's actions to address the 1994 Cuban migration crisis, focusing on: (1) how U.S. policy toward those seeking to leave Cuba has changed since that time; (2) the agencies involved in, and the costs to the government associated with, the exodus of Cubans; (3) the capabilities of the U.S. Interests Section in Havana to process applicants seeking legal entry into the United States; and (4) the adequacy of living conditions at the Cuban safe haven camps at the U.S. Naval Station in Guantanamo Bay. GAO found that: (1) for over 30 years, fleeing Cubans had been welcomed to the United States; however, the U.S. government reversed this policy on August 19, 1994, when President Clinton announced that Cuban rafters interdicted at sea would no longer be brought to the United States and would be taken to safe haven camps at the U.S. Naval Station, Guantanamo Bay, Cuba, with no opportunity for eventual entry into the United States other than by returning to Havana to apply for entry through legal channels at the U.S. Interests Section; (2) on September 9, 1994, the U.S. 
and Cuban governments agreed that the United States would allow at least 20,000 Cubans to enter annually in exchange for Cuba's pledge to prevent further unlawful departures by rafters; (3) on May 2, 1995, a White House announcement was released stating that: Cubans interdicted at sea would not be taken to a safe haven but would be returned to Cuba, where they could apply for entry into the United States at the Interests Section in Havana; eligible Cubans in the safe haven camps would be paroled into the United States; and those found to be ineligible for parole would be returned to Cuba; (4) several U.S. agencies have been involved in implementing the U.S. policy regarding Cubans wishing to leave their country, including the: (a) Department of Defense, which will spend about $434 million from August 1994 through September 1995 operating the safe haven camps; (b) U.S. Coast Guard, which spent about $7.8 million interdicting Cubans at sea from August 1994 to the present; (c) Department of Justice's Immigration and Naturalization Service and Community Relations Service, which together will spend about $48.6 million for the Cuban migration crisis from August 1994 through September 1995; and (d) Department of State, which will spend an estimated $7.1 million during this same period; (5) the U.S. Interests Section in Havana has been able to meet the workload of processing applicants seeking legal entry into the United States; (6) as of June 9, 1995, it had approved 16,305 Cubans for U.S. entry; however, not all those approved will leave Cuba by September 1995, the anniversary of the September 1994 agreement; (7) the Cubans' living conditions at the Guantanamo Bay safe haven camps are difficult but adequate based on GAO's observations at the camps; and (8) conditions in all camps generally exceeded U.N. inspection guidelines for minimal shelter, food, and water, but GAO found no internationally accepted standards for what living conditions at refugee camps should be.
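The per-agency estimates above can be totaled to confirm the overall cost figure; a minimal Python sketch (all dollar amounts are the report's own, covering August 1994 through fiscal year 1995):

```python
# Sums the per-agency incremental cost estimates for the Cuban migration
# response, in millions of dollars. All figures come from the report.
costs_millions = {
    "Defense (safe haven camps)": 434.0,
    "Coast Guard (interdiction at sea)": 7.8,
    "Justice (INS and CRS)": 48.6,
    "State (processing, liaison, World Relief grant)": 7.1,
}

total = sum(costs_millions.values())
print(f"Estimated total: ${total:.1f} million")  # consistent with "exceed $497 million"
```
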
All levels of government share responsibility in the overall U.S. election system. At the federal level, Congress has authority under the Constitution to regulate presidential and congressional elections and to enforce prohibitions against specific discriminatory practices in all federal, state, and local elections. Congress has passed legislation that addresses voter registration, absentee voting, accessibility provisions for the elderly and handicapped, and prohibitions against discriminatory practices. At the state level, individual states are responsible for the administration of both federal elections and their own elections. States regulate the election process, including, for example, the adoption of voluntary voting system guidelines, the state certification and acceptance testing of voting systems, ballot access, registration procedures, absentee voting requirements, the establishment of voting places, the provision of election day workers, and the counting and certification of the vote. In total, the overall U.S. election system can be seen as an assemblage of 55 distinct election systems—those of the 50 states, the District of Columbia, and the 4 U.S. territories. Further, although election policy and procedures are legislated primarily at the state level, states typically decentralize election administration, so that it is carried out at the city or county levels, and voting is done at the local level. As we reported in 2001, local election jurisdictions number more than 10,000, and their sizes vary enormously—from a rural county with about 200 voters to a large urban county, such as Los Angeles County, where the total number of registered voters for the 2000 elections exceeded the registered voter totals in 41 states. Further, these thousands of jurisdictions rely on many different types of voting methods that employ a wide range of voting system makes, models, and versions. 
Voting systems are but one facet of a multifaceted, continuous election system that involves the interplay of people, processes, and technology. All levels of government, as well as commercial voting system manufacturers and VSTLs, play key roles in ensuring that voting systems perform as intended. Electronic voting systems are typically developed by manufacturers, purchased as commercial off-the-shelf products, and operated by state and local election administrators. These activities can be viewed as three phases in a system’s life cycle: product development, acquisition, and operations (see fig. 1). Spanning these life cycle phases are key processes, including managing the interplay of people, processes, and technologies, and testing the systems and components. In addition, voting system standards are important through all of these phases because they provide the criteria for developing, testing, and acquiring the systems, and they specify the necessary documentation for operating the systems. We discuss each of these phases after figure 1. The product development phase includes such activities as establishing requirements for the system, designing a system architecture, and developing software and integrating components. Activities in this phase are performed by the system manufacturer. The acquisition phase includes such procurement-related activities as publishing a solicitation, evaluating offers, choosing a voting technology, choosing a vendor, and awarding and administering contracts. Activities in this phase are primarily the responsibility of state and local governments, but include responsibilities that are shared with the vendor, such as establishing contracts. The operations phase consists of such activities as ballot design and programming, setup of systems before voting, pre-election testing, vote capture and counting during elections, recounts and system audits after elections, and storage of systems between elections. 
Responsibility for activities in this phase typically resides with local jurisdictions, whose officials may, in turn, rely on or obtain assistance from system vendors for aspects of these activities. Standards for voting systems were developed at the national level by the Federal Election Commission (FEC) in 1990 and 2002 and were updated by EAC in 2005. Voting system standards serve as guidance for product developers in building systems, a framework for state and local governments to evaluate systems, and the basis for documentation needed to operate the systems. Testing processes are conducted throughout the life cycle of a voting system. For example, manufacturers conduct product testing during development of the system. Also, national certification testing of products submitted by system manufacturers is conducted by nationally accredited VSTLs. States and local jurisdictions also perform a range of system tests. Management processes help to ensure that each life cycle phase produces desirable outcomes. Typical management activities include planning, configuration management, system performance review and evaluation, problem tracking and correction, human capital management, and user training. Testing electronic voting systems for conformance with requirements and standards is critical to ensuring their security and reliability and is an essential means of ensuring that systems perform as intended. In addition, such testing can help find and correct errors in systems before they are used in elections. If done properly, testing provides voters with assurance and confidence that their voting systems will perform as intended. Testing is particularly important for electronic voting systems because these systems have become our nation's predominant method of voting, and concerns have been raised about their security and reliability. 
As we reported in 2005, these concerns include weak security controls, system design flaws, inadequate system version control, inadequate security testing, incorrect system configuration, poor security management, and vague or incomplete voting system standards. Further, security experts and some election officials have expressed concerns that tests performed under the NASED program by independent testing authorities and state and local election officials did not adequately assess voting systems’ security and reliability. Consistent with these concerns, most of the security weaknesses that we identified in our prior report related to systems that NASED had previously qualified. Our report also recognized that security experts and others pointed to these weaknesses as an indication that both the standards and the NASED testing program were not rigorous enough with respect to security, and that these concerns were amplified by what some described as a lack of transparency in the testing process. Enacted in October 2002, HAVA affects nearly every aspect of the election system, from voting technology to provisional ballots and from voter registration to poll worker training. Among other things, the act authorized $3.86 billion in funding over several fiscal years for states to replace punch card and mechanical lever voting equipment, improve election administration and accessibility, and perform research and pilot studies. In addition, the act established EAC and assigned it responsibility for, among other things, (1) updating voting system standards, (2) serving as a clearinghouse for election-related information, (3) accrediting independent test laboratories, and (4) certifying voting systems. EAC began operations in January 2004. In 2004, we testified on the challenges facing EAC in meeting its responsibilities. For example, we reported that EAC needed to move swiftly to strengthen voting system standards and the testing associated with these standards. 
We also reported that the commission’s ability to meet its responsibilities depended, in part, on the adequacy of the resources at its disposal. Updating standards: HAVA requires EAC to adopt a set of federal voting system standards, referred to as the Voluntary Voting System Guidelines (VVSG). In December 2005, the commission adopted the VVSG, which defines a set of specifications and requirements against which voting systems are to be designed, developed, and tested to ensure that they provide the functionality, accessibility, and security capabilities required to preserve the integrity of voting systems. As such, the VVSG specifies the functional requirements, performance characteristics, documentation requirements, and test evaluation criteria for the national certification of voting systems. In 2007, the Technical Guidelines Development Committee submitted its recommendations for the next iteration of the VVSG to EAC. The commission has yet to establish a date when the update will be approved and issued. Serving as an information clearinghouse: HAVA requires EAC to maintain a clearinghouse of information on the experiences of state and local governments relative to, among other things, implementing the VVSG and operating voting systems. As part of this responsibility, EAC posts voting system reports and studies that have been conducted or commissioned by a state or local government on its Web site. These reports must be submitted by a state or local government that certifies that the report reflects its experience in operating a voting system or implementing the VVSG. EAC does not review the information for quality and does not endorse the reports and studies. Accrediting independent test laboratories: HAVA assigned responsibilities for laboratory accreditation to both EAC and NIST. In general, NIST focuses on assessing laboratory technical qualifications and recommends laboratories to EAC for accreditation. 
EAC uses NIST’s assessment results and recommendations, and augments them with its own review of related laboratory capabilities to reach an accreditation decision. As we have previously reported, EAC and NIST have defined their respective approaches to accrediting laboratories that address relevant HAVA requirements. However, neither approach adequately defines all aspects of an effective program to the degree needed to ensure that laboratories are accredited in a consistent and verifiable manner. Accordingly, we recently made recommendations to NIST and EAC aimed at addressing these limitations.

Certifying voting systems: HAVA requires EAC to provide for the testing, certification, decertification, and recertification of voting system hardware and software. EAC’s voting system testing and certification program is described in detail in the following section.

Prior to HAVA, no federal agency was assigned or assumed responsibility for testing and certifying voting systems against the federal standards. Instead, NASED, through its voting systems committee, assumed this responsibility by accrediting independent test authorities, which in turn tested equipment against the standards. When testing was successfully completed, the independent test authorities notified NASED that the equipment satisfied testing requirements. NASED would then qualify the system for use in elections. According to a NASED official, the committee has neither qualified any new or modified systems, nor taken any actions to disqualify noncompliant systems, since the inception of EAC’s testing and certification program in January 2007.

EAC implemented its voting system testing and certification program in January 2007.
According to the commission’s Testing and Certification Program Manual, EAC certification means that a voting system has been successfully tested by an accredited VSTL, meets requirements set forth in a specific set of federal voting system standards, and performs according to the manufacturer’s specifications. The process of EAC’s voting system testing and certification program consists of seven major phases. Key stakeholders involved in this process include voting system manufacturers, accredited VSTLs, and state and local election officials. These seven phases are described in the following text and depicted in figure 2.

All manufacturers must be registered to submit a voting system for certification. To register, a manufacturer must provide such information as organizational structure and contact(s); quality assurance, configuration management, and document retention procedures; and identification of all manufacturing and assembly facilities. In registering, the manufacturer agrees to certain duties and requirements at the outset of its participation in the program. These requirements include properly using and representing EAC’s certification label, notifying the commission of any changes to a certified system, permitting EAC to verify the manufacturer’s quality control procedures by inspecting fielded systems and manufacturing facilities, cooperating with any inquiries and investigations about certified systems, reporting any known malfunction of a system, and otherwise adhering to all procedural requirements of the program manual.

Once a manufacturer submits a completed application form and all required attachments, EAC reviews the submission for sufficiency using a checklist that maps to the application requirements listed in the program manual.
If the application passes the review, EAC provides the manufacturer with a unique identification code and posts the applicant as a registered manufacturer on the commission’s Web site, along with relevant documentation.

For each voting system that a manufacturer wishes to have certified, it submits an application package. The package includes an application form requiring the following: manufacturer information, accredited VSTL selection, applicable voting system standard(s), nature of submission, system name and version number, all system components and corresponding version numbers, and system configuration information. The package also includes the following documentation: system implementation statement, functional diagram, and system overview. EAC reviews the submission for completeness and accuracy and, if it is acceptable, notifies the manufacturer and assigns a unique application number to the system.

Once the certification application is accepted, the accredited VSTL prepares and submits to EAC a test plan defining how it will ensure that the system meets applicable standards and functions as intended. When a laboratory submits its test plan, EAC’s technical reviewers assess it for adequacy. If the plan is deemed not acceptable, the commission provides written notice to the laboratory that includes a description of the problems identified and the steps required to remedy the test plan. The laboratory may take remedial action and resubmit the test plan until it is accepted by EAC reviewers.

The VSTL executes the approved test plan and notifies EAC directly of any test anomalies or failures, along with any changes or modifications to the test plan as a result of testing. The laboratory then prepares a test results report. The VSTL submits the test results report to EAC’s Program Director, who reviews it for completeness.
If it is complete, the technical reviewers analyze the report in conjunction with related technical documents and the test plan for completeness, appropriateness, and adequacy. The reviewers submit their findings to the Program Director, who either recommends certification of the system to the Decision Authority, EAC’s Executive Director, or refers the matter back to the reviewers for additional specified action and resubmission.

EAC’s Decision Authority reviews the recommendation of the Program Director and supporting materials and issues a written decision to the manufacturer. If certification is denied, the manufacturer may request an opportunity to correct the basis for the denial or may request reconsideration of the decision after submitting supporting written materials, data, and a rationale for its position. The Decision Authority considers the request and issues a written decision. If the decision is to deny certification, the manufacturer may request an appeal in writing to the Program Director. The Appeal Authority, which consists of two or more EAC Commissioners or other individuals appointed by the Commissioners who have not previously served as the initial or reconsideration authority, considers the appeal. The Appeal Authority may overturn the decision if it finds that the manufacturer has demonstrated by clear and convincing evidence that its system met all substantive and procedural requirements for certification.
The initial decision becomes final and EAC issues a Certificate of Conformance to the manufacturer, and posts the system on the list of certified voting systems on its Web site, when the manufacturer and VSTL successfully demonstrate that the voting system under test has been:

Subject to a trusted build: The voting system’s source code is converted to executable code in the presence of at least one VSTL representative and one manufacturer representative, using security measures to ensure that the executable code is a verifiable and faithful representation of the source code. This demonstrates that (1) the software was built as described in the technical documentation, (2) the tested and approved source code was actually used to build the executable code on the system, and (3) no other elements were introduced in the software build. It also serves to document the configuration of the certified system for future reference.

Placed in a software repository: The VSTL delivers the following to one or more trusted repositories designated by EAC: (1) source code used for the trusted build and its file signatures; (2) disk image of the prebuild, build environment, and any file signatures to validate that it is unmodified; (3) disk image of the postbuild, build environment, and any file signatures to validate that it is unmodified; (4) executable code produced by the trusted build and its file signatures of all files produced; and (5) installation device(s) and its file signatures.

Verified using system identification tools: The manufacturer creates and makes available system identification tools that federal, state, and local officials can use to verify that their voting systems are unmodified from the system that was certified. These tools are to provide the means to identify and verify hardware and software.
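The file-signature checks described above amount to comparing cryptographic digests of a fielded system’s files against a manifest recorded at the time of the trusted build. The following is a minimal illustrative sketch of that comparison, not a description of EAC’s or any manufacturer’s actual tooling; the function names and the manifest format (relative path mapped to SHA-256 hex digest) are assumptions made for illustration:

```python
import hashlib
from pathlib import Path

def file_signature(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks so large images are handled."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def compare_to_manifest(actual: dict[str, str], manifest: dict[str, str]) -> list[str]:
    """Compare observed file signatures against the trusted-build manifest.

    Returns discrepancies: files missing from the system, files whose
    signatures differ from the certified build, and files present on the
    system that the certified build never contained.
    """
    problems = []
    for name, expected in manifest.items():
        if name not in actual:
            problems.append(f"missing: {name}")
        elif actual[name] != expected:
            problems.append(f"modified: {name}")
    problems += [f"unexpected: {n}" for n in sorted(actual.keys() - manifest.keys())]
    return problems

def verify_install(install_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Walk an installation directory and check it against the manifest."""
    actual = {
        str(p.relative_to(install_dir)): file_signature(p)
        for p in install_dir.rglob("*") if p.is_file()
    }
    return compare_to_manifest(actual, manifest)
```

An empty result from `verify_install` would indicate that every file matches the certified build; any "modified" or "unexpected" entry flags a configuration that differs from the version that was certified.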
To its credit, EAC has taken steps to develop an approach to testing and certifying voting systems that follows statutory requirements and many recognized and accepted practices. However, the commission has not developed its approach in sufficient detail to ensure that its certification activities are performed thoroughly and consistently. It has not, for example, defined procedures or specific criteria for many of its review activities, or for ensuring that decisions and their bases are properly documented. According to EAC officials, these gaps exist because the program is still new and evolving and resources are limited. Officials further stated that they do not yet have written plans for addressing these gaps. Until these gaps are addressed, EAC cannot adequately ensure that its approach is repeatable and verifiable across all manufacturers and systems. Moreover, this lack of definition has caused EAC stakeholders to interpret certification requirements differently, and the resultant need to reconcile these differences has contributed to delays in certifying systems that several states were planning to use in the 2008 elections.

Product certification or conformance testing is a means by which a third party provides assurance that a product conforms to specific standards. In the voting environment, EAC is the third party that provides assurance to the buyer (e.g., state or local jurisdictions) that the manufacturer’s voting system conforms to the federal voting standards set forth in FEC’s 2002 Voting System Standards (VSS) or EAC’s 2005 VVSG. Several organizations, such as NIST, ISO, and IEC, have individually or jointly developed guidance for product certification and conformance testing programs.
This guidance includes, among other things, (1) defining roles and responsibilities for all parties involved in the certification process, (2) defining a clear and transparent process for applicants to follow, (3) ensuring that persons involved in the process are impartial and independent, (4) establishing a process for handling complaints and appeals, and (5) having testing conducted by competent laboratories. Further, HAVA established statutory requirements for a federal testing and certification program. These requirements include ensuring that the program covers testing, certification, decertification, and recertification of voting system hardware and software. EAC’s defined voting system certification approach reflects these key practices. Specifically:

EAC has defined the roles and responsibilities for itself, the VSTLs, and manufacturers in its Testing and Certification Program Manual. These roles and responsibilities are described in table 1.

EAC’s testing and certification process is documented in its program manual. Among other things, the manual clearly defines the program’s administrative requirements that manufacturers and VSTLs are to follow. EAC has made the program manual, along with supporting policies and clarifications, publicly available on its Web site, and has made program-related news and correspondence publicly accessible as they have become available.

EAC’s certification program addresses impartiality and independence. For example, EAC policy states that all personnel and contractors involved in the certification program are subject to conflict-of-interest reporting and review.
In addition, the policy mandates conflict-of-interest and conduct statements for the technical reviewers that support the program, requires conflict-of-interest reporting and reviews to ensure the independence of EAC personnel assigned to the program, and requires that all VSTLs maintain and enforce policies that prevent conflicts of interest, or the appearance of a conflict of interest, and other prohibited practices.

EAC’s program manual outlines its process for the resolution of complaints, appeals, and disputes received from manufacturers and laboratories. These can be about matters relating to the certification process, such as test methods, procedures, test results, or program administration. Specifically, the program manual contains policies and procedures for submitting a Request for Interpretation, which is a means by which a registered manufacturer or accredited laboratory seeks clarification on a specific voting system standard, including any misunderstandings or disputes about a standard’s interpretation or implementation. The manual also contains policies, requirements, and procedures for a manufacturer to file an appeal on a decision denying certification, request an opportunity to correct a problem, and request reconsideration of a decision. In addition, EAC provides for Notices of Clarification, which offer guidance and explanation on the requirements and procedures of the program. Notices may be issued pursuant to a clarification request from a laboratory or manufacturer. EAC may also issue a notice or interpretation if it determines that any general clarifications are necessary.

EAC has a VSTL accreditation program. This program is supported by NIST’s National Voluntary Laboratory Accreditation Program (NVLAP), which is a long established and recognized laboratory accreditation program. According to EAC’s program manual, all certification testing is to be performed by a laboratory accredited by NIST and EAC.
Further, all subcontracted testing is to be performed by a laboratory accredited by either NIST or the American Association of Laboratory Accreditation for the specific scope of needed testing. Finally, any prior testing will be accepted only if it was conducted or overseen by an accredited laboratory and was reviewed and approved by EAC.

HAVA also established certain requirements for EAC’s voting system testing and certification program. Under HAVA, EAC is to provide for the testing, certification, decertification, and recertification of voting system hardware and software by accredited laboratories. EAC’s defined approach addresses each of these areas.

According to program officials, EAC’s certification program reflects many leading practices because the commission consciously sought out these best practices during program development. Officials stated that their intention is to develop a program that stringently tests voting systems to the applicable standards; therefore, they consulted with experts to assist with drafting the program manual. For example, according to EAC officials, they met with officials from other federal agencies that conduct certification testing in order to benefit from their lessons learned. By reflecting relevant practices, standards, and legislative requirements in its defined approach, EAC has provided an important foundation for an effective voting system testing and certification program.

EAC has yet to define its approach for testing and certifying electronic voting systems in sufficient detail to ensure that its certification activities are performed thoroughly and consistently. It has not, for example, defined procedures or specific criteria for many of its review activities and for ensuring that the decisions made are properly documented. EAC officials attributed this lack of definition to the fact that the program is still new and evolving, and they stated that available resources are constrained by competing priorities.
Until these details are defined, EAC will be challenged to ensure that testing and review activities are repeatable across different systems and manufacturers, and that the activities it performs are verifiable. Moreover, this lack of definition is likely to result in different interpretations of program requirements by stakeholders; reconciling such differing interpretations has already caused delays in certifying systems that several states intended to use in the 2008 elections. In such cases, the delays are forcing states to either not require EAC certification or rely on an alternative system.

According to federal and international guidance, having well-defined and sufficiently detailed program management controls helps to ensure that programs are executed effectively and efficiently. Relative to a testing and certification program, such management controls include, among other things, having (1) defined procedures and established criteria for performing evaluation activities so that they will be performed in a comparable, unambiguous, and repeatable manner for each system and (2) required documentation to demonstrate that procedural evaluation steps and related decisions have been effectively performed, including provisions for review and approval of such documentation by authorized personnel.

EAC’s defined approach for voting system testing and certification lacks such detail and definition. With respect to the first management control, the commission has not defined procedures or specific criteria for many of its review activities; instead, it relies on the personal judgment of the reviewers. Specifically, the program manual states that EAC, with the assistance of its technical experts, as necessary, will review manufacturer registration applications, system certification applications, test plans, and test reports, but it does not define procedures or criteria for conducting these reviews.
For example:

The program manual states that upon receipt of a completed manufacturer registration application, EAC will review the information for sufficiency. However, it does not define what constitutes sufficiency or what this sufficiency review should entail. Rather, EAC officials said that this is left up to the individual reviewer’s judgment.

The program manual lists the information that manufacturers are required to submit as part of their certification applications and states that EAC will review the submission for completeness and accuracy. While the commission has developed a checklist for determining whether the required information was included in the application, neither the program manual nor the checklist describes how reviewers should perform the review or assess the adequacy of the information provided. For example, EAC requires certification applications to include a functional diagram depicting how the components of the voting system function and interact, as well as a system overview that includes a description of the functional and physical interfaces between components. Although the checklist provides for determining whether these items are part of the application package, it does not, for example, provide for checking them for completeness and consistency. Moreover, we identified issues with the completeness and consistency of these documents in approved certification application packages. Again, EAC officials said that these determinations are to be based on each reviewer’s judgment.

The program manual states that test plans are to be reviewed for adequacy. However, it does not define adequacy or how such reviews are to be performed. This lack of detail is particularly problematic because EAC officials told us that the VSS and VVSG contain many vague and undefined requirements.
According to these officials, reviewers have been directed to ensure that VSTLs stress voting systems during testing, based on what they believe are the most difficult and stringent conditions likely to be encountered in an election environment that are permissible under the standards.

The program manual states that EAC technical experts will assess test results reports for completeness, appropriateness, and adequacy. However, it does not define appropriateness and adequacy or the procedural steps for conducting the review.

The program manual requires VSTLs to use all applicable test suites issued by EAC when developing test plans. However, program officials stated that they currently do not have defined test suites, and NIST officials said that they are focused on preparing test suites for the forthcoming version of the VVSG and not the 2005 version. As a result, each laboratory develops its own unique testing approach, which requires each to interpret what is needed to test for compliance with the standards and increases the risk of considerable variability in how testing is performed. To address this void, EAC has tasked NIST with developing test suites for both the 2005 VVSG and the yet-to-be-released update to these guidelines. Until then, EAC officials acknowledge that they will be challenged to ensure that testing conducted on different systems or at different VSTLs is consistent and comparable.

With respect to the second management control, the commission has not defined the documentation requirements that would demonstrate that procedural steps, evaluations, and related decision making have been performed in a thorough, consistent, and verifiable manner. Specifically, while the program manual requires the Program Director to maintain documentation to demonstrate that procedures and evaluations were effectively performed, EAC has yet to specify the nature or content of this documentation.
For example:

The program manual requires technical reviewers to assess test plans and test reports prepared by laboratories and then to submit reports of their findings to the Program Director. However, EAC does not require documentation of how these reviews were performed beyond completion of a recently developed checklist, and this checklist does not provide for capturing how decisions were reached, including the steps performed and criteria applied. For example, the VVSG requires that systems permit authorized access and prevent unauthorized access, and lists examples of measures to accomplish this, such as computer-generated password keys and controlled access security. While the checklist cites this requirement and provides for the reviewer to indicate whether the test plan satisfies it, it does not provide specific guidance on how to determine whether the access control measures are adequate, and it does not provide for documenting how the reviewer made such a decision.

The program manual does not require supervisory review of work conducted by EAC personnel. Moreover, it does not require that the reviewers be identified.

According to EAC officials, the commission’s approach does not yet include such details because it is still new and evolving and because the commission’s limited resources have been devoted to other priorities. To address these gaps, EAC officials stated that they intend to undertake a number of activities. For example, the Program Director stated that in the near term, the technical reviewers will collaborate and share views on each test plan and test report under review as a way to provide consistency, and they will use a recently finalized checklist for test plan and test report reviews. In the longer term, EAC intends to define more detailed procedures for each step in its process. However, it has yet to establish documented plans, including the level of resources needed to accomplish this.
Until these plans are developed and executed, EAC will be challenged to ensure that its testing and certification activities are performed thoroughly and consistently across different systems, manufacturers, and VSTLs. Moreover, this lack of program definition surrounding certification testing and review requirements has already caused, and could continue to cause, differences in how EAC reviewers, VSTLs, and manufacturers interpret the requirements. For example, the program requires sufficient and adequate testing, but it does not define what constitutes sufficient and adequate. As a result, laboratory officials and manufacturer representatives told us that EAC reviewers have interpreted these requirements more stringently than they have, and that reconciling these different interpretations has already caused delays in the approval of test plans and will likely prevent EAC from certifying any systems in time for use in the upcoming 2008 elections. This is especially problematic for those states that have statutory or other requirements to use federally certified systems and that need to acquire or upgrade existing systems for these elections. In this regard, 18 states reported to us that they were relying on EAC to certify systems for use in the 2008 elections but will now have to adopt different strategies for meeting their respective certification requirements. For example, officials for several states said that they would use the same system as in 2006, while officials for other states described plans to either undertake upgrades to existing systems without federal certification or change state requirements to no longer require EAC certification.

EAC has largely followed its defined certification approach for each of the dozen voting systems that it is in the process of certifying, with one major exception.
Specifically, it has not established sufficient means for states and local jurisdictions to verify that the voting systems that each receives from its manufacturer have system configurations that are identical to those of the system that EAC certified, and it has not established plans or time frames for doing so. This means that states and local jurisdictions are at increased risk of using a version of a system during an election that differs from the certified version. This lack of an effective and efficient verification capability could diminish the value of an EAC system certification.

EAC has largely executed its voting system testing and certification program as defined. While no system has yet completed all major steps of the certification process discussed in the Background section of this report, and thus none has received certification, 12 different voting systems have completed at least the first step of the process, and some have completed several steps. Specifically, as of May 2008, EAC had received, reviewed, and approved 12 manufacturer registration applications, and these approved manufacturers have collectively submitted certification applications for 12 different systems, of which EAC has accepted 9. For these 9 systems, manufacturers have submitted 7 test plans, of which EAC has reviewed and approved 2 plans. In 1 of these 2 cases, EAC has received a test results report, which it is currently in the process of reviewing. At the same time, EAC has responded to 8 requests for interpretations of standards from manufacturers and laboratories. EAC has also issued 6 notices of clarification, which provide further guidance and explanation on the program requirements and procedures. Our analysis of available certification-related documentation showed that for each of the 12 systems submitted for certification, all elements of each executed step in the certification process were followed.
With respect to the manufacturer registration step, EAC reviewed and approved all 12 applications, as specified in its program manual. For the certification application step, EAC reviewed and approved 9 applications. In doing so, EAC issued 3 notices of noncompliance to manufacturers for failure to comply with program requirements. In each notice, it identified the area of noncompliance and described the relevant information or corrective action(s) needed in order to continue participating in the program. In 1 of the cases, EAC terminated the certification application due to the manufacturer’s failure to respond within the established time frame. These actions were generally consistent with its program manual.

Notwithstanding EAC’s efforts to follow its defined approach, it has not yet established a sufficient mechanism for states and local jurisdictions to use in verifying that the voting systems that they receive from manufacturers for use in elections are identical to the systems that were actually tested and certified. According to EAC’s certification program manual, final certification is conditional upon (1) testing laboratories depositing certified voting system software into an EAC-designated repository and (2) manufacturers creating and making available system identification tools for states and local jurisdictions to use in verifying that their respective systems’ software configurations match that of the software for the system that was certified and deposited into the repository. However, EAC has yet to establish a designated repository or procedures and review criteria for evaluating the manufacturer-provided tools, and has not established plans or time frames for doing so. While none of the ongoing system certifications have progressed to the point where these aspects of EAC’s defined approach are applicable, they will be needed when the first system’s test results are approved.
Until both aspects are in place, state and local officials will likely face difficulties in determining whether the systems that they receive from manufacturers are the same as the systems that EAC certified.

EAC’s program requires the use of a designated software repository for certified voting systems. Specifically, the certification program manual states that final certification will be conditional upon, among other things, the manufacturer and VSTL creating and documenting a trusted software build, and the laboratory depositing the build in a designated repository. In its 2005 VVSG, EAC designated NIST’s National Software Reference Library (NSRL) as its repository and required its use. However, program officials stated that the commission does not intend to use the NSRL as its designated repository because the library cannot perform all required functions. While these officials added that they may use the NSRL for portions of the certification program, they said it will not serve as EAC’s main repository. Nevertheless, the commission has not established plans to identify and designate another repository, and has yet to define minimum requirements (functional, performance, or interface) for what it requires in a repository to efficiently support states and local jurisdictions.

As an interim measure, an EAC official stated that the commission will store the trusted builds on compact disks and keep the disks in fireproof filing cabinets in its offices. According to the official, this approach is consistent with established program requirements because the program manual merely refers to the use of a trusted archive or repository designated by EAC. Under this measure, state and local election officials will have to request physical copies of material from EAC and wait for the materials to be physically packaged and delivered.
This interim approach is problematic for several reasons, including the demands it would place on EAC resources to keep up with requests. Moreover, the Executive Director told us that a more permanent repository solution has not been a commission focus because its limited resources have been devoted to other priorities.

EAC has also not defined how it will ensure that manufacturers develop and provide to states and local jurisdictions tools to verify their respective systems’ software against the related trusted software builds. According to the program manual, final certification of a voting system is also conditional upon manufacturers creating and making available to states and local jurisdictions tools to compare their systems’ software with the trusted build in the repository. In doing so, the program manual states that manufacturers shall develop and make available tools of their choice, and that the manufacturer must submit a letter certifying the creation of the tools and include a copy and description of the tools to EAC. The commission may choose to review the tools. Further, the 2005 VVSG provides some requirements for software verification tools, for example, that the tools provide a method to comprehensively list all software files that are installed on the voting system. However, EAC has yet to specify exactly what needs to be done to ensure that manufacturers provide effective and efficient system identification tools and processes, and it has not developed plans for ensuring that this occurs. Instead, EAC has stated that until it defines procedures to supplement the program manual, it will review each tool submitted. However, the commission has yet to establish specific criteria for the assessment.
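The VVSG requirement cited above — that a tool comprehensively list all software files installed on the voting system — could be met by a tool that emits a manifest of file paths and digests, so that a listing produced on a fielded system can be compared against the listing recorded at the trusted build. The sketch below is purely illustrative of that idea; the manifest format and function names are assumptions, not a description of any manufacturer’s actual verification tool:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Comprehensively list every file under root with its SHA-256 digest."""
    manifest = {}
    for p in sorted(root.rglob("*")):
        if p.is_file():
            manifest[str(p.relative_to(root))] = hashlib.sha256(p.read_bytes()).hexdigest()
    return manifest

def manifest_fingerprint(manifest: dict[str, str]) -> str:
    """A single digest over the whole manifest, so an election official can
    compare two listings at a glance; canonical JSON (sorted keys) makes the
    result independent of the order in which files were listed."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()
```

Under this scheme, a matching fingerprint would indicate that every listed file on the fielded system is byte-for-byte identical to the deposited trusted build, while any mismatch would prompt a file-by-file comparison of the two manifests.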
Without an established means for effectively and efficiently verifying that acquired systems have the same system configurations as the version that EAC certified, states and local jurisdictions will not know whether they are using federally certified voting systems. The absence of such tools unnecessarily increases the risk that a system bearing EAC’s mark of certification differs from the certified version. As part of its voting system testing and certification program, EAC has broadly described an approach for tracking and resolving problems with certified voting systems, and for using information about these problems to improve its program. This approach reflects some key aspects of relevant guidance. However, other aspects are either missing or not adequately defined, and although EAC officials stated that they intend to address some of these gaps, the commission does not have defined plans or time frames for doing so. Commission officials cited limited resources and competing priorities as reasons for these gaps. In addition, EAC’s problem tracking and resolution approach does not extend to any of the voting systems that are likely to be used in the 2008 elections, and it is uncertain when, and to what extent, this situation will change. This is because the approach’s defined scope includes only EAC-certified systems; it does not include NASED-qualified systems or any other systems to be used in elections. EAC officials stated that the reason is that HAVA does not explicitly assign the commission responsibility for systems other than those it certifies. This means that no federal entity is currently responsible for tracking and facilitating the resolution of problems found with the vast majority of voting systems that are used across the country today and that could be used in the future, and thus states and local jurisdictions must deal with problems with their systems on their own.
According to published guidance, tracking and resolving problems with certified products is important. This includes, among other things, (1) withdrawing certification if a product becomes noncompliant; (2) regularly monitoring the continued compliance of products being produced and distributed; (3) investigating the validity and scope of reports of noncompliance; (4) requiring the manufacturer to take corrective actions when defects are discovered, and ensuring that such actions are taken; and (5) using information gathered from these activities to improve the certification program. EAC’s approach for tracking and resolving problems with certified systems reflects some, but not all aspects of these five practices. First, its certification program includes provisions for withdrawing certification for noncompliant voting systems. For example, the program manual describes procedures for decertifying a noncompliant voting system if the manufacturer does not take timely or sufficient action to correct instances of noncompliance. According to these procedures, the decertification decision cannot be made until the manufacturer is formally alerted to the noncompliance issue and provided with an opportunity to correct the issue (problem) or to submit additional information for consideration. Also, a manufacturer can dispute the decision by requesting an appeal. The procedures also state that upon decertification, the manufacturer cannot represent the system as certified and the system may not be labeled with a mark of certification. In addition, EAC is to remove the system from its list of certified systems, alert state and local election officials to the system’s decertification via monthly newsletters and e-mail updates, and post all correspondence regarding the decertification on its Web site. Second, EAC’s certification program includes provisions for postcertification oversight of voting systems and manufacturers. 
Specifically, the program manual provides for reviewing and retesting certified voting systems to ensure that they have not been modified, and that they continue to comply with applicable standards and program requirements. In addition, the program manual calls for periodic inspections of manufacturers’ facilities to evaluate their production quality, internal test procedures, and overall compliance with program requirements. However, the program manual states that reviewing and retesting of certified systems is an optional step, and does not specify the conditions under which this option is to be exercised. Further, while the program manual provides for conducting periodic inspections of manufacturers’ facilities, it does not define, for example, who is to conduct the inspections, what procedures and evaluation criteria are to be used in conducting them, and how they are to be documented. Third, EAC’s certification program includes provisions for investigating reports of system defects. According to the program manual, investigations of reports alleging defects with certified systems begin as informal inquiries that can potentially become formal investigations. Specifically, the Program Director is to conduct an informal inquiry to determine whether the reported defect information is credible and, if it is, whether the defect would warrant decertification. If both conditions are met, then a formal investigation is to be conducted. Depending on the outcome, decertification of the system could result. However, the program manual does not call for assessing the scope or impact of any defects identified, such as whether a defect is confined to an individual unit or whether it applies to all such units.
Further, the program manual does not include procedures or criteria for determining the credibility of reported defects, or any other aspect of the inquiry or investigation, such as how EAC will gain access to systems once they are purchased and fielded by states and local jurisdictions. This is particularly important because EAC does not have regulatory authority over state election authorities, and thus, it cannot compel their cooperation during an inquiry or investigation. Fourth, EAC’s certification program does not address how it will verify that manufacturers take required corrective actions to fix problems identified with certified systems. Specifically, the program manual states that the manufacturer is to provide EAC with a compliance plan describing how it will address identified defects. However, the program manual does not define an approach for evaluating the compliance plan and confirming that a manufacturer actually implements the plan. According to EAC officials, they see their role as certifying a system and informing states of modifications. As a result, they do not intend to monitor how or whether manufacturers implement changes to fielded systems. In their view, the state ultimately decides if a system will be fielded. Fifth, EAC’s certification program provides for using information generated by its problem tracking and resolution activities to improve the program. According to the program manual, information gathered during quality monitoring activities will be used to, among other things, identify improvements to the certification process and to inform the related standards-setting process. Further, the program manual states that information gathered from these activities will be used to inform relevant stakeholders of issues associated with operating a voting system in a real-world environment and to share information with jurisdictions that use similar systems.
However, the program manual does not describe how EAC will compile and analyze the information gathered to improve the program, or how it will coordinate these functions with information gathered in performing its HAVA-assigned clearinghouse function. EAC officials attributed the state of their problem tracking and resolution approach to the newness of the certification program and the fact that the commission’s limited resources have been devoted to other priorities. In addition, while these officials said that they intend to address some of these gaps, they do not have defined plans or time frames for doing so. For example, while EAC officials stated that they plan to develop procedures for investigating voting system problems and for inspecting manufacturing facilities, they said that it is the states’ responsibility to ensure that corrective actions are implemented on fielded systems. To illustrate their resource challenges, these officials told us that three staff are assigned to the testing and certification program and each is also supporting other programs. In addition, they said that the commission’s technical reviewers are experts who, under Office of Personnel Management regulation, work no more than half of the year. Given that EAC has not yet certified a system, the impact of these definitional limitations has yet to be realized. Nevertheless, with 12 systems currently undergoing certification, it is important for the commission to address these limitations quickly. If it does not, EAC will be challenged in its ability to effectively track and resolve problems with the systems that it certifies. The scope of EAC’s efforts to track and resolve problems with certified voting systems does not extend to those systems that were either qualified by NASED or were not endorsed by any national authority. According to program officials, the commission does not have the authority or the resources needed to undertake such a responsibility.
Instead of tracking and resolving problems with these systems, EAC anticipates that they will eventually be replaced or upgraded with certified systems. Our review of HAVA confirmed that the act does not explicitly assign EAC any responsibilities for noncertified systems, although it also does not preclude EAC from tracking and facilitating the resolution of problems with these systems. As a result, the commission’s efforts to track and resolve problems with voting systems do not include most of the voting systems that will be used in the 2008 elections. More specifically, while EAC has efforts under way relative to the certification of 12 voting systems, as we have previously described in this report, commission officials stated that it will be difficult to field any system that EAC anticipates certifying before 2008 in time for the 2008 elections. Thus, voting systems used in national elections will likely be either those qualified under the now-discontinued NASED program, or those not endorsed by any national entity. Moreover, this will continue to be the case until states voluntarily begin to adopt EAC-certified systems; when that will occur is uncertain because only 18 states reported having requirements to use EAC-certified voting systems. Restated, most states’ voting systems will not be covered by EAC’s problem tracking and resolution efforts, and when and if they will is not known. Moreover, manufacturers may or may not upgrade existing, noncertified systems, and they may or may not seek EAC certification of those systems. Thus, it is likely that many states, and their millions of voters, will not use EAC-certified voting systems for the foreseeable future.
Nevertheless, EAC has initiated efforts under the auspices of its HAVA-assigned clearinghouse responsibility to receive information that is volunteered by states and local jurisdictions on problems and experiences with systems that it has not certified, and to post this information on the commission’s Web site to inform other states and jurisdictions about the problems. In doing so, EAC’s Web site states that the commission does not review the information for quality and does not endorse the reports and studies. Notwithstanding this clearinghouse activity, no national entity is currently responsible for tracking and facilitating the resolution of problems found with the vast majority of voting systems that are in use across the country. This in turn leaves state and local jurisdictions on their own to discover, disclose, and address any shared problems with systems. This not only increases the chances of states and local jurisdictions duplicating efforts to get problems fixed, but also increases the chances that problems addressed by one state or jurisdiction may not even be known to another. A key to overcoming this situation will be strong central leadership. The effectiveness of our nation’s overall election system depends on many interrelated and interdependent variables. Among these are the security and reliability of the voting systems that are used to cast and count votes, which in turn depend largely on the effectiveness with which these systems are tested and certified. EAC plays a pivotal role in testing and certifying voting systems. To its credit, EAC has recently established and begun implementing a voting system testing and certification program that is to both improve the quality of voting systems in use across the country and help foster public confidence in the electoral process. While EAC has made important progress in defining and executing its program, more can be done.
Specifically, key elements of its defined approach, such as the extent to which certification activities are to be documented, are vague, while other elements, such as threshold criteria for making certification-related decisions, are wholly undefined. Moreover, a key element that is defined, namely giving states and local jurisdictions an effective and efficient means to access the certified version of a given voting system’s software, has yet to be implemented. While EAC acknowledges the need to address these gaps, it has yet to develop specific plans or time frames for doing so that, among other things, ensure that adequate resources are sought. Addressing these gaps is very important because their existence not only increases the chances that testing and certification activities will be performed in a manner that is neither repeatable nor verifiable, but also can create misunderstandings among manufacturers and VSTLs that can lead to delays in the time needed to certify systems. Such delays have already been experienced, to the point that needed upgrades to current systems will likely not be fielded in time for use in the 2008 elections. Such situations ultimately detract from, rather than enhance, election integrity and voter confidence. Moreover, by not having established an effective means for states and local jurisdictions to verify that the systems each acquires are the same as the EAC-certified version, EAC is increasing the risk of noncertified versions ultimately being used in an election.
Beyond the state of EAC’s efforts to define and follow an approach to testing and certifying voting systems, including efforts to track and resolve problems with certified systems and use this information to improve the commission’s testing and certification program, a void exists relative to a national focus on tracking and resolving problems with voting systems that EAC has not certified, an area for which the commission has been assigned neither explicit responsibility nor the resources to address it. Unless this void is filled, state and local governments will likely continue to be on their own in resolving performance and maintenance issues for the vast majority of voting systems in use today and in the near future. To assist EAC in building upon and evolving its voting system testing and certification program, we recommend that the Chair of the EAC direct the commission’s Executive Director to ensure that plans are prepared, approved, and implemented for developing and implementing

- detailed procedures, review criteria, and documentation requirements to ensure that voting system testing and certification review activities are conducted thoroughly, consistently, and verifiably;

- an accessible and available software repository for testing laboratories to deposit certified versions of voting system software, as well as procedures and review criteria for evaluating related manufacturer-provided tools to support stakeholders in comparing their systems with this repository; and

- detailed procedures, review criteria, and documentation requirements to ensure that problems with certified voting systems are effectively tracked and resolved, and that the lessons learned are effectively used to improve the certification program.
To address the potentially longstanding void in centrally facilitated problem identification and resolution for non-EAC-certified voting systems, we are raising for congressional consideration expanding EAC’s role under HAVA such that, consistent with both the commission’s nonregulatory mission and the voluntary nature of its voting system standards and certification program, EAC is assigned responsibility for providing resources and services to facilitate understanding and resolution of common voting system problems that are not otherwise covered under EAC’s certification program, and providing EAC with the resources needed to accomplish this. In written comments on a draft of this report, signed by the EAC Executive Director, and reprinted in appendix II, the commission stated that it agrees with the report’s conclusion that more can be done to build on the existing voting system certification program and ensure that certifications are based on consistently performed reviews. In addition, EAC stated that it has found our review and report helpful in its efforts to fully implement and improve this program. It also stated that it generally accepts our three recommendations with little comment, adding that it will work hard to implement them. In this regard, it cited efforts that are planned or underway to address the recommendations. EAC provided additional comments on the findings that underlie each of the recommendations, which it described as needed to clarify and avoid confusion about some aspects of its certification program. According to EAC, these comments are intended to allay some of the concerns raised in our findings. We summarize and evaluate these comments below, and to avoid any misunderstanding of our findings and the recommendation associated with one of them, we have modified the report, as appropriate, in response to EAC’s comments. 
EAC also provided comments on our matter for congressional consideration, including characterizing it as intending “to affect a sea change in the way that EAC operates its testing and certification” program. We agree that EAC does not have the authority to compel manufacturers or states and local jurisdictions to submit to its testing and certification program and that the wording of the matter in our draft inadvertently led EAC to believe that our proposal would require the commission to assume a more regulatory role. In response, we have modified the wording that we used to clarify any misunderstanding as to our intent. With respect to our first recommendation for developing and implementing plans for ensuring that voting system testing and certification review activities are governed by detailed procedures, criteria, and documentation requirements, EAC stated that it is committed to having a program that is rigorous and thorough, and that its program manual creates such a program. It also stated that it agrees with our recommendation and that it will work to further the process in its manual by implementing detailed procedures. In this regard, however, it took issue with five aspects of our finding. With respect to our point that criteria and procedures have not been adequately defined for reviewing the information in the manufacturer registration application package, EAC stated that such criteria are not necessary because the package does not require a determination of sufficiency. We do not agree. According to section 2.4.2 of the program manual, EAC is to review completed registration applications for sufficiency. However, as our report states, the manual does not define criteria as to what constitutes sufficiency and what this sufficiency review should entail, and EAC officials stated this determination is left up to the individual reviewer’s judgment. 
Concerning our point that criteria and procedures have not been adequately defined for reviewing the information in the system certification application package, EAC stated that a technical review of the package is not required. Rather, it said that the package simply requires a determination that all necessary information is present, adding that it has a checklist to assist a reviewer in determining this. We do not agree with this comment for two reasons. First, our report does not state that the package review is technical in nature, but rather that it is a review to determine a package’s completeness and accuracy. Second, the checklist does not include any criteria upon which to base a completeness and accuracy determination. As EAC’s comments confirm, this is important because the information in the package is to be used by technical reviewers as they review test plans and test reports to ensure that the testing covers all aspects of the voting system. For example, EAC requires certification applications to include a functional diagram depicting how the components for the voting system function and interact, as well as a system overview that includes a description of the functional and physical interfaces between components. Although the checklist provides for determining whether these items are part of the application package, it does not provide for checking them for completeness and consistency. We have clarified this finding in our report by including this example. As to our point that EAC has not defined how technical reviewers are to determine the adequacy of system test plans and reports, the commission stated that our report does not take into account what it described as a certification requirements traceability matrix that its technical reviewers use to assess the completeness and adequacy of the plans and reports. However, EAC also acknowledged in its comments that procedures have yet to be established relative to the use of the matrix.
Further, we reviewed this matrix, which we refer to in our report as a checklist; as we state there, this checklist does not provide for capturing how decisions were reached, including steps performed and criteria applied. For example, the VVSG requires that systems permit authorized access and prevent unauthorized access, and lists examples of measures to accomplish this, such as computer-generated password keys and controlled access security. While the checklist cites this requirement and provides for the reviewer to indicate whether the test plan satisfies it, it does not provide specific guidance on how to determine whether the access control measures are adequate, and it does not provide for documenting how the reviewer made such a decision. In response to EAC’s comments, we have added this access control example to clarify our finding. With regard to our point that the program manual does not include defined test suites, EAC commented on the purpose of these test suites and stated that it would not be appropriate to include them in the manual because the manual is not intended to define technical requirements for testing. We agree that the program manual should not include actual test suites, and it was not our intent to suggest that it should. Rather, our point is that test suites do not yet exist. Accordingly, we have modified our report to more clearly reflect this. In addition, we acknowledge EAC’s comment that NIST is currently in the process of developing test suites, and that it recently sent several test suites to the VSTLs and other stakeholders for review. However, as we state in our report, NIST officials said that they are focused on preparing test suites for the yet-to-be-released update to the VVSG and not the 2005 version. Further, EAC has not yet established plans or time frames for finalizing test suites for either version of these guidelines, and the program manual does not make reference to the development of these test suites.
In commenting on our point that differences in interpretation of program requirements have resulted in test plan and report approval delays, EAC stated that its interpretation process provides a means for VSTLs and manufacturers to request clarification of the voting system standards that are ambiguous. We agree with this statement. However, we also believe that having the kind of defined procedures and established criteria that are embodied in our recommendation will provide a common understanding among EAC stakeholders around testing and certification expectations, which should minimize the need to reconcile differences in interpretations later in the process. With respect to our second recommendation for developing and implementing plans for an accessible and available software repository for certified versions of voting system software, as well as the related manufacturer-provided procedures and tools to support stakeholders in using this repository, EAC stated that it agrees that implementation of a repository is needed. However, it stated that there is some misunderstanding regarding the purpose of the repository and the creation of software identification tools. Specifically, the commission stated that the repository is intended for the commission’s own use when conducting investigations of fielded systems, while the manufacturer-provided system identification tools are for use by state and local election officials to confirm that their systems are the same as the one certified by EAC. In addition, it described steps taken or under way to ensure that a repository and identification tools are in place when needed. This includes “placing the onus” on system manufacturers to create verification tools, investigating software storage options, and discussing with another government agency and outside vendors the possibility of providing secure storage for certified software. We agree with EAC that its repository serves as a tool for its internal use. 
However, the repository is also to serve state and local election officials in verifying that their respective systems are identical to the EAC-certified versions of those systems. According to the 2005 VVSG software distribution and setup validation requirements, voting system purchasers are to verify that the version of the software they receive from a manufacturer is the same as the version certified by EAC by comparing it with the reference information generated by the designated repository. Further, commission officials told us that EAC’s repository needs to be accessible and easy to use by state and local election officials. While we understand that the manufacturer-provided system identification tools serve a separate function from the repository, the tools and the repository are both required for state and local election officials to verify their systems. We also acknowledge that the commission has initiated steps relative to establishing a repository and identification tools. However, our point is that EAC does not have any plans or time frames for accomplishing this. Further, while we agree that the manufacturers are responsible for creating the identification tools, as we stated in our report, EAC has not defined how it will evaluate the manufacturer-provided tools. To avoid any misunderstanding as to these points, we have slightly modified our finding and related recommendation. Concerning our third recommendation for developing and implementing detailed procedures, review criteria, and documentation requirements for tracking and resolving problems with certified voting systems and applying lessons learned to improve the certification program, EAC stated that the report does not correctly represent its role in confirming that manufacturers actually correct anomalies in all fielded systems, adding that the commission does not have the authority or the human capital to do so.
Accordingly, EAC stated that it informs affected jurisdictions of system changes, but that it is at the discretion of the states and local jurisdictions, and beyond the scope of the commission, to determine whether fixes are made to individual systems in the field. We agree that the states and local jurisdictions have the responsibility and authority to determine whether they will implement EAC-approved fixes in the systems that they own. However, as we state in our report, published ISO guidance on tracking and resolving problems with certified products recognizes the importance of the certification body’s decision to require manufacturers to take corrective actions when defects are discovered, and to ensure that such actions are taken. Although this guidance acknowledges the difficulty in ensuring corrective actions are implemented on all affected units, it states that products should be corrected “to the maximum degree feasible.” Given EAC’s authority over registered manufacturers, it can play a larger role in ensuring that problems with fielded systems are in fact resolved, while maintaining the voluntary nature of its program, by monitoring the manufacturers’ efforts to fix systems for those jurisdictions that choose to implement such corrections, and holding manufacturers accountable for doing so. To avoid any confusion about this point, we have slightly modified our finding. As to our matter for congressional consideration to amend HAVA to give EAC certain additional responsibilities relative to problem resolution on voting systems not certified by EAC, the commission voiced several concerns. Among other things, it stated that our proposal would “affect a sea change in the way that EAC operates its testing and certification” program, changing it from voluntary to mandatory.
Further, it stated that it would, in effect, place EAC in a position to act in a regulatory capacity without having the specific authority to do so, as it would necessitate making both the voluntary voting system guidelines and the testing and certification program mandatory for all states. It also stated that it would require EAC to have specific authority to compel manufacturers of these noncertified voting systems to submit their systems for testing, and to compel states and local jurisdictions to report and resolve any identified system problems. We recognize that both the voting system guidelines and the testing and certification program are voluntary, and that EAC does not have the authority to compel manufacturers or states and local jurisdictions to submit to its testing and certification program, or to force them to correct any known problems or report future problems. We further acknowledge that the wording of the matter for congressional consideration in our draft report resulted in EAC interpreting accomplishment of it as requiring such unintended measures. Therefore, we have modified it to clarify our intent and to avoid any possible misunderstanding. In doing so, we have emphasized our intent for EAC to continue to serve its existing role as a facilitator and provider of resources and services to assist states and local jurisdictions in understanding shared problems, as well as the voluntary nature of both the system guidelines and the testing and certification program. Further, we seek to capitalize on EAC’s unique role as a national coordination entity to address a potentially longstanding, situational awareness void as it pertains to voting systems in use in our nation’s elections. As we state in our report, this void increases the chances of states and local jurisdictions duplicating efforts to fix common system problems, and of problems addressed by one state or local jurisdiction being unknown to others. 
We believe that a key to overcoming this will be strong central leadership, and that with the appropriate resources, EAC is in the best position to serve this role. We are sending a copy of this report to the Ranking Member of the House Committee on House Administration; the Chairman and Ranking Member of the Senate Committee on Rules and Administration; the Chairmen and Ranking Members of the Subcommittees on Financial Services and General Government, Senate and House Committees on Appropriations; and the Chairman and Ranking Member of the House Committee on Oversight and Government Reform. We are also sending copies to the Chair and Executive Director of the EAC, the Secretary of Commerce, the Acting Director of NIST, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to determine whether the Election Assistance Commission (EAC) has (1) defined an effective approach to testing and certifying voting systems, (2) followed its defined approach, and (3) developed an effective mechanism to track problems with certified systems and use the results to improve its certification program.
To address the first and third objectives, we researched leading practices relevant to certification testing/conformity assessment and tracking and resolving problems with certified products, including published guidance from the National Institute of Standards and Technology (NIST), International Organization for Standardization, and International Electrotechnical Commission, and legal requirements in the Help America Vote Act. We obtained and reviewed relevant EAC policies and procedures for testing, certifying, decertifying, and recertifying voting systems. Specifically, we reviewed the EAC Certification Program Manual and other EAC-provided documents. We interviewed EAC officials and their technical reviewers, NIST officials, representatives from the industry trade association for voting system manufacturers, representatives from the voting system test laboratories, and the National Association of State Election Directors’ point-of-contact for qualified systems. We then compared this body of evidence with the leading practices and related guidance we had researched, as well as applicable legal requirements, to determine whether EAC’s program had been effectively defined. In addition, for the third objective, we reviewed the contents and policy of EAC’s clearinghouse. To address our second objective, we obtained and reviewed actions and artifacts from EAC’s execution of its certification program to date. We assessed this information against the policies, procedures, and standards outlined in the EAC Certification Program Manual and the 2005 Voluntary Voting System Guidelines, and after discussing and confirming our findings with EAC officials, we determined whether EAC had followed its defined approach. In addition, to determine the impact of federal certification time frames, we included a question about EAC certification on a survey of officials from all 50 states, the District of Columbia, and 4 territories. 
We also contacted officials from states that indicated their intent to use EAC certification for the 2008 elections to better understand how they plan to address voting system certification in their state relative to EAC’s program. To develop our survey, we reviewed related previous and ongoing GAO work, and developed a questionnaire in collaboration with GAO’s survey and subject matter experts. We conducted pretests in person and by telephone with election officials from 5 states to refine and clarify our questions. Our Web-based survey was conducted from December 2007 through April 2008. We received responses from 47 states, the District of Columbia, and all 4 territories (a 95 percent response rate). Differences in the interpretation of our questions among election officials, the sources of information available to respondents, and the types of people who do not respond may have introduced unwanted variability in the responses. We examined the survey results and performed analyses to identify inconsistencies and other indications of error, which were reviewed by an independent analyst. We also contacted officials from those states whose survey response indicated their intent to use EAC certification for the 2008 elections to identify their plans and approaches for state certification in the event that federal certification could not be completed to meet their election preparation schedules. We conducted this performance audit at EAC offices in Washington, D.C., from September 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the person named above, Paula Moore, Assistant Director; Mathew Bader; Neil Doherty; Nancy Glover; Dan Gordon; David Hinchman; Valerie Hopkins; Rebecca LaPaze; Jeanne Sung; and Shawn Ward made key contributions to this report.

The 2002 Help America Vote Act (HAVA) created the Election Assistance Commission (EAC) and, among other things, assigned the commission responsibility for testing and certifying voting systems. In view of concerns about voting systems and the important role EAC plays in certifying them, GAO was asked to determine whether EAC has (1) defined an effective approach to testing and certifying voting systems, (2) followed its defined approach, and (3) developed an effective mechanism to track problems with certified systems and use the results to improve its approach. To accomplish this, GAO compared EAC guidelines and procedures with applicable statutes, guidance, and best practices, and examined the extent to which they have been implemented. EAC has defined an approach to testing and certifying voting systems that follows a range of relevant practices and statutory requirements associated with a product certification program, including those published by U.S. and international standards organizations, and those reflected in HAVA. EAC, however, has yet to define its approach in sufficient detail to ensure that certification activities are performed thoroughly and consistently. This lack of definition also has caused voting system manufacturers and test laboratories to interpret program requirements differently, and the resultant need to reconcile these differences has contributed to delays in certifying systems that several states were intending to use in the 2008 elections. According to EAC officials, these definitional gaps can be attributed to the program's youth and the commission's limited resources being devoted to other priorities.
Nevertheless, they said that they intend to address these gaps, but added that they do not yet have written plans for doing so. EAC has largely followed its defined approach for each of the dozen systems it is in the process of certifying, with one major exception. Specifically, it has not established an effective and efficient repository for certified versions of voting system software, or related procedures and tools, for states and local jurisdictions to use in verifying that their acquired voting systems are identical to what EAC has certified. Further, EAC officials told GAO that they do not have a documented plan or requirements for a permanent solution. As an interim solution, they stated that they will maintain copies of certified versions in file cabinets and mail copies of these versions to states and local jurisdictions upon request. In GAO's view, this process puts states and local jurisdictions at increased risk of using a version of a system during an election that differs from the certified version. Under its voting system testing and certification program, EAC has broadly described an approach for tracking problems with certified voting systems and using this information to improve its certification program. While this approach is consistent with some aspects of relevant guidance, key elements are either missing or inadequately defined. According to EAC officials, while they intend to address some of these gaps, they do not have documented plans for doing so. In addition, even if EAC defines and implements an effective approach, it would not affect the vast majority of voting systems that are to be used in the 2008 elections. This is because the commission's approach only applies to those voting systems that it has certified, and it is unlikely that any voting systems will be certified in time to be used in the upcoming elections.
Moreover, because most states do not currently require EAC certification for their voting systems, it is uncertain if this situation will change relative to future elections. As a result, states and other election jurisdictions are on their own to discover, disclose, and address any shared problems with these noncertified systems.
In April 2003, Congress enacted P.L. 108-18, which reduced the Service’s annual payment into the CSRS pension fund, in part, to reflect a reduction in the Service’s estimated unfunded obligation for prior years’ service from about $30 billion to about $5 billion. The difference between the Service’s CSRS payment required prior to enactment of P.L. 108-18 and the payment after enactment is labeled the “savings” in the legislation. However, P.L. 108-18 requires the Service to use the savings in fiscal years 2003 and 2004 to pay down outstanding debt and in fiscal year 2005 to extend the current rate cycle. Therefore, according to the Service, all of the overfunding generated by current rates will be completely consumed by the end of fiscal year 2005. In fiscal year 2006, the Service is required to begin making payments into an escrow account that it cannot use until otherwise provided for by law. The amount of the payments into the escrow account would have to be included in the Service’s rate base. The Service’s report recommended that the escrow requirement be repealed, and provided two proposals for use of the “savings.” A brief description of each proposal is given below. Transferring the military costs from the Service to the Treasury, as detailed in Proposal I, increases the projected funding of the postal CSRS pension fund from $78 billion to $105 billion. This would result in an overall cost reassignment of $27 billion and a $10 billion overfunding of the postal CSRS pension fund as of the end of fiscal year 2002. The Service proposes that the $10 billion in overfunding would remain in the pension fund, in a separate account designated as the “Postal Service Retiree Health Benefit Fund (Retiree Health Fund).” The Service made a payment of about $1.3 billion for its pension obligation into the CSRS pension fund in fiscal year 2003.
Under current legislation, it would continue to make payments of $2.2 billion in fiscal year 2004 and $2.1 billion in fiscal year 2005. If responsibility for all military service costs is transferred back to the Treasury, the resulting overfunded status would negate the need for further Postal Service annual CSRS payments. The Service proposes that the CSRS payments it made in fiscal year 2003, and will make in fiscal years 2004 and 2005, remain in the CSRDF in the newly designated Retiree Health Fund. Beginning in fiscal year 2006, the Service proposes to make annual payments into the Retiree Health Fund. This new fund would be used to pay retiree health insurance premiums in the future. This proposal assumes that the escrow requirement would be eliminated. However, the Service estimates that the expense for prefunding retiree health obligations would add $1.2 billion to its expenses in fiscal year 2006. The Service estimates that this expense would require a rate increase that would be 2 percent higher than would be necessary to cover inflationary expense growth. Otherwise, the Service believes it can pay down debt and finance its capital investment needs through its normal cycle of inflation-based rate increases. Proposal II, other than funding a small amount of the retiree health benefits obligation, results primarily in rate mitigation. This proposal is based on the assumption that the escrow requirement would be repealed and that the Service would remain responsible for military service costs. Under this scenario, the Service proposes to prefund the retiree health benefits cost for employees hired after fiscal year 2002. It would not fund the retiree health benefits cost already incurred for current and former employees, which comprises most of the obligation.
The Service estimates that the expense created to prefund retiree health benefit costs for new employees would require a rate increase in fiscal year 2006 that would be 0.3 percent higher than necessary to cover normal inflationary expense growth. Although the Service’s proposal stated that some funds would be used to pay down debt and fund capital investments, postal officials have told us that the proposed debt repayment and capital investment costs are equal to what they had planned to spend regardless of enactment of P.L. 108-18. Consequently, the Service believes that, absent the escrow requirement, it would be able to continue to pay the retiree health premium costs for current and former employees on a pay-as-you-go basis, pay down debt, and finance its capital investment needs through normal rate increases that would correspond with general inflation trends. We believe that both proposals are generally consistent with the “Sense of Congress” expressed in P.L. 108-18, that some portion of the savings should be used to address the Service’s unfunded obligations. However, Proposal I goes much further in this area because it proposes prefunding a substantial portion of retiree health benefits for all current and former employees, while Proposal II would prefund these costs only for employees hired after fiscal year 2002. Both proposals also address, to varying degrees, the Matters to Consider outlined in P.L. 108-18. Proposal I addresses, almost exclusively, matter (ii)—prefunding of postretirement health benefits for current and former employees. Proposal II addresses matter (ii) to a limited extent, and matter (iv)—delaying or moderating increases in postal rates. Under both proposals, the Service believes that it can address matter (i)—debt repayment—and matter (iii)—productivity and cost saving capital investments—through inflation-based rate increases. The legislation also directed the Postal Service to consider the work of the Commission.
The Commission recommendations, like our previous work, stressed the significance of funding the retiree health benefits cost to the extent that the Service’s finances permit. The Commission pointed out that the pension obligation is funded as benefits are earned and recovered through rates, but the retiree health benefits obligation is funded as the benefits are paid and not as they are earned. The Commission strongly urged the Service to consider funding a reserve account to begin paying down the retiree health benefits obligation so future ratepayers are not forced to pay for costs associated with postal services delivered today. The Commission also stated that raising rates should be the last recourse, not the first, to cover rising costs. In our November 2003 testimony before the Senate Committee on Governmental Affairs, we also raised concerns about rate increases, stating that raising rates may provide an immediate boost to the Service’s revenues but would likely accelerate the transition of mailed communications to electronic alternatives. In addition, the Commission expressed concern regarding the Service’s ability to repay its debt and stressed the importance of the Service improving its operational efficiency. Another important recommendation of the Commission was that the Service should review its current policy relating to the accounting treatment of retiree health care benefits, and work with its independent auditor to determine the most appropriate treatment of such costs in accordance with applicable accounting standards and in consideration of the Postal Service’s need for complete transparency in the reporting of future liabilities. We have also discussed these issues in our previous work. Proposal I addresses the issue of funding retiree health benefits to a greater extent than Proposal II, while Proposal II addresses the matter of mitigating rates to a greater extent than Proposal I. 
Both proposals address the issue of debt repayment and capital investment through inflation-based rate increases. The Service recommended in its report that Congress eliminate the escrow requirement, because of its negative impact on postage rates and the mailing industry, the general public, and the economy as a whole. The Service estimates that it would need an additional rate increase of 5.4 percent, including 2 cents on the 37-cent First-Class stamp, in order to generate the $3.2 billion required to be placed in an escrow account in fiscal year 2006. This is because P.L. 108-18 requires all “savings” attributable to fiscal years after 2005 to be considered an “operating expense” and placed into an account that the Service cannot use until Congress specifies how the funds may be used. All of the “savings” accruing under current rates would likely be expended or absorbed by inflationary cost increases by the end of fiscal year 2005. Thus, in order to pay this “operating expense” the Service would need to include the $3.2 billion in its rate base in fiscal year 2006 and collect the money from its ratepayers or lower expenses by a corresponding amount. The Service has taken steps to reduce its total expenses over the past 2 fiscal years, and we believe it is important for the Service to continue its cost-cutting efforts. However, setting aside unused funds in an escrow account that must be considered an “operating expense” would serve to lessen the financial benefits of the Service’s cost-cutting efforts. For fiscal years after 2006, an increasing amount—estimated to eventually reach a peak of $8.7 billion—would have to be placed annually in the escrow account. This would be in addition to its operating expenses, such as compensation and retiree health premiums, as well as any amounts needed to pay down debt or fund capital investments. 
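The escrow figures quoted above can be sanity-checked with simple arithmetic. The sketch below uses only the numbers from this discussion (the 2-cent increase on the 37-cent stamp, the 5.4 percent estimate, and the $3.2 billion fiscal year 2006 escrow payment); the revenue-per-point figure it derives is an illustration, not a Service estimate:

```python
# Back-of-envelope check of the escrow rate arithmetic quoted above:
# a 2-cent increase on the 37-cent First-Class stamp, as part of a
# 5.4 percent increase meant to generate the $3.2 billion escrow
# payment in fiscal year 2006.

current_rate_cents = 37
increase_cents = 2

# Percentage increase implied by 2 cents on a 37-cent stamp.
pct_increase = increase_cents / current_rate_cents * 100
print(f"Implied First-Class increase: {pct_increase:.1f}%")  # ~5.4%

# Rough revenue per percentage point, if the full $3.2 billion were
# raised by the overall 5.4 percent increase (illustrative only).
escrow_billions = 3.2
per_point = escrow_billions / 5.4
print(f"~${per_point:.2f} billion per percentage point of rate increase")
```

The first calculation confirms that the 2-cent stamp increase matches the overall 5.4 percent figure the Service cites.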
The Service estimates that it would require additional biannual rate increases between 1 percent and 1.5 percent to cover the required escrow amount. Frequent rate increases of this magnitude would likely hasten the decline in First-Class Mail volume and increase the risk of volume declines in other mail classes. In our view, the escrow requirement could be viewed as one means to direct funding for specific purposes that Congress may believe to be especially important. We also believe it is critical to the Service’s future viability that it continue to make progress on addressing its financial challenges, such as prefunding retiree health obligations, repaying debt, and financing capital needed to implement its transformation initiatives. Several options include (1) tying the repeal of the escrow requirement to congressional review of the Service’s progress on transformation, which could include the Service providing Congress with an acceptable plan for realigning its infrastructure and workforce; (2) repealing the escrow requirement but specifying the use of funds; or (3) repealing the escrow requirement and allowing the Service to fund activities as specified in its proposals. Another option would be to retain the escrow requirement and direct funding for specific purposes, which would likely require Congress to periodically revisit the use of funds. We believe this option could be problematic if an impasse arose, which could make the funds unavailable to the Service to spend on specific purposes. If Congress does not want to specify by law the purposes and amounts that should be funded, but rather permit the Service to decide which activities to fund, we believe that Congress would need to have sufficient information to determine that the Service is making or accelerating progress in achieving its transformation goals.
In this regard, we have already recommended that the Service provide periodic reports on the status of its transformation initiatives and other Commission recommendations that fall within the scope of its existing authority. The Chairman of the Senate Committee on Governmental Affairs, along with Senator Carper, requested in a letter to the Postmaster General dated November 19, 2003, that the Service provide the Committee with a comprehensive plan that lays out how the Service intends to optimize its infrastructure and workforce. Further, the letter requested biannual updates on the status of implementing transformation initiatives and recommendations of the Presidential Commission. In November 2003, the Service provided the congressional oversight committees with a progress report on its transformation initiatives. We also assessed the Service’s two proposals in the context of three key issues emerging from our previous work and the Commission’s recommendations. The first issue is whether the proposals are fair and balanced between current and future ratepayers regarding who pays for employee benefits earned today. Another aspect of this issue is fairness between ratepayers and taxpayers regarding responsibility for military service costs and the effect of the proposals on the federal budget. The second issue is whether the proposals are affordable in light of the Service’s current financial situation. Given declining First-Class Mail volume, rising compensation costs, and a significant retiree health benefits obligation, if the Service’s proposals greatly exacerbate these financial challenges, affordable universal service could be jeopardized. The third issue is how these proposals assist the Service in achieving or accelerating its transformation initiatives. The importance of this issue lies in the need for the Service to become a more efficient and effective organization in order to remain financially viable. 
One factor that should be kept in mind when evaluating these proposals is the issue of maintaining an equitable balance between the postal costs paid for by current and future ratepayers and the impact of these proposals on taxpayers. As we noted in our November 2003 testimony, under the Service’s current accounting and rate-setting methods, current ratepayers have not fully covered the total costs of the postal services they have received. Further, future ratepayers are likely to face more significant and frequent rate increases to cover the cost of benefits being earned by current employees. The equity of this arrangement should be considered in evaluating these proposals. Likewise, the effects of these proposals on the federal budget—which specifies the spending and financing of the federal government—and whether these effects are equitable to both ratepayers and taxpayers, should also be considered. Proposal I strikes a better balance between current and future ratepayers by prefunding the retiree health benefits obligation for both retirees and current employees and providing a mechanism for better aligning current expenses with current revenues. Therefore, benefits being earned by today’s employees would be built into the current rate base. While Proposal II does partially address the issue of striking a balance between current and future ratepayers in regard to the retiree health benefits obligation, it does not go as far as Proposal I in this area. By only prefunding the retiree health benefits cost for new employees, it leaves a sizable portion of this obligation unfunded. This means that future ratepayers will still be required to pay for most of the retiree health benefits earned by today’s workforce. In addition, mailers argue that prior to enactment of P.L. 108-18, they were paying too much for the CSRS obligation; therefore, mitigating rate increases now is merely recompense. 
However, while mailers may have been paying more than necessary to fund the pension obligation, they were paying less than necessary to fund the retiree health benefits obligation. Another important consideration is the effect these proposals would have on the federal budget and, therefore, the taxpayer. An issue currently before Congress is who should be responsible for paying the military service pension costs of postal employees covered by CSRS. Proposal II is predicated on the assumption that current ratepayers pay for pension costs related to military service, much of which was vested prior to creation of the Postal Service and had already been paid by Treasury. If Congress decides that the Service should retain responsibility for these costs, the postal ratepayers would bear the costs. If Congress determines that the Treasury should be responsible for these costs, then the costs would be borne by taxpayers. The Service has stated that the impact on the federal budget of transferring these costs under Proposal I would likely be minimal. Because CBO is required to “score,” or estimate, the budgetary effects of legislation reported out of committees, it has not scored the Service’s proposals. However, based on its scoring of the Postal Civil Service Retirement System Funding Reform Act (the bill that resulted in the pension legislation), we believe that Proposal I might be scored as having little effect on the deficit in the short term. In the long term, it could have an effect when the Service’s cash flow changes in later years as the prefunded benefits are paid. However, insufficient detail has been provided on both proposals to determine their overall budget effects. The CBO scoring report on that bill provides some insight into how the proposals might be scored. CBO scoring considers both on-budget and off-budget effects of legislative proposals.
Because the Service is an off-budget entity, any payments that it makes to the retirement trust fund (an on-budget entity) are considered offsetting receipts; reducing those payments would reduce on-budget receipts. Under P.L. 108-18, after fiscal year 2005, savings resulting from the act are to be considered operating expenses of the Service. Therefore, these expenses would be included in rate setting, even though the Service’s actual expenses would decline by the amount placed in escrow. As a result, net off-budget outlays of the Postal Service would decline by the same amount as the savings from lower pension payments, beginning in fiscal year 2006. This is reflected in the CBO scoring report. These lower off-budget outlays would offset the on-budget impact of lower payments to CSRS. Thus, any proposal that uses the escrowed savings could affect the overall federal budget deficit. Scoring of the Service’s proposals hinges on what the Service would do with the escrowed savings. Proposal I, in shifting the cost of military service back to the Treasury, would result in a reduction in on-budget receipts. But Proposal I, in using most of the savings to prefund retiree health benefits, would also keep those amounts in a separate CSRS account. The combined impact might be scored as having little effect on the deficit in the short term. However, in the long term, it could have an effect because at some point, the prefunded benefits would be paid out, resulting in changes in cash flows in later years. In addition, Proposal I would use a small amount of the savings for debt reduction, which would cause on-budget interest receipts to be lower.
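The offsetting on-budget/off-budget effect described above can be illustrated with a toy calculation. This is a simplified sketch, not CBO's actual scoring model, and all dollar amounts are invented for illustration:

```python
# Toy illustration of why a lower postal CSRS payment can net to zero
# in the unified budget: the Service's payment is simultaneously an
# on-budget receipt (to the retirement trust fund) and an off-budget
# outlay (by the Service), so reducing it shrinks both sides equally.

def unified_deficit(on_outlays, on_receipts, off_outlays, off_receipts):
    """Unified deficit = on-budget deficit + off-budget deficit."""
    return (on_outlays - on_receipts) + (off_outlays - off_receipts)

# Baseline (illustrative amounts, in billions).
base = unified_deficit(100.0, 80.0, 50.0, 45.0)

# Reduce the Service's CSRS payment by 3 billion: on-budget receipts
# fall by 3, and the Service's off-budget outlays also fall by 3.
after = unified_deficit(100.0, 80.0 - 3.0, 50.0 - 3.0, 45.0)

print(base, after)  # identical: the two effects offset exactly
```

The offset holds only so long as the savings sit unspent; as the report notes, spending the escrowed savings would change the Service's off-budget outlays and thus could affect the overall deficit.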
Under Proposal II, which assumes that the Service would retain responsibility for the military service costs, the Service said it would fund its retiree health benefits obligation only for its employees hired after fiscal year 2002 and then fund, in priority sequence, debt repayment and capital investments to improve productivity and achieve cost savings. This proposal also raises issues related to the federal budget. The continuation of payments for military service costs would mean that there would be no reduction in on-budget receipts. In the short term, prefunding some retiree health benefits could have a small positive effect on the budget, because the Service would be collecting revenue that would not be immediately paid out. In general, any reduction in the Service’s debt would reduce on-budget interest receipts. Any additional capital investments would increase off-budget outlays. However, if the Service can provide credible support that the investments would result in cost savings, the scoring may show increased outlays initially and savings subsequently. The Service believes that its proposals are affordable, meaning they would not cause rate increases that irreparably harm volume, or hinder the Service’s ability to sustain current operations and implement transformation initiatives. We are concerned that the Service may not be able to achieve all of these goals if its financial situation worsens. Therefore, we believe it is imperative for the Service to continue addressing its key financial challenges—long-term obligations and debt, difficulty raising revenue, and the need for aggressive cost-cutting measures—to the extent that it is able. The Service faces a difficult challenge in trying to balance all of these issues. The Service’s proposals attempt to balance both short-term rate mitigation and some level of prefunding of retiree health obligations to address its long-term obligations, while also providing for debt repayment and capital investment.
However, the Service did not present an analysis of how its proposals would affect the overall financial condition of the Postal Service. Consequently, it is difficult to assess which, if either, of these proposals would improve the long-term financial situation of the Postal Service or ensure its future financial viability. Therefore, we believe that the Service’s financial situation will need to be closely monitored to ensure that its proposals are indeed affordable. The affordability of these proposals to ratepayers is also a consideration, as is the effect of rate increases on volume because, as we have previously reported, the Service faces uncertainty regarding its future revenue stream. Since fiscal year 2000, the Service’s total mail volume has declined by almost 6 billion pieces and is estimated to continue declining. In a report for the Commission, the Institute for the Future developed a mail volume estimate that shows a gradual 10 percent decline from 202.8 billion pieces in fiscal year 2002 to 181.7 billion pieces in 2017. Also, First-Class Mail volume, which provides the bulk of the Service’s revenue, has been declining and shows no sign of rebounding. Declines in First-Class Mail are particularly troublesome to the Service, because First-Class Mail pays almost 70 percent of the Service’s institutional costs. These costs, which are approximately 40 percent of all postal expenses, include some administrative, facility, postmaster, and supervisor costs, and a large portion of the expanding delivery network costs. Therefore, if First-Class Mail volume continues to decline, it would become more difficult for the Service to fund its institutional costs without raising postal rates. Historically, when the Service has raised postal rates, mail volume growth declined in the fiscal year immediately following the rate increase but rebounded in the next fiscal year. However, over the last 3 years this has not been the case. 
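The volume projection and cost-share figures cited above combine as follows. The sketch uses only the numbers quoted in this discussion:

```python
# Illustrative arithmetic on the volume and cost figures quoted above.

# Institute for the Future projection: gradual decline in total mail volume.
vol_fy2002 = 202.8  # billion pieces, fiscal year 2002
vol_fy2017 = 181.7  # billion pieces, projected for 2017
decline_pct = (vol_fy2002 - vol_fy2017) / vol_fy2002 * 100
print(f"Projected volume decline: {decline_pct:.1f}%")  # roughly 10 percent

# First-Class Mail pays almost 70 percent of institutional costs, and
# institutional costs are approximately 40 percent of all postal expenses,
# so First-Class Mail's institutional contribution alone covers roughly:
share = 0.70 * 0.40
print(f"~{share:.0%} of total postal expenses")
```

The second calculation makes concrete why declining First-Class volume is so troublesome: its institutional-cost contribution alone covers on the order of a quarter of all postal expenses.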
The Service raised rates twice in fiscal year 2001 and once in fiscal year 2002. Total estimated mail volume at the end of fiscal year 2003 was almost 6 billion pieces lower than total mail volume was in fiscal year 2000. In this climate, rate increases may lead to further volume declines, which in turn would necessitate additional rate increases and begin a cycle often referred to as the “death spiral.” The Service’s first proposal would require a larger rate increase than the second proposal. Under Proposal I, the Service estimates that prefunding retiree health benefits would add $1.2 billion to its expenses in fiscal year 2006 compared with its expenses in that year under the current law, assuming the escrow requirement were eliminated. According to the Service, this additional expense would require a rate increase in fiscal year 2006 that is 2 percent higher than the increase that would be necessary due to inflationary expense growth alone. In fiscal years after 2006, the Service would continue to make these additional payments and future rate increases would likely be marginally higher than would be necessary to reflect inflationary pressures alone. Figure 1 shows the annual additional amount the Service proposes to spend on prefunding under Proposal I. If the Service’s mail volume continues to decline and the Service is unable to cut costs accordingly, or if the Service is faced with higher retiree health premium costs than estimated, the Service may not be able to afford to continue prefunding the retiree health benefits obligation. Therefore, the Service’s financial condition must be carefully monitored under this proposal. Proposal II would require a lower rate increase than Proposal I in fiscal year 2006, and thus would likely have less of an impact on postal volumes in the short term. However, in the long term it may require larger rate increases that could have a negative impact on future volumes.
As seen in figure 2, the estimated retiree health premium expense will eventually outpace the estimated difference between the CSRS payment prior to enactment of P.L. 108-18 and the payment required under the legislation. Consequently, in order to pay the retiree health premiums in the future, the Service would need to raise additional revenue through rate increases or lower its operating expenses. The Postal Service is required to pay the retiree health premiums regardless of whether it prefunds some or all of these costs, and the annual costs are expected to increase over the next 20 years. If prefunding retiree health benefits for new employees proves to be more costly than estimated, or if the premiums for current retirees continue to grow rapidly, the Service could find itself facing a significant obligation at a time when revenues are shrinking. It seems prudent to set aside funds now, while they are available, to address escalating future costs rather than waiting until costs are higher and adequate revenue may not be forthcoming. Because Proposal II would result in a smaller rate increase in fiscal year 2006 than Proposal I, the question arises whether the Service could increase its proposed level of prefunding retiree health benefits under Proposal II. To set aside an additional $1 billion in funding for this obligation, the Service would need an additional rate increase of 2 percent, the same increase the Service proposes under Proposal I. The Service has stated that the decision to prefund only retiree health benefits for new employees arose from the desire to have a logical basis for its funding proposal. Because the legislation was enacted in fiscal year 2003, the Service decided to begin prefunding with a corresponding time period. While this may provide a baseline, we agree with the Commission that the Service should address its retiree health benefits obligation to the extent that its financial situation allows.
Again, we believe the Service’s financial situation will have to be carefully monitored to ensure that this option remains affordable. Another factor associated with the affordability of the proposals concerns how they address the Service’s outstanding debt level, which in fiscal year 2002 was close to statutory limits and was threatening the Service’s ability to fund capital improvements. The Service made significant progress in reducing its outstanding debt in fiscal year 2003, from $11.1 billion to an estimated $7.3 billion, and plans to continue paying down its debt in fiscal years 2004 and 2005. The Service has estimated that debt outstanding as of the end of fiscal year 2005 will be $3 billion. Under both proposals, the Service proposes to repay the same amount of debt in fiscal years 2006-2010. As seen in table 1, the Service estimates that its outstanding debt will be paid off by 2010. These estimates assume that the Service would raise rates when necessary to break even for each of the fiscal years 2006 through 2010. If this break-even assumption is not correct, or if the Service faces unforeseen financial problems, the Service may not be able to pay down the amount of debt it proposes, and may, in fact, have to borrow more. The affordability of these proposals is also tied to a separate matter currently before Congress—who should bear responsibility for military service pension costs and how these costs should be determined. If Congress determines that the Treasury should bear responsibility for military service costs, then the Service believes that it can afford to prefund retiree health care costs for all of its current and former employees. If Congress determines that the Service should retain responsibility for the military service costs, then the Service believes that it can only afford to prefund the retiree health benefits cost for employees hired after fiscal year 2002, which would leave the obligation for current and former employees unfunded.
As both the Commission and we have noted, the Service has had limited success in its pursuit of new revenue streams. Therefore, to counter the loss in revenue due to declining mail volume without resorting to frequent rate increases, the Service must aggressively cut costs. To its credit, the Service has decreased work hours, reduced its workforce, and closed some facilities. However, we do not believe that these incremental savings will be enough to ensure a financially viable Postal Service over the longer term, especially if mail volumes continue to decline. For this reason, we believe the Service must continue to make progress in implementing its transformation goals. In assessing these proposals, we also considered how the Service would be able to fund cost saving and productivity initiatives needed to successfully transform itself into a viable organization for the 21st century. In April 2002, in response to a GAO recommendation and a request by the Senate Committee on Governmental Affairs, the Postal Service prepared a Transformation Plan that outlined strategies for transforming the organization into an efficient and performance-based entity. Among those initiatives were plans to standardize operations, increase customer access, and realign the processing and distribution network. The Commission’s report also made suggestions for improving postal efficiency. We agree with the Commission that the Service must continue to pursue aggressive cost-cutting strategies and productivity gains in an effort to become more efficient. We also believe that the mandate for the Service to report on the potential use of savings from P.L. 108-18 was an opportunity for the Service to present its plans in this area, and the Service’s proposals must be evaluated with the need for cost-cutting and productivity gains in mind.
Under both proposals, the Service believes it can finance capital investments related to upgrading existing assets and the investment needed to implement transformation initiatives through inflation-based rate increases. We are concerned that the Service’s financing plan may not be adequate to provide for its capital investment needs, because historically, the Service has found it problematic to finance its capital needs with operating revenues. Thus, it has often resorted to borrowing to finance its capital needs. In contrast, under both proposals, the Service would finance its capital needs while continuing to pay down debt through inflation-based rate increases. Another possible source of capital funds could be the proceeds from the sale of excess property. However, the Service did not discuss this issue in its report. We are also concerned with the Service’s lack of specifics on capital investments under both proposals. While the Service stated that its capital investments for productivity gains and cost saving initiatives were related to its Transformation Plan, it has provided little detail on any of these initiatives in its pension savings report, its Five-Year Strategic Plan FY 2004-2008, or its Five-Year Strategic Capital Investment Plan 2004-2008. The Service did provide a breakdown of some capital investments related to its Transformation Plan initiatives, but did not provide sufficient back-up data or description to enable us to determine to what transformation initiatives these investments were related or to what extent they would meet transformation goals. In our November 2003 testimony, we also noted our concern that since the Service issued its Transformation Plan in April 2002, it has not provided adequate transparency on its plans to rationalize its infrastructure and workforce; the status of initiatives included in its Transformation Plan; and how it plans to integrate the strategies, timing, and funding necessary to implement its plans.
While the Postal Service is moving forward with its Transformation Plan initiatives, and has made meaningful progress in a number of areas, it is not clear how it will be able to finance these initiatives within inflation-based rate increases, especially if mail volume continues to decline. Therefore, we recommended in our November testimony that the Postmaster General develop a comprehensive and integrated plan to optimize the Service’s infrastructure and workforce, in collaboration with its key stakeholders, and make it available to Congress and the general public. We also recommended that the Postmaster General provide periodic reports to Congress and the public on the status of implementing its transformation initiatives and other Commission recommendations that fall within the scope of its existing authority. Postal officials have agreed to develop a comprehensive and integrated plan to optimize the Service’s infrastructure, to provide periodic reports on the implementation of its transformation initiatives, and to make them available to Congress and the public. As previously mentioned, the Service provided its congressional oversight committees with a progress report on its transformation initiatives in November 2003. The infrastructure and workforce plan and the periodic reports on the status of transformation initiatives will be critical to oversight in this area. During our review, we identified implementation issues that Congress may want to consider if it determines that the Service should prefund some or all of its retiree health benefits obligation. Under Proposal I, implementation issues involve the method that would be used to fund the retiree health benefits, and the demographic and economic assumptions that would be used to determine the amount of the total obligation as well as the annual funding amount. Under Proposal II, the question arises as to how the annual cost of retiree health benefits for employees hired after fiscal year 2002 would be calculated.
In addition, neither proposal ensures that the Service would continue to prefund the retiree health benefits obligation. Additional questions arise about the Service’s accounting treatment for retiree health benefits under both proposals. If Congress decides to accept one of the proposals, technical issues related to implementing the proposal would need to be addressed. Under Proposal I, the Service would fund the retiree health benefits obligation by making payments into a fund currently maintained by OPM. Postal officials raised questions about which agency—the Service or OPM—should determine the amount of the obligation, and what economic and demographic assumptions should be used. In addition, we have questions about the Service’s proposed funding mechanism, because it does not amortize the obligation over a specific time period. In Proposal II, the Service would maintain control of the retiree health benefits fund. Under both proposals, the Service would continue to make payments into the respective funds after 2010; however, under P.L. 108-18, the Service would be under no obligation to prefund the retiree health benefits obligation. One issue pertains to the assumptions used by the Service to estimate its retiree health benefits obligation. If these assumptions change, then the future funded status of the obligation would also change. This estimated obligation is based on several assumptions, such as premium costs, retirement rates, termination rates, mortality assumptions, disability assumptions, plan enrollment, and coverage election that could change annually and may differ between the Postal Service and other agencies. These assumptions materially affect the future funded status of the obligation. An illustration of the practical effect of using different assumptions can be seen in the estimate of the Service’s total retiree health benefits obligation. 
A postal estimate of its retiree health benefits obligation as of the end of fiscal year 2003 differs from an estimate for the same period prepared by OPM by about 4 percent, or $2.2 billion. According to the Service, the difference in these two estimates is primarily due to differences in the measurement date, the discount rate, the health care trend rate, the cost basis, and the attribution method used. The Service’s estimate was actuarially certified as reasonable. However, a different set of results could also be considered reasonable actuarial results, because the actuarial standards describe a “best-estimate range” for each assumption rather than a single best-estimate value. In addition, the Service said it would not amortize the retiree health benefits obligation within a specified time frame. Instead, the proposed funding that the Service calculates to address its retiree health benefits obligation is the amount that would be required to fund the annual retiree premium cost plus the estimated future cost of retiree health premiums for current employees (service costs), and interest expense on both the outstanding obligation and the new service cost. According to the Postal Service, while it is the Service’s intention to eventually fully fund its retiree health benefits obligation under Proposal I, this proposal does not fully fund all prior years’ service costs—the $54 billion obligation—within a specified time period. In fact, because the proposed funding under Proposal I includes a beginning asset balance of $10 billion, but does not amortize any of the retiree health benefits obligation, approximately $45 billion of the obligation would not be funded. The Service’s proposed funding for the retiree health benefits obligation is modeled after the funding method used by some utilities to prefund their retiree health benefits. 
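As a rough illustration of the funding method just described, the sketch below computes an annual funding amount as the current retiree premium cost, plus the new service cost, plus interest on both the outstanding obligation and that new service cost, and shows that no part of the payment retires the gap between the obligation and the asset balance. The $54 billion obligation and $10 billion beginning asset balance come from the report; the premium cost, service cost, and 5 percent interest rate are hypothetical assumptions for illustration, not the Service’s actuarial figures.

```python
# Sketch of the funding approach described under Proposal I (assumed
# inputs, not the Service's actual actuarial model): the annual payment
# covers current retiree premiums, the year's new service cost, and
# interest, but never amortizes the existing obligation itself.

def annual_funding(premium_cost, service_cost, obligation, interest_rate):
    """Annual payment = premiums + new service cost + interest on both
    the outstanding obligation and the new service cost."""
    interest = interest_rate * (obligation + service_cost)
    return premium_cost + service_cost + interest

# From the report: a $54 billion obligation and a $10 billion beginning
# asset balance. The premium cost, service cost, and 5% rate below are
# illustrative assumptions.
obligation = 54e9
assets = 10e9
payment = annual_funding(premium_cost=1.0e9, service_cost=1.5e9,
                         obligation=obligation, interest_rate=0.05)

# The portion no payment ever retires: $44 billion with these rounded
# inputs (the report cites roughly $45 billion).
unfunded = obligation - assets
print(f"Annual funding: ${payment / 1e9:.2f} billion")
print(f"Unfunded obligation: ${unfunded / 1e9:.0f} billion")
```

The key point the sketch makes concrete is structural: because the payment only services interest and new costs, the principal of the prior years’ obligation is carried forward indefinitely unless an amortization schedule is added.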
However, other options might allow the Service to amortize its existing obligation and prefund the retiree health benefits obligation for future retirees. While postal officials indicated that under these proposals the Service intends to make annual payments for prefunding, the Service would be under no obligation to do so. Consequently, if Congress wanted to ensure that the Service prefunds its retiree health benefits, legislative action would be required. In considering Proposal I, we identified the following unresolved questions:

- Should prefunding Postal Service retiree health benefits be mandated by Congress, or left to the Service’s discretion?
- Should the Postal Service, OPM, or another entity determine the amount of the Service’s total retiree health benefits obligation?
- Who should determine the proper funding mechanism for the retiree health benefits obligation?
- Should the Postal Service be required to amortize its prior years’ service obligation within a set time frame? If so, what is the appropriate time frame?
- What economic and demographic assumptions should be used to determine the current obligation, service costs, and asset balance, and future estimates of these amounts? Furthermore, how often should these assumptions be updated, and what process should be used to update future estimates?
- What recourse, if any, should parties have if they disagree with this funding mechanism?
- What oversight, if any, is needed in this area?

According to postal officials, unlike Proposal I, in Proposal II the Service would maintain control of the funds used to prefund the retiree health benefits cost for new employees. These officials have also stated that the Service would be responsible for determining the proper economic and demographic assumptions to be used in calculating the annual fund amount. However, questions arise about how the Service estimated these costs for fiscal years 2006-2010.
For example, the Service provided us with estimates of these costs that ranged from $214 million in fiscal year 2006 to $687 million in fiscal year 2010. The Service then adjusted these numbers downward to $100 million for fiscal year 2006 and to $300 million for fiscal year 2010. According to postal officials, this downward adjustment was made to reflect attrition. Although we attempted to verify the method used to lower these estimated costs, we were unable to obtain the necessary data in the time available to complete our work. As with Proposal I, while the Service has said that it intends to fund this obligation for employees hired after fiscal year 2002, it is not currently required to prefund. Questions similar to those raised in Proposal I also apply to this proposal, including the following:

- Should prefunding retiree health benefits for new employees be voluntary or legislatively mandated?
- How should the annual funding amount be determined?
- What oversight, if any, is needed in this area?

Regardless of which proposal is adopted, questions remain about how the retiree health benefits obligation should be reflected in the Service’s financial statements. The Service currently uses a pay-as-you-go basis of accounting for its retiree health benefits obligation. We previously reported that we believe the Service should consider whether the accrual basis of accounting is both the acceptable and appropriate method for this obligation, especially considering the importance of giving full consideration to economic realities as the Service attempts to transform itself in order to respond to major operational and financial challenges. Postal Service management and the Board of Governors, the Postal Rate Commission, Congress, and other stakeholders need to have a clear understanding of the Service’s true financial condition as difficult transformation decisions are being considered.
It is our understanding that the Service would not adopt the accrual basis of accounting under either of the proposals presented, but would disclose the amount of its retiree health benefits obligation in the footnotes to its financial statements. While enhanced disclosure would be a positive step, we continue to believe that accrual accounting is needed in order to provide all stakeholders with the soundest and most transparent basis for decisionmaking. In our view, the enactment of P.L. 108-18 could be viewed as a significant event that triggers the need to reassess the accounting treatment currently used by the Service with respect to these obligations, and even more strongly reinforces our view that full accrual accounting should be adopted for financial statement reporting purposes. Given the unique nature of the Postal Service retiree health benefits obligation and the impact of P.L. 108-18, it may be prudent for the Service and its auditors to consult with the Financial Accounting Standards Board (FASB) on the appropriate accounting treatment for this obligation for financial statement reporting purposes. A postal official has expressed concern that accrual accounting for this obligation would result in immediate rate increases of significant magnitude. We recognize that such an approach may initially result in higher rate increases than would otherwise be the case under a pay-as-you-go basis; however, rate increases would likely be more moderate in the longer term. Various options may exist for addressing the effect of recognizing this obligation, including possible amortization of any current unfunded obligation over a reasonable time period, such as 20-40 years. To further explore these options, we believe that the Service should work with the Postal Rate Commission and other appropriate stakeholders to determine options for phasing in any potential effect on postal rates.
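The amortization option mentioned above can be illustrated with the standard level-payment formula: the annual payment that retires a principal P over n years at rate r is P * r / (1 - (1 + r)**-n). In the sketch below, the $45 billion unfunded amount and the 5 percent discount rate are assumptions for illustration, not figures the Service or the Commission has proposed.

```python
# Level-payment amortization sketch. The annual payment that fully
# retires a principal over a fixed number of years is the standard
# annuity formula; inputs here are illustrative assumptions only.

def level_payment(principal, rate, years):
    """Annual payment that fully amortizes `principal` over `years`
    at annual discount rate `rate`."""
    return principal * rate / (1 - (1 + rate) ** -years)

unfunded_obligation = 45e9  # assumed, per the report's rough figure
rate = 0.05                 # assumed discount rate

for years in (20, 40):
    pay = level_payment(unfunded_obligation, rate, years)
    print(f"{years}-year amortization: ${pay / 1e9:.2f} billion per year")
```

The sketch shows the trade-off behind the 20-40 year range: a longer schedule lowers the annual burden on ratepayers, but carries the unfunded obligation (and its interest cost) much further into the future.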
We will be assessing the impact of the accounting treatment for the retiree health benefits obligation for whichever proposal is adopted, as well as for the Service’s pension obligation, as part of our ongoing work. The Service faces an uncertain future. First-Class Mail volume continues to decline, and new revenue sources are not apparent. The Service faces significant unfunded obligations that must be addressed, the largest of which is for retiree health benefits. Further, decisions must be made as to whether current or future ratepayers, or taxpayers, should be responsible for paying these obligations. The Service has acknowledged that it needs to reduce its operating costs to deal with the decline in First-Class Mail volume and meet its obligations. The most direct way for the Service to do this is to become more efficient by standardizing its operations and reducing excess capacity in its network as part of an integrated strategy to rationalize its infrastructure and workforce. The Service has stated that it plans to reduce its debt and finance the capital investment necessary to transform itself through rate increases within the rate of inflation. It also proposes to prefund at least some of its retiree health benefits obligation. However, it is not clear based upon available information from the Postal Service whether it can accomplish these goals. If sufficient funding for transformation initiatives is not available, or if the Service does not achieve additional cost savings, significant additional efficiency gains may not be realized. In addition, if larger postal rate increases are needed, further declines in mail volume could result. These scenarios could thereby threaten the Service’s future financial viability.
It is against this backdrop of fairness to current and future ratepayers and taxpayers, affordability, and the ability of the Service to achieve its transformation goals that the Service’s proposal to eliminate the escrow requirement and its two funding proposals must be weighed. We believe that the continuation of the escrow requirement after fiscal year 2005 without allowing the Service to use the funds has the potential for significantly raising postal rates unnecessarily. Rate increases of the magnitude necessary to fund this escrow requirement in the future may precipitate further declines in mail volume and could hinder the Service’s ability to achieve other financial goals. Furthermore, Congress has other means by which it can direct or guide the Service in its use of funds if it chooses to do so. Both funding proposals presented by the Service are generally consistent with the provisions of P.L. 108-18. Proposal I, which is preferred by the Service, hinges on transferring the responsibility for military service pension costs from the Service to the Treasury. Proposal I would result in a greater postal rate increase and would shift more of the responsibility for the retiree health benefits obligation to current ratepayers. Proposal II, on the other hand, would require less of a postal rate increase, focus more on rate mitigation, and shift less of the responsibility for the retiree health benefits obligation to current ratepayers than Proposal I. This would leave future postal ratepayers with more of the burden of paying for costs unrelated to products and services they receive. Under both proposals, a portion of the retiree health benefits obligation would remain unfunded, and the Service currently does not intend to account for or report on its retiree health benefits obligation on an accrual basis under either proposal. Thus, the full extent of the Service’s obligation would not be recognized on its financial statements. 
Finally, the Service anticipates that it will be able to pay down debt and fund capital investments through inflation-based rate increases under both proposals. In our view, the Service needs to begin addressing its retiree health benefits obligation as soon as it can afford to do so, and to the extent it can. The most substantive way it will be able to do this, as well as enhance its overall financial viability, is by effectively implementing the transformation goals it and the President’s Commission set forth, particularly by becoming more efficient and rationalizing its infrastructure and workforce. It is therefore critical for the Service to have the capital funding needed for transformation. Although the Service believes it would be able to generate enough funds, this is not clear because the Service has not yet presented a comprehensive integrated infrastructure and workforce rationalization plan. However, the Service has agreed to do so, as well as to report periodically on its progress in implementing its Transformation Plan. In addition, a number of technical issues associated with the Service’s two funding proposals need to be considered, including the implementation of any prefunding of the Service’s retiree health benefits obligation and the manner in which the Service should amortize and report on its obligation. To ensure continuing progress in addressing the Service’s financial challenges, we suggest that Congress consider the following:

- Repealing the escrow requirement after it receives an acceptable plan from the Service describing how it intends to rationalize its infrastructure and workforce, and once it is confident that the Service is making satisfactory progress on transforming itself into a more efficient organization and implementing its transformation goals.
- Directing the Service to fund specific purposes that Congress believes are especially important—such as prefunding the retiree health benefits obligation or supporting and possibly accelerating the Service’s transformation efforts—if the Service does not provide an acceptable plan for rationalizing its infrastructure and workforce or show satisfactory progress in implementing transformation, or if Congress wants greater assurance that the Service will spend funds in a particular manner. In this regard, we have already recommended that the Service provide periodic reports on the status of its transformation initiatives and other Commission recommendations.
- Addressing implementation issues related to the retiree health benefits obligation. For example, one key issue that would need to be further explored is what options may be available that would allow the Service to amortize its unfunded retiree health benefits obligation over a specified time period (e.g., 20-40 years) and prefund the retiree health benefits obligation for future retirees.

The Postal Service provided comments on a draft of this report in a letter from the Chief Financial Officer dated November 21, 2003. These comments are summarized below and reproduced in appendix II. The Service’s letter stated the following: It was pleased that our report found its proposals to be consistent with P.L. 108-18, and that its preferred proposal presented a more equitable balance of costs between current and future ratepayers. It would have to raise rates to generate funds for the escrow requirement. The issue of the affordability of the proposals should be viewed as a question of whether the ratepayers can afford them.
It was concerned with our recommendation that Congress repeal the escrow requirement after it receives an acceptable plan from the Service concerning rationalization of its infrastructure and workforce, and if Congress believes that the Service is making satisfactory progress on its transformation goals. The Service believes that it already provides adequate information to Congress for reviewing its plans and progress on transformation. Thus, the Service believes that using the escrow as an oversight mechanism is not necessary and will result in forcing the Service to raise rates. It believes that its Proposal I is in the best interest of the taxpayers and postal stakeholders. In response to the Service’s comment regarding the affordability issue, we agree that affordability to ratepayers is an important consideration and discuss the impact of these proposals on rate increases and volume. We have also added language to our report to clarify this point. Regarding the Service’s concern about tying the escrow requirement to an acceptable infrastructure and workforce rationalization plan, we understand the Service’s concern that if the escrow requirement is not repealed, it would have to raise rates unnecessarily. We agree that establishing an escrow account without allowing the Service to use the funds would not be a desirable outcome, and that is one of the reasons why we suggested that Congress consider repealing the escrow requirement. On the other hand, contrary to the Service’s view, we believe the escrow requirement is an opportunity for Congress to review how the Service plans to address a number of long-term challenges, including debt repayment, capital projects, an unfunded retiree health benefits obligation, and its progress toward transformation. 
If the Service provides Congress with an acceptable plan in the next several months and Congress finds the plan and the Service’s transformation progress satisfactory, we believe Congress should have sufficient time to repeal the escrow requirement so that an escrow account would not be needed. Thus, the Service would not have to include the operating expense associated with the escrow requirement in its rate base for the next rate case filing. Alternatively, if Congress is not satisfied, it could direct the Service to fund specific activities or purposes through means other than an escrow requirement. Finally, the Service believes that using the escrow requirement for additional oversight is not needed, because it has provided Congress with adequate information on its plans and progress toward transformation. While we agree that the Service provides a variety of reports and plans to Congress, including its November 2003 Transformation Plan Progress Report, the Service has not provided Congress with a comprehensive and integrated infrastructure and workforce rationalization plan. We believe such a plan is needed because the Service’s rationalization of its infrastructure and workforce is among the most important initiatives in the Service’s Transformation Plan since it will significantly affect the Service as well as so many employees, mailers, and communities. Recognizing the widespread interest and potential controversy associated with any changes in this area, it is critical that the Service inform Congress and the public about its rationalization strategies and plans. We, as well as the President’s Commission, believe that these initiatives are also key to the Service’s efforts to cut costs and become more efficient. 
Accordingly, we believe oversight in this area is necessary, and that information related to the cost of these initiatives and the Service’s ability to fund them will be needed to assure Congress that the Service is continuing to make progress in implementing its Transformation Plan. We will send copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, as well as Representative John M. McHugh, Chairman of the House Special Panel on Postal Reform and Oversight; Representative Danny K. Davis, Senator Daniel K. Akaka, Senator Thomas R. Carper, the Postmaster General, the Secretary of the Treasury, the Director of the Office of Personnel Management, the Director of the Office of Management and Budget, the Chairman of the Postal Rate Commission, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. Staff acknowledgments are included in appendix III. If you have any questions about this report, please contact Bernard L. Ungar, Director, Physical Infrastructure Issues, at (202) 512-2834 or at [email protected]. Our objectives for this report were to fulfill our legislative mandate to evaluate the Postal Service’s proposal for use of the savings accruing to the Service from enactment of pension reform legislation. We evaluated the report based on its consistency with P.L. 108-18. We also addressed the escrow requirement that the Service identified as an issue in its report, and identified issues based upon our previous work that Congress may want to consider in assessing the Service’s proposals, including the fairness and affordability of the proposals, and the ability of the proposals to help the Service achieve its transformation goals. Finally, we discussed other pertinent issues that we identified in the course of our review. 
To assess whether the proposals were consistent with the provisions of P.L. 108-18, we reviewed the legislative history of P.L. 108-18. We then assessed how well each of these proposals addressed the Sense of Congress and the Matters to Consider expressed in that legislation. We also reviewed the Commission recommendations to determine if the proposals were consistent with this work. To assess the escrow requirement, we reviewed the Service’s report, interviewed postal officials, and analyzed the Postal Service’s financial data to assess the impact of the escrow requirement on the Service’s financial situation. We also interviewed congressional staff to discuss the purpose of this account. To identify issues we had previously reported on, we reviewed our previous work. To assess how well each proposal addressed fairness issues, we reviewed Postal Service documents and interviewed Postal Service officials. We also assessed the affordability of each proposal by obtaining and analyzing Postal Service documents, including the Five-Year Strategic Plan FY 2004-2008, the Integrated Financial Plan for Fiscal Year 2004, the Five-Year Strategic Capital Investment Plan 2004-2008, annual reports, and materials provided by the Service in support of its proposals. We did not independently verify any of the financial data provided by the Postal Service. We also reviewed actuarial reports regarding the retiree health benefits obligation, and analyzed the Service’s proposed funding mechanism. We did not independently verify any of the actuarial reports. We also reviewed the Service’s April 2002 Transformation Plan to assess progress in this area. To assess the impact on the federal budget, we reviewed the federal budget and documents prepared by the Congressional Budget Office related to the effect of P.L. 108-18 on the federal budget, and we conducted interviews with officials from the Congressional Budget Office. 
To identify other pertinent issues that Congress may want to consider, we reviewed Postal Service documents, the Commission’s report, and our previous work. We also conducted interviews with congressional staff, OPM, and Postal Service officials. The Service raised another issue in its report that was not within the scope of our review. The Service has expressed concern with the method that OPM used to determine the amount of the postal CSRS fund. The Service believes that OPM’s methodology assigns an unreasonably low portion of the retirement benefit to the federal government, so it provided OPM with two alternatives to consider. OPM did not agree with the first alternative and did not respond to the second alternative. P.L. 108-18 required OPM, in consultation with the Postal Service, to develop the methodology used to determine the amount of the postal CSRS fund. The law also afforded the Service the opportunity to appeal OPM’s methodology to the Board of Actuaries of the Civil Service Retirement System, which the Service is currently considering. Thus, we did not include this issue in the scope of our review. We conducted our review at Postal Service headquarters in Washington, D.C., from October 1, 2003, through November 25, 2003, in accordance with generally accepted government auditing standards. Bernard L. Ungar, (202) 512-2834. Teresa L. Anderson, Alan N. Belkin, Christine Bonham, Margaret Cigno, Nikki Clowers, Kathy Gilhooly, and Kenneth E. John made key contributions to this report.
In April 2003, Congress enacted the Postal Civil Service Retirement System (CSRS) Funding Reform Act of 2003 (P.L. 108-18), which lowered the Postal Service's (Service) annual payment for its CSRS obligation by over $2.5 billion beginning in fiscal year 2003. P.L. 108-18 requires, among other things, that (1) the Service begin making payments into an escrow account in fiscal year 2006, (2) the Service issue a report on its proposed use of "savings" resulting from the lower CSRS payments, and (3) GAO evaluate the Service's report and present its findings to Congress. GAO evaluated whether the Service's proposals were consistent with P.L. 108-18; the impact of the escrow account; and whether the proposals were fair to current and future ratepayers, affordable, and helped achieve transformation goals. The Service's report presented two proposals for how it would use the "savings," and GAO found both to be generally consistent with P.L. 108-18.
The first proposal assumes that responsibility for military service pension costs shifts to the Treasury Department and proposes prefunding retiree health benefits for retirees and current employees. The second proposal assumes that the Service retains responsibility for military service pension costs and proposes prefunding retiree health benefits only for new employees. Both proposals assume that the Service would pay down debt and fund capital investment through inflation-based rate increases. Under both proposals, the Service proposes that the escrow requirement be eliminated, so that the Service would not have to include $3 billion as a mandated incremental operating expense beginning in fiscal year 2006. The Service cannot use the escrow funds unless Congress eliminates the escrow requirement or specifies by law how these funds may be used. If no action is taken, the Service believes that it would have to raise rates higher than would otherwise be necessary. The escrow requirement provides Congress an opportunity to review how the Postal Service will address a number of long-term challenges, such as progress toward transformation and funding its retiree health benefits obligation. Once Congress is satisfied, it could repeal the escrow requirement so that an escrow account is not needed. GAO assessed the Service's two proposals according to their fairness, affordability, and the ability to achieve transformation goals.

Fairness: Proposal I strikes a more equitable balance of allocating costs between current and future ratepayers, because benefits earned by today's employees will be built into the current rate base. Under Proposal II, much of the retiree health benefits obligation would remain unfunded, thereby placing the burden of the benefits being earned today on future ratepayers.

Affordability: The Service's proposals attempt to balance short-term rate mitigation with some level of prefunding to address its long-term obligations.
The first proposal would require a larger postal rate increase than the second proposal and would prefund more of the retiree health benefits. The second proposal focuses more on rate mitigation. Given the Service's uncertain financial future, its ability to raise revenues, reduce costs, and improve productivity and efficiency is critical to affordability.

Transformation goals: Although the Service believes it can pay down debt and fund the capital investments associated with its transformation initiatives, this is not clear because the Service has not yet presented a comprehensive, integrated infrastructure and workforce rationalization plan. GAO has previously recommended that the Service provide Congress with such a plan and periodic reports on its transformation progress. The Service disagrees with GAO that the escrow repeal should be tied to a plan.
Since its inception in 1989, the Environmental Management (EM) program has used management contractors to perform cleanup projects and operate its major sites. While EM contracts authorize fees (i.e., profits) intended to motivate management contractors to perform well, subpar performance in controlling costs and meeting schedules has repeatedly occurred. For example, a 1996 study commissioned by EM found that while cost and schedule performance had improved since 1993, cost overruns on EM projects still ranged from 30 percent to 50 percent. More broadly, in November 1996, we found that, of 15 major system acquisitions completed by the Department of Energy (DOE) from 1980 through 1996, the projects cost an average of 63 percent more than the original cost estimates and were completed an average of 71 months late. More recently, DOE’s Inspector General found a number of problems with the implementation of performance incentives in management contracts, including DOE having paid incentives for work that was not completed by the required performance date, for work done before performance measures were established, and for work that was not done at all. EM’s privatization program is one aspect of DOE’s Department-wide effort, begun in 1994, to reform the Department’s contracting practices, including an increased emphasis on the use of performance incentives and fixed-price contracts. EM’s privatization approach currently has two key elements. First, privatization uses fixed-price contracts under which the contractor is paid a fixed amount for acceptable goods and services regardless of the costs the contractor incurs. Second, privatization contractors are expected to provide private financing for the construction of facilities, if needed, to produce the final product EM is buying. The privatization program receives a separate appropriation to cover the capital investment portion of these contracts.
However, in the event the contract is terminated by the government before completion, the privatization funding will be used to reimburse the contractor for its capital investment. If the contract is continued through completion, the privatization funding will be used to repay the capital investment as acceptable goods or services are provided. Although this is the current approach to privatization, according to DOE officials, EM’s privatization program will continue to evolve over time as DOE learns more through evaluating actual business proposals. The privatization program was first funded in fiscal year (FY) 1997, when the Congress appropriated $330 million to support five projects, including the Tank Waste Remediation System at Hanford (see table 1 below). In FY 1998, the Congress provided an additional $200 million for one existing project and four new projects, including Spent Nuclear Fuel projects at Savannah River and at Idaho, a transportation project at Carlsbad, and a waste disposal project at Oak Ridge. In addition, the Congress provided $31.7 million in FY 1998 through the Defense Facilities Closure Projects account for two smaller privatization projects at EM’s Fernald Environmental Management Project in Ohio. The FY 1999 budget request includes about $517 million to continue work on ongoing privatization projects at Hanford, Idaho, and Oak Ridge, and one new transportation project administered by the Carlsbad Area Office. In 1997, we reported problems with DOE’s FY 1997 and FY 1998 privatization budget requests. These problems included project cost estimates that did not always reflect all relevant costs, such as those that would be incurred by the sites’ management contractors to support privatized projects. In addition, funding for some projects was not needed when requested. For example, although funds were requested for FY 1998, we found that the Power Burst Facility at Idaho would not be ready for deactivation until FY 1999.
In addition, in computing cost savings, EM did not always compare projects of comparable scope, as in the case of the Savannah River M-Area Mixed Waste Tank Remediation project. Finally, in its fiscal year 1997 budget request, EM cited the Idaho Pit 9 project as a successful privatization on the basis of its placement of a fixed-price contract. However, we found that, since the contract was let, the project has fallen significantly behind schedule and that EM and its management contractor are involved in a disagreement with the fixed-price subcontractor over a number of performance issues. The future course, including the ultimate cost, of this project is uncertain until these disagreements have been formally resolved. These early problems with implementing the privatization program have led to concern in the Congress about whether privatization, as defined by EM, is appropriate for large, capital-intensive projects. These concerns led the Congress to deny a substantial portion of EM’s FY 1998 privatization budget request and to require EM to submit detailed reports analyzing privatization contracts for a 30-day congressional review before incurring any additional contractual obligations. Specifically, DOE cannot (1) enter into a new privatization contract, (2) exercise authorization to proceed with a privatization contract, or (3) extend a privatization contract by more than 1 year without providing the Congress an opportunity to review the proposed action. The Chairman and Ranking Minority Member of the House Committee on Appropriations, Subcommittee on Energy and Water Development, asked us to review EM’s privatization program. 
Specifically, we determined (1) what conditions need to be present in order to successfully use fixed-price contracting for EM privatization cleanup projects, (2) what alternative financing approaches could be used for EM privatization contracts, and (3) how alternative financing methods for EM privatization projects might affect budget scoring. To determine the elements needed to successfully use fixed-price contracts for cleanup projects, we visited three Department of Energy sites with active privatization programs—Hanford, Idaho, and Oak Ridge. During our site visits, we gathered information on cleanup projects formally proposed for privatization. We also reviewed a judgmentally selected group of cleanup projects that used an alternative to the traditional method of having the management contractor perform the work on a cost-reimbursement basis, such as the use of various forms of fixed-price and cost-reimbursement incentive contracts. In addition, to determine what factors DOE considers in selecting the type of contract for cleanup projects, we interviewed the privatization coordinators, contracting staff, and project management staff at each of the sites. At DOE headquarters, we also interviewed officials from (1) EM’s Office of Program Integration, (2) DOE’s Contract Reform and Privatization Project Office, and (3) DOE’s Office of Procurement and Assistance Management. We researched the Federal Acquisition Regulation for information on the various types of contracts, their major features, and criteria for selecting which type of contract to use. Finally, we interviewed officials from the Army Corps of Engineers’ Environmental Division. To identify alternative financing approaches for EM’s privatization contracts, we interviewed officials of companies currently participating in privatization projects and representatives of financial consulting firms that help clients secure capital for environmental and construction projects. 
We also discussed project financing issues with the DOE headquarters and field staff listed above and searched relevant financial literature to gather background on issues such as private firms’ capital structures and estimation of financing costs. Finally, we constructed a model using actual data from the contract for the Idaho Advanced Mixed Waste Treatment Project. We used the model to determine the comparative costs of financing under several scenarios. We received assistance in the modeling effort from our Office of the Chief Economist. To evaluate how alternative financing and contracting approaches might affect budget scoring of EM’s privatization projects, we analyzed the scoring guidelines in the Office of Management and Budget’s (OMB) Circular A-11. We also discussed budget scoring issues with officials of OMB and the Congressional Budget Office. We received assistance in this effort from our Accounting and Information Management Division. We provided a draft of this report to DOE for its review and comment. DOE’s comments and our response are included as appendix III and are discussed in the chapters where appropriate. We performed our review from July 1997 through May 1998 in accordance with generally accepted government auditing standards. Fixed-price contracts can be used for cleanup projects, including privatization projects, when certain conditions in the Federal Acquisition Regulation are met. For example, the regulation finds that fixed-price contracts are appropriate when projects are well-defined, uncertainties can be allocated between the parties, and sufficient price information and/or multiple competing bidders are available to help determine a fair and reasonable price for the work. In addition, EM’s projects place special demands on both EM and the contractor which must be considered when selecting the contracting strategy that will be most cost-effective. 
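The comparative financing costs examined in our model can be illustrated with a simple amortization calculation. The sketch below is not GAO's actual model and does not use the Idaho Advanced Mixed Waste Treatment Project contract data; all amounts, rates, and time periods are hypothetical. It shows only the basic mechanics of comparing repayment of a capital investment at a private cost of capital against repayment at a lower government borrowing rate.

```python
# Illustrative only: compare the level annual payment needed to recoup a
# capital investment under private financing (higher weighted average cost
# of capital) versus government financing (lower Treasury borrowing rate).
# All figures are hypothetical, not taken from any actual EM contract.

def level_payment(principal: float, rate: float, years: int) -> float:
    """Annual payment that fully amortizes `principal` over `years` at `rate`."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

capital = 300.0   # assumed construction cost, $ millions
years = 10        # assumed operating period over which costs are recouped

private = level_payment(capital, 0.12, years)   # assumed private WACC of 12%
treasury = level_payment(capital, 0.06, years)  # assumed Treasury rate of 6%

print(f"Annual payment, private financing:    ${private:6.1f} million")
print(f"Annual payment, government financing: ${treasury:6.1f} million")
print(f"Extra financing cost per year:        ${private - treasury:6.1f} million")
```

Under these assumed inputs, the spread between the two rates is the financing premium the government pays in exchange for shifting performance risk to the contractor; whether that premium is worthwhile depends on the risk costs discussed in the next chapter.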
For example, contracts for EM’s projects must consider the need to indemnify contractors for accidents involving nuclear materials. Over the past few years, EM has had some success with fixed-price cleanup contracts; however, experiences in Idaho and Oak Ridge illustrate that fixed-price contracting is not appropriate for every cleanup project. The Federal Acquisition Regulation finds that fixed-price contracting is the preferred type of contract for government acquisitions when certain conditions are met. In general, a fixed-price contract provides the most incentive for the contractor to perform efficiently and to exercise cost control. The risk of cost overruns from poor performance is generally borne by the contractor, which helps to protect the government’s interest. In addition, most fixed-price contracts are awarded through an open competition process that helps the government determine a fair price for the work. The conditions most conducive to using fixed-price contracts include the following: a clearly defined scope of work; low probability of major changes to work scope or conditions, to avoid costly renegotiation of price; existence of proven technologies that can be applied with no more than minimal modification; sufficient price information and/or multiple competing bidders to aid in determining a fair price for the work, that is, a price that minimizes the cost to the government while providing a fair profit to the contractor; easily verifiable performance measures to facilitate monitoring progress toward project completion; and thorough analysis of risks and appropriate allocation or sharing of risks so that the party best able to manage each risk is responsible for addressing it. When the conditions discussed above have been present, EM has used several varieties of fixed-price contracts to help ensure cost-effective cleanup. For example, Idaho and Hanford have used fixed-price contracts for laundry services for items such as contaminated workers’ uniforms.
DOE has estimated the savings from the Idaho contract at $3 million to $8 million over the next 10 years, and savings from the Hanford contract are estimated to be about $4.5 million per year. Hanford also contracted for the treatment of 24,000 to 26,000 gallons of tri-butyl phosphate wastes on a fixed-price contract at a total savings of about $1.5 million. At Savannah River, the M-Area Mixed Waste Tank Remediation project was privatized in 1993. While the contractor has experienced some technical problems, the contractor expects to successfully complete waste treatment operations under the terms of the original contract. EM estimates this contract will save a total of $19 million to $28 million. Finally, at Idaho the fixed-price contract for low-level waste treatment has a unit cost of about one-half that of the on-site facility that formerly performed this work. While EM’s focus in pursuing fixed-price contracts has been on saving money, fixed-price contracts can incorporate incentives that accommodate other goals. For example, Oak Ridge used an incentive to reduce the amount of waste created in its contract for the cleanup of the St. Louis North County site of the Formerly Utilized Sites Remedial Action Program. If the contractor shipped less waste, primarily soil, to the designated disposal site than estimated in the contract, DOE avoided the costs of waste disposal. As an incentive for the contractor to minimize waste shipments, DOE split the value of those savings with the contractor. Similarly, the contracts for the Oak Ridge Broad Spectrum Low-Level Mixed Waste Treatment are planned to include incentives for minimizing the volume of waste to be disposed of or stored after treatment. If a fixed-price contract does not appear to be cost-effective, other contracting methods may offer similar benefits. One such alternative is the use of incentives in cost-reimbursement contracts to motivate the contractor to achieve better cost control and performance.
For example, Oak Ridge and its management contractor agreed to cost-plus-incentive-fee contracts for several cleanup projects. While the contractor’s costs were covered, the only way for the contractor to earn a fee or profit on the work was to meet or improve on cost and schedule targets. Under this contract, if the contractor missed the targets by specified amounts, the fee earned could be a negative amount, that is, a loss. The first of these incentive projects was for the demolition of a powerhouse complex on the K-25 site. The project was completed 6 months ahead of schedule and $5 million under target cost. Under another cost-plus-incentive-fee contract for the demolition of cooling towers on the K-25 site, the contractor completed the project 2 months ahead of schedule and more than $5 million under target cost, partly by finding an innovative way to dispose of contaminated water that had accumulated in the basins under the cooling towers. (See app. I for further discussion of alternative contract types and illustrative examples of EM’s cleanup contracts using them.) When contracting for cleanup, EM must also consider additional factors that occur because of the unique characteristics of cleanup projects and the special conditions pertaining to working in the DOE complex. These factors include several types of risks that must be shared or allocated between EM and the contractor, the unique aspects of each project, and the availability of personnel to properly manage fixed-price procurements and projects. Risks must be identified and addressed in the contract so that each party’s responsibilities are clearly defined. 
Some risks, such as the possibility of changes in environmental regulations during a project’s lifetime, third-party liability and insurance, environmental indemnification, construction cost and schedule changes, interest rate fluctuations, material cost escalation, lack of sufficient appropriations to support the original schedule, and termination for convenience of the government, are not unique to EM’s cleanup projects but must still be considered in estimating the contract price. Other risks, such as indemnification for accidents involving nuclear materials, working with EM’s stakeholders, and addressing the concerns of unionized workers at EM sites, generally are not found outside of the DOE complex. There are also risks inherent in cleanup projects, such as determining whether the existing waste characterization data are sufficient to support technology selection or design, and how new or existing treatment technologies will perform on a specific waste stream. EM also faces risks such as pre-existing site conditions and paying contractors for idle facilities if, for example, EM or the management contractor fails to deliver waste for treatment as specified in the contract. EM’s 1997 Privatization Project Team Staffing Report states that “[i]mplementing privatization will require the modification of the Department’s traditional project management practices.” When compared to starting cleanup projects using management contractors, EM officials acknowledge that using fixed-price contracts requires additional project definition and planning before and during the procurement process. Under management contracts, EM managers could make changes as the project progressed without explicit recognition of the costs of those changes. While fixed-price contracts can help to reduce costs and improve performance when used properly, the cost of any changes to work scope must be negotiated with the contractor, potentially raising the price of the contract.
In recognition of that fact, EM’s Privatization Management Plan requires that privatization contracts contain a clause limiting who can direct the contractor to make changes that could affect the scope (and, implicitly, the price) of the contract. Not all EM managers are comfortable using fixed-price contracts because of this limited flexibility to make changes after the contract is awarded. Using fixed-price contracts requires that employees have a different skill mix than EM has needed in the past to manage cleanup projects through its management contracts. The Project Team Staffing report also highlighted some areas in which EM managers will need new or strengthened skills to effectively implement the program. For example, the report notes that privatization procurements require more effort in the early stages of procurement development and more staffing in contract administration and monitoring. The report also recognizes that EM project teams have not traditionally had all of the skills—such as those associated with corporate budgeting, capital market analysis, financing of employee benefit programs, and hands-on experience developing complex schedules and project management plans—needed to ensure that privatization procurements and contracts are fully executable. Consequently, some project managers and procurement staff may need additional training to use fixed-price contracts to full advantage. One step DOE has taken to address these new demands on its staff is to require that all privatization procurement requests for proposals and contracts be sent to headquarters for review and concurrence by functional experts, staff in the Office of Procurement and Assistance Management, and other key officials before they are issued. In addition, EM management is working with the field offices to develop a new training curriculum to provide project managers and procurement staff with additional skills so that they can better recognize when to use fixed-price contracts.
Our work has repeatedly highlighted continuing problems with DOE’s management of projects and contracts. In November 1996, we reported that lack of sufficient DOE personnel with the appropriate skills to oversee contractors’ operations was one of the key factors underlying the cost overruns and schedule slippages DOE has experienced in major systems acquisitions. In March 1997, we reported that a key cleanup project at EM’s Fernald, Ohio, site has experienced significant delay and cost growth because DOE did not assign a sufficient number of staff with the proper skills to the project. Finally, as we discuss in detail in the next section, Idaho has experienced problems with the Pit 9 cleanup, which DOE chose to privatize, in part, because of the lack of in-house expertise in large remediation projects. Without careful attention to devising the right type of contract, the unique aspects of cleanup projects, and proper management oversight, EM may not get the cost reduction and performance it anticipates from using fixed-price contracts. As we noted in our recent report on DOE’s estimates of potential savings from privatizing cleanup projects, DOE’s use of fixed-price contracts has not always been an effective method to minimize cost growth on projects. EM contracted with a consulting firm, which issued a report in November 1993 and an update in April 1996, to review EM’s performance on cleanup projects performed under both cost-reimbursement and fixed-price contracts. The report found that EM’s costs for environmental work were substantially higher than private industry’s. In 1993, it found that growth from estimated to actual costs on a sample of 65 projects with fixed-price contracts was almost 75 percent. In the 1996 update, it reported that EM’s projects typically cost 25 percent to 40 percent more than similar projects in the private sector. 
While it found that EM’s cost performance had improved since the 1993 review, EM was still experiencing cost growth in the range of 30 percent to 50 percent over original estimates. It concluded that this cost growth has occurred primarily because projects were poorly defined, leading to change orders after the contracts were signed. In 1994, Lockheed Martin Idaho Technologies Company, the management contractor at Idaho, awarded a fixed-price subcontract to Lockheed Martin Advanced Environmental Systems (LMAES) for the cleanup of Pit 9. Pit 9 is about one acre in size and contains various wastes ranging from contaminated rags to plutonium-contaminated sludge. The cleanup was expected to cost about $200 million and to be completed in 1999. DOE chose a fixed-price approach for the Pit 9 project because Department officials believed a fixed price would help limit the project’s total cost and provide an incentive for contractors to use efficient practices in carrying out the project by shifting the risk of nonperformance to the contractors. During the early stages of the procurement process, concerns arose about the appropriateness of a fixed-price approach given the uncertainty about the contents of the pit. Nevertheless, senior DOE officials decided that this approach was warranted, given the high costs and the inefficient performance the Department had experienced with cost-reimbursement contracts, private industry’s expressed interest in performing the cleanup using a fixed-price arrangement, and the potential benefits of the approach. However, in March 1997, when the subcontractor estimated that project completion would be 26 months behind schedule, LMAES requested an equitable adjustment and conversion of the contract type to cost reimbursement. 
LMAES claims that DOE failed to properly describe the contents of the pit and that DOE and its management contractor have interfered with the contractor’s operations, preventing it from meeting its contractual commitments. DOE and the management contractor at Idaho disagree with LMAES’ claims and counter that the contractor failed to properly manage the project. LMAES has requested a total of $257 million for costs through June 1997, $78 million more than the project was expected to cost, but the waste retrieval and processing facilities are not ready and no wastes have been retrieved or processed. As of May 1998, these issues remain unresolved and the project remains stalled. In Oak Ridge, a multiphase cleanup project was discontinued after the first phase because the treatment system proposed by the contractor was too expensive and treatment was determined not to be necessary. The management contractor, Lockheed Martin Energy Systems, attempted to contract with multiple firms for the first phase of the West End Treatment Facility project to design a treatment process for a fixed payment. However, they discovered that because the project required each contractor to be able to perform several different types of activities—such as removing sludge from storage tanks, transferring the waste to a treatment facility, and treating the waste—only one firm submitted a responsive bid. Ultimately, the management contractor recommended to EM that the second phase procurement for waste treatment be canceled, but because only one contract had been let, and that contractor had invested more than the fixed amount, EM ended up paying a negotiated equitable adjustment that more than doubled the cost of the first phase contract from $400,000 to about $900,000.
In retrospect, EM and management contractor officials told us that they should have reconsidered the project when only one responsive bid was received and determined why they did not receive the level of competition they were expecting. The lack of competition in the procurement for the first phase of this project ultimately led to increased costs when the later phases of the project were canceled.

Another project at Idaho, for the long-term storage of damaged fuel from the Three Mile Island reactor, has been delayed, and the fixed-price contract has been modified 12 times. The Idaho project managers stated that a fixed-price contract would probably not have been chosen for this project if they had known that a change from DOE regulation to Nuclear Regulatory Commission regulation would be made and that the condition of the temporarily stored fuel was different from what was expected at the time the contract was awarded. In this instance, the delays and contract modifications have added about $4 million (or 33 percent) to the cost of the project, raising it from $12 million to $16 million.

EM’s privatization program relies on private financing of construction costs to create a performance incentive for the contractor to construct a successful facility. However, private financing increases the performance risk borne by the contractor, and as a result, private financing costs can be significant. Other financing options exist that would leave some performance risk with the government by increasing the use of government financing. However, the risk associated with these options could result in significant costs to the government that may offset—or more than offset—the benefit of lower financing costs. In weighing the financing and risk costs, consideration should also be given to the impact of the option selected on ownership of facilities, government oversight, and the terms of contractors’ performance.
EM’s privatization program relies on private financing for the acquisition of needed cleanup facilities and equipment. Under EM’s approach, the contractor will own all facilities required to deliver the desired cleanup services. The contractor is responsible for all construction costs, including the development of technologies, procurement of equipment, and new-facility construction. In addition, the contractor is expected to finance these construction costs until the facilities are completed and operations begin. Financing costs include the cost of raising money, taxes, and profit. The contractor is expected to provide the financing through some combination of its own funds (owners’ equity) and borrowed funds (debt). As the contractor begins to deliver cleanup services, it is paid for its operating costs. In addition, each year the contractor is paid a portion of the construction and financing costs it has incurred until these costs are eventually recouped. These payments for the contractor’s construction and financing costs are directly tied to the amount of cleanup services it provides.

EM expects that its private financing approach will ensure that contractors are properly motivated to perform successfully in two ways. First, because a contractor’s recoupment of its investment depends on performance, it will have a greater incentive to perform. Second, because the contractor is financing construction through the use of debt, EM believes that the lenders will provide third-party oversight to protect their investment. Lenders are likely to hire consultants to review all aspects of the contractor’s plans to ensure that the project is feasible, which helps minimize the likelihood of contractor failure.
In addition, if the contractor does fail to complete the project for some reason, this oversight provides further assurance to the lenders that they could take over the project, bring in another contractor to complete it, and recoup their investment. While fixed-price contracting is believed to provide greater control over price, EM believes that private financing is key to ensuring that the project is successful. With only a fixed-price contract and no private financing at stake, EM is concerned that it would have little recourse against a contractor that does not deliver as promised. EM’s concern stems from the fact that contractors that have expressed an interest in large cleanup projects have indicated that they will perform the contract through separate subsidiaries, commonly known as limited liability companies, that are heavily debt-financed and have few assets of their own. Without appropriate warranties from the parent company, the use of these limited liability companies can financially and legally isolate the project from the parent companies and limit the parent companies’ liability for contract performance. However, under such an arrangement, EM is concerned that if the contractor fails to meet the terms of the contract, the contractor could shut down, leaving EM with an inoperable facility and little hope of recourse against a heavily debt-financed company with few assets.

The total capital cost of a facility consists of the construction costs (including design, construction, and equipment procurement costs) and the financing costs. Private financing costs can be high and can significantly increase total capital costs. For example, under DOE’s contract with British Nuclear Fuels Limited, Inc., to build the Advanced Mixed Waste Treatment Project in Idaho, EM will pay construction costs of $244.6 million in 1998 dollars.
Private financing of these costs will add another $137.9 million, more than half of the construction costs. As EM considers larger cleanup projects, such as Hanford’s Tank Waste Remediation System project, which is expected to have construction costs of more than $1 billion, concerns have been raised about whether private financing is a realistic alternative.

Total private financing represents only one end of a continuum of construction financing options. Total government financing, as traditionally used in EM’s cost-reimbursement management contracts, represents the opposite end. Under total government financing, contractors are paid as costs are incurred, eliminating the need to arrange private financing to carry these costs. The performance risk faced by the contractor is also low because payment is based on costs incurred, not on the performance of cleanup services; the government, through EM, bears the bulk of the performance risk. Between these two extremes, other financing options exist that attempt to strike a balance between financing cost and performance risk. On the basis of reviews of the literature and discussions with government and private-sector officials involved with privatization financing, we identified several other financing options. These options do not encompass all of the financing options available to EM, but they reflect a range that might be considered and the trade-off between financing costs and the performance risk borne by the government. These options include government guarantee of private-sector debt, a performance-based partial-payment plan, and progress payments.

A contractor’s construction financing will likely include a great deal of private debt financing. The total amount of debt financing is expected to account for about 70 percent or more of the total financing required.
Lenders will charge an interest rate on the debt on the basis of their perceived risk of losing the money loaned to the contractor for construction. The higher the perceived risk that the contractor will not be successful and will default on the loan repayment, the higher the interest rate charged for the debt financing, assuming private debt financing is available at all. However, if the government were to guarantee the lenders that they would not lose their money through default, then the interest rate charged, and therefore the contractor’s financing costs, would be lower. The government could choose to guarantee all or some portion of the total private debt, which could significantly lower the contractor’s financing costs. Even with a government guarantee of debt, the contractor would still face performance risk; that is, the contractor would not get paid unless it delivered cleanup services. However, with government involvement in the financing, the government would also bear a performance risk it did not face under total private financing. The guarantee would put the government in a position in which it would have to reimburse lenders for any defaults on debt financing for the project. If the amount of debt is significant, a 100-percent government guarantee could result in high costs to the government in case of default. Because of the default risk it faces, EM would be required to estimate the subsidy cost of providing any debt guarantee. This cost must be considered in addition to the contractor’s financing costs when weighing this type of financing option.

Another option that may be available to EM is a partial-payment plan tied to the contractor’s performance. Under this option, the government would pay for a portion of the construction costs as they are incurred, while the contractor would be required to finance the balance until it began operations.
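The mechanism described above (lenders pricing default risk into the interest rate, and a guarantee lowering that rate) can be sketched numerically. The interest rates, the 5-year carrying period, and the simple compound-interest treatment below are illustrative assumptions, not contract figures; only the $244.6 million construction cost and the roughly 70-percent debt share come from the report.

```python
# Illustrative only: shows why a government guarantee of debt, which
# lowers the lender's risk premium, reduces the contractor's financing
# cost. Rates and the carrying period are assumptions.

def financing_cost(principal, annual_rate, years):
    """Interest accrued on debt carried from drawdown to repayment,
    assuming the full principal is outstanding for the whole period."""
    return principal * ((1 + annual_rate) ** years - 1)

construction = 244.6          # $ millions (report figure)
debt_share = 0.70             # "about 70 percent or more" (report figure)
debt = construction * debt_share
years = 5                     # assumed carrying period

unguaranteed = financing_cost(debt, 0.10, years)  # assumed risky rate
guaranteed = financing_cost(debt, 0.06, years)    # assumed near-Treasury rate

print(f"debt financed:    ${debt:.1f}M")
print(f"interest at 10%:  ${unguaranteed:.1f}M")
print(f"interest at  6%:  ${guaranteed:.1f}M")
```

The gap between the two interest figures is the financing saving a guarantee could buy, which the report weighs against the subsidy cost the guarantee would impose on the government.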
Then, as in the private financing option, the government would make payments based on the performance of cleanup services—such as the amount of waste processed—that would allow the contractor to recoup its construction costs plus its financing costs. For example, the government could pay 80 percent of construction costs as they are incurred, while the contractor would be required to finance the remaining 20 percent. With the government providing an increasing portion of construction costs, the amount of private financing required would drop and financing costs could be lowered significantly. Under the performance-based partial-payment plan, the contractor would still face performance risk to the extent that recouping its portion of the construction and financing costs would depend on successful performance. However, as the amount of government financing increases, the amount of performance risk assumed by the government also increases. Many variations of this option may be considered that either increase or decrease the amount of funding the government provides.

Progress payments are used throughout the federal government for the procurement of various types of assets, including capital assets. Generally, the government uses progress payments to assist a contractor that will incur significant expenditures prior to delivery and would not be able to finance them itself. The government may provide up to 80 percent of the costs as they are incurred under a contract; the balance is generally paid upon successful completion of the contract. EM’s privatization projects with large construction costs will cause contractors to incur significant expenditures prior to the completion of facilities and the delivery of cleanup services. Under a progress payment option, the government could pay for a portion of the costs as they are incurred while the contractor would be required to finance the balance of its costs.
This option is similar to the performance-based partial-payment plan; however, under the progress payment option, the contractor would recoup its construction costs plus its financing costs once the cleanup facility (the asset) was successfully completed. Payment to the contractor for construction would not be based on performance over an initial operations period. Financing costs would be lower because the contractor would not carry its construction costs over a period of operations. With the progress payment option, the contractor would still face performance risk for the delivery of a completed facility that works as designed. Many variations of this option could be considered that either increase or decrease the amount of funding the government provides. Once again, as the amount of government financing increases, the amount of government funding exposed to performance risk increases.

To evaluate the impact of other construction financing options on financing costs, we reviewed the financing schedule of EM’s privatization contract with British Nuclear Fuels Limited, Inc., to build the Advanced Mixed Waste Treatment Project in Idaho. The contract, signed in December 1996, is one of the few privatization contracts that have been signed whose construction costs are financed by the private sector. Assuming that construction costs of $244.6 million, in 1998 dollars, would be the same for each financing option, we analyzed the difference in financing costs for the five financing options. (For further detail and discussion of the analysis conducted and the impact on results of using different assumptions, see app. II.) Using the Advanced Mixed Waste Treatment Project as a model, total private financing represents the highest financing cost—$137.9 million—for construction financing. As the amount of government involvement in financing increases, the financing costs of the options decrease.
With a 100-percent government guarantee of debt, the contractor’s financing costs are $104.1 million. Under a performance-based partial-payment plan that assumes government financing of 80 percent of costs and payment of the balance over the first 5 years of operations, financing costs are $62.7 million. Under a progress payment option with the government financing 80 percent of costs until construction is completed, financing costs are $47.1 million. Finally, with total government financing, no private financing costs are incurred because contractors are paid as costs are incurred.

While government financing of construction costs would appear to be the most attractive option, under this approach the government assumes a much greater level of performance risk than it would face under a private financing option. This risk includes the risk that the facility the government finances will not be completed successfully or that it will experience significant cost growth. The potential costs associated with these risks could offset—or more than offset—any potential benefits of lower-cost government financing. On the basis of DOE’s past experience with major government-financed projects, including EM’s projects, these risks are real. For example, we found that between 1980 and 1996, 31 of DOE’s 80 major system acquisitions were terminated prior to completion after the government had expended over $10 billion, in part as the result of weaknesses in DOE’s contractor management. In addition, for the 15 projects that were completed, final costs exceeded original estimates by an average of 63 percent. However, it is difficult to determine how much of the costs attributable to these risks could have been reduced through the use of more private financing. We found that termination and/or cost growth of projects results from a variety of factors—some of which may be affected by the choice of financing.
For example, the risk of cost growth because of a flawed system of incentives for contractors may be reduced by private financing that provides better incentives to perform. However, other factors contributing to risk may not depend on the financing choice. For example, changes in work scope could result in terminations or cost growth under any financing approach. As a result, it is difficult to quantify the degree of performance risk borne by the government as government involvement in financing increases. This uncertainty is represented in figure 3.1 by a potential range of additional performance risk assumed by the government with increased levels of government financing.

The options that lie between total private financing and total government financing attempt to strike a balance in the trade-off between the cost of financing and the cost of added performance risk. The cost of added performance risk to the government is difficult to quantify, but it must be considered in weighing any decision to reduce private-sector risk (thereby increasing government risk) in order to lower financing costs. The consideration of added risk costs has been recognized in the government’s policy on the guarantee of debt. If EM were to pursue an option whereby it would guarantee debt, it would have to estimate and obtain funding for the subsidy cost of providing that guarantee. Thus, assuming a 100-percent debt guarantee, costs would include construction costs ($244.6 million), contractor financing costs ($104.1 million), and an estimated subsidy cost. That subsidy cost would largely consist of an estimate of the risk that a contractor might default on its debt obligations. While the subsidy cost is difficult to estimate, the risk of default could be high for the kind of complex facility that typifies some of EM’s cleanup projects.
If the subsidy cost estimate is higher than $33.8 million, then according to our model, this option would be more expensive than total private financing.

The consideration of added risk costs must also be recognized for other financing alternatives to private financing. Using our model, under a performance-based partial-payment plan whereby EM pays 80 percent of construction costs, EM has placed at risk $195.7 million in payments (80 percent of the $244.6 million in construction costs) over the 5 years of construction. This risk must be weighed against the 20 percent of construction costs plus the financing costs that the private contractor has at stake over the construction period and an initial period of operations. If the private contractor does not perform, it will lose its $48.9 million in construction costs (20 percent of the $244.6 million in construction costs) plus as much as $62.7 million in financing costs. In weighing this type of option, EM will have to consider whether the amount of private-sector investment at risk is enough to ensure that the contractor is motivated to deliver a facility that works as designed, without significant cost growth.

The consideration of the cost of added risk under a progress payment option is similar to that for the partial-payment plan discussed above. If the facility does not work, EM may not regain its $195.7 million. However, under a progress payment option, the contractor would be paid its construction and financing costs after the facility is successfully completed, thereby avoiding financing costs over the operations period. Thus, the government assumes some added risk that the facility may not operate as promised over the first 5 years of operations. Unlike under the performance-based option, the contractor will have no investment at stake whose recoupment depends on successful operations.
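The dollar comparisons above reduce to simple arithmetic on the report's figures; a minimal check, using only the construction cost, the reported financing costs for each option, and the 80/20 split:

```python
# Arithmetic behind the comparisons in the text, using the report's
# figures for the Advanced Mixed Waste Treatment Project model
# ($ millions, 1998 dollars). Only the percentage splits are computed.

construction = 244.6

# Financing costs reported for each option.
financing = {
    "total private financing":          137.9,
    "100% government debt guarantee":   104.1,
    "80/20 performance-based payments":  62.7,
    "80/20 progress payments":           47.1,
    "total government financing":         0.0,
}

# A 100-percent debt guarantee beats total private financing only while
# its subsidy cost stays below the financing-cost savings.
breakeven_subsidy = (financing["total private financing"]
                     - financing["100% government debt guarantee"])

# 80/20 performance-based plan: what each party has at stake.
gov_at_risk = 0.80 * construction        # payments EM makes during construction
contractor_at_risk = 0.20 * construction # construction costs the contractor carries

print(f"break-even subsidy cost:        ${breakeven_subsidy:6.1f}M")
print(f"government payments at risk:    ${gov_at_risk:6.1f}M")
print(f"contractor investment at stake: ${contractor_at_risk:6.1f}M")
for option, cost in financing.items():
    print(f"{option:34s} total capital ${construction + cost:6.1f}M")
```

A subsidy estimate above the roughly $33.8 million break-even makes the guarantee option more expensive than total private financing, which is the threshold the text describes; the total-capital column deliberately omits subsidy and other risk costs, which the report argues can offset the apparent savings of the cheaper options.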
In considering this type of option, EM will have to consider whether the payback (or amortization) of the contractor’s costs over the first 5 years of operations is necessary to ensure that the contractor has delivered an effective plant. An initial testing phase after construction may be sufficient, depending on the size and complexity of the project.

The choice of financing options is also shaped by other factors that affect total costs and financing decisions. As government involvement in financing increases, the government assumes more of an ownership role and has to exercise more oversight, an area in which DOE has not enjoyed success. More importantly, the actual terms of performance in the contract will dictate what performance risk is eventually assumed by the contractor and the government.

With an increased use of government financing, the issues of government ownership and oversight become important considerations. As the government provides more financing of construction costs, it becomes more likely that EM, rather than the contractor, will own the facility. However, along with the benefits of government ownership, EM must consider the negative consequences of ownership, particularly the demands of an increased oversight role. Financing construction costs could put the government in a position of ownership, especially if it is providing the majority of the funding. Such ownership is a benefit of government financing that addresses monopoly concerns about private ownership of cleanup facilities. For example, if the private sector owns a facility whose construction costs are paid for after an initial period of operations, the private owner could be placed in a monopolistic position for the remainder of the potential operating period. The government may then be at a disadvantage in negotiating prices for waste treatment because there will be no other facilities available to compete.
However, the government may be able to alleviate monopoly concerns by negotiating long-term operating agreements or by retaining a contract option to take title to the facility. Given EM’s acknowledged poor history of oversight, government ownership could also be viewed as a negative consequence of government financing. If EM begins to make payments prior to performance, it will need to assure itself that the contractor is making satisfactory progress; however, EM does not have a history of successful contractor oversight. The private sector views increased government oversight as meddlesome, inefficient, costly, and directly counter to the concept of allowing the private sector to decide how best to provide cleanup services.

Our discussion of construction financing options has focused on the level of risk that is transferred between the government and the private sector as the level of financing provided by each party changes. However, it is important to point out that the mix of financing provided by the government and the private sector has no bearing on the actual terms of performance that are agreed to in a contract. As noted earlier in our discussion of contracting, risks must be identified and addressed in the contract so that each party’s responsibilities are clearly defined. The government could face more risk and incur more costs under a contract that is totally privately financed if the terms of that contract give the contractor less responsibility for risks than another contract that includes government financing.

DOE agreed with our statement that while government financing appears less costly, the greater performance risk the government assumes when it finances a project, and the potential costs associated with this greater risk, could offset any apparent advantage gained by using lower-cost government financing.
However, DOE felt that we should have attempted to compensate for these potential increases in costs in the model we used to estimate the impact of financing alternatives. In earlier meetings, DOE officials had suggested performing a sensitivity analysis that would vary the construction costs to reflect various levels of cost growth, specifically 20, 40, and 60 percent. We considered performing the sensitivity analysis DOE suggested; however, we decided not to do so because we did not have a factual basis for assigning levels of cost growth to each of the financing alternatives we analyzed. If not properly managed, each of the alternatives, including the private financing option, could experience cost growth. However, we could not locate any data that would identify how much cost growth might be associated with one financing option versus another, and applying the same cost growth to all of the options would not change the relative results, only the total costs. To compensate, we emphasized throughout the report that cost growth was a possibility as the government took on more performance risk, and we cited what evidence we had, in terms of independent studies and GAO reports, to indicate how large this growth had been under DOE’s existing contracts.

Because of budget limitations or “caps” instituted to help balance the federal budget, all budget appropriations and spending for discretionary programs, such as EM’s privatization program, must be measured or “scored” to ensure that the caps are not exceeded. Federal agencies may acquire or use long-term assets constructed to meet the government’s needs, such as the waste treatment facilities EM needs, in several ways. Each of those arrangements may be scored differently.
Which arrangement and, hence, which method of scoring is most appropriate may change depending on how the asset is financed, whether the government takes ownership of the asset, and how much risk the government assumes for the cost of construction. Under the Budget Enforcement Act of 1990, as amended, discretionary spending is constrained by caps, or strict dollar limits, on both new budget authority and budget outlays. To ensure that the caps are not exceeded, the scoring rules contained in the conference report accompanying the Budget Enforcement Act of 1990, as amended, and published in the Office of Management and Budget’s (OMB) Circular A-11 are used to determine when budget authority and budget outlays are scored for discretionary spending proposals—including spending for capital assets. To stay within the caps, budget authority and the resulting outlays are limited for all programs. The way transactions between EM and its privatization contractors are structured affects how they are scored and, because of the budget caps, has consequences not only for EM but also for all the other programs and activities funded by the committees that provide EM’s appropriations.

There are several ways the federal government can acquire capital assets, or the use of capital assets, such as an office building or waste treatment facility, that are being constructed for its use. The most direct way is to simply purchase the asset outright, taking full ownership of it. In that case, budget authority for the full cost of the purchase would be scored in the year the budget authority is first made available, and budget outlays would be scored as payments are made to the contractor during construction. Alternatively, agencies may choose not to purchase the asset itself (for example, a waste treatment plant) but merely the services connected with the asset (for example, waste treatment services).
For such a service contract, the agency would need budget authority in each year equal to its legal obligations under the contract, including cancellation costs. In a case in which services will not be delivered until the construction of a facility is complete, outlays would not be scored during the construction period; instead, they would be scored as services are delivered.

In addition to the outright purchase of an asset or the purchase of services, the agency may choose to lease the asset from the private contractor. Under the budget scoring guidelines, the government may enter into three types of leases with private vendors—operating leases, lease-purchases, and capital leases. Operating leases may be used to contract for assets such as general-purpose office space. In an operating lease, among other criteria, the facility or equipment is not built to unique government specifications, there is a private-sector market for the asset, and the present value of the government’s lease payments does not exceed 90 percent of the asset’s fair market value at the beginning of the lease. For an operating lease, the agency would need budget authority, and would have outlays, in each year equal to the payments due to the contractor under the lease. Transactions that do not meet all of the criteria of an operating lease are considered either lease-purchases or capital leases. In a lease-purchase transaction, ownership of the facility or other assets transfers to the government at or shortly after the end of the lease; if ownership does not transfer, the transaction is a capital lease.
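The operating-lease present-value test described above can be sketched as a short calculation. All inputs (the payment stream, lease term, discount rate, and fair market value) are hypothetical illustrations; only the 90-percent threshold comes from the scoring criteria.

```python
# Illustrative check of one operating-lease criterion: the present value
# of the government's lease payments must not exceed 90 percent of the
# asset's fair market value at the beginning of the lease. All dollar
# inputs and the discount rate below are assumed, not from the report.

def present_value(payments, rate):
    """Discount a stream of end-of-year payments to the start of the lease."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

annual_payment = 1.2      # $ millions per year (assumed)
term_years = 10           # lease term (assumed)
discount_rate = 0.055     # assumed Treasury-based discount rate
fair_market_value = 12.0  # $ millions at lease inception (assumed)

pv = present_value([annual_payment] * term_years, discount_rate)
qualifies = pv <= 0.90 * fair_market_value

print(f"present value of payments: ${pv:.2f}M")
print(f"90% of fair market value:  ${0.9 * fair_market_value:.2f}M")
print("meets the operating-lease PV test" if qualifies else "fails the PV test")
```

A transaction failing this test (or any of the other criteria) would instead be scored as a lease-purchase or capital lease, with the earlier budget authority treatment the text goes on to describe.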
For a lease-purchase arrangement, the government’s risk is assessed against criteria that indicate the government’s acceptance of risk, such as whether (1) the government provides financing, (2) the government guarantees third-party financing, (3) there is no private-sector market for the assets, (4) the asset is built to unique specifications, (5) the risks of ownership do not remain with the contractor, and (6) the project is constructed on government land. For a lease-purchase without substantial government risk, the agency would need budget authority in the first year equal to the present value of its obligations under the lease, and outlays would be scored over the lease term. The government’s obligations would include the contractor’s capital investment and termination or cancellation costs. If the government does have substantial risk, budget authority would be scored in the same way, but outlays would be scored during the construction period in the same proportion as the contractor’s costs are incurred. Capital leases are scored in the same way as a lease-purchase without substantial government risk.

Finally, if an agency were to offer a federal government guarantee of some or all of a contractor’s debt financing, the subsidy cost of the guarantee would be scored. The agency would need specific legislative authority to offer a government loan guarantee. If that authority were granted, the agency would have to estimate the subsidy cost of the loan guarantee, which would be based on, among other factors, the risk of default or nonpayment of the loans. Estimating the subsidy cost is a very complex process and is subject to review by OMB and the Congressional Budget Office. The agency would need budget authority for the full net present value of the subsidy cost before it could make the guarantee. Outlays of the subsidy cost would occur over the same period, and in the same proportion, as the lender disbursed the loan to the contractor.
That is, if all of the loan money were disbursed in the first year, all of the subsidy cost would be recorded as outlays in the first year as well. If the loan were disbursed over a period of several years, the outlays would be spread over the same period.

Currently, OMB scores EM’s privatization projects as service contracts. Under this practice, EM must have enough budget authority each year during the life of the contract to (1) pay off its liability to the contractor if, for example, the project is canceled, and (2) pay for treated waste once facilities begin operations. The contractor is to provide all of the financing for constructing the necessary facilities and equipment to treat EM waste. EM does not intend to acquire title to the facilities that would be constructed by its privatization contractors, even when those facilities are built on federal land and are constructed to provide services strictly for the government. Under OMB’s current scoring, no outlays would be scored until construction of a project is completed and waste processing begins. Outlays would then be scored from the privatization account as the capital cost of the project is amortized, or repaid, over the first few years of operations. Therefore, while this option minimizes the impact on outlays in the early years of the privatization program, it will increase outlays dramatically in later years as these projects come on-line.

Government loan guarantees are usually offered only when the borrower or the project is too risky for private lenders. Because many of the projects proposed for privatization are technically risky—that is, they involve the use of innovative technologies that must be modified to meet EM’s needs—the subsidy cost of a loan guarantee for privatization projects could be substantial. For this option, EM also would need additional budget authority up front for the subsidy cost.
Outlays of the subsidy cost would occur over the construction period as the loan is disbursed by the private lender to the contractor. If EM were given authority to provide a loan guarantee for the construction of a contractor-owned facility, OMB may decide to continue to score the project’s total capital costs as a service contract. Alternatively, the capital costs might be scored as a capital lease. In that case, budget authority and outlays would occur sooner than under the current service contract scoring method. If EM used a performance-based partial-payment plan, scorekeeping guidelines could be interpreted to require EM to have budget authority up front for the net present value of the government’s share of costs. Outlays would occur during the construction period equal to the amount of incurred costs for which EM reimburses the contractor and during the initial period of performance for the remainder of the construction costs. For this option, which might be scored as a capital lease or a lease-purchase (if it is judged that EM has effective ownership), the timing of budget authority and outlays would change, occurring sooner than under the current service contract scoring scenario. Under this scenario, the greater degree of financial investment and risk that EM would incur could make government ownership of the facility more attractive than only contracting for the services of the completed facility. In that event, EM could choose to structure the contract so that it would acquire ownership of the facility at or near the end of the initial performance period. The initial performance period would provide EM assurance that the facility works and would give the contractor time to recoup all of its investment. However, in that case, the transaction could be deemed a lease-purchase with substantial government risk under the budget scoring guidelines. 
As a result, the timing of budget authority and outlays would change, occurring sooner than under the current service contract scoring scenario. If the construction were financed using progress payments, the transaction might be scored as a capital lease or a lease-purchase (if it is judged that EM has effective ownership). In that event, EM would need budget authority equal to the net present value of the government’s share of the costs plus a rate of return earned on the held-back portion. Outlays would occur during construction equal to EM’s share of the portion of costs incurred by the contractor and for the lump-sum payment of the held-back portion of the construction costs once the contractor’s work had been accepted. For this financing option, the timing of budget authority and outlays would change, occurring sooner than under the current service contract scoring scenario. In this scenario, EM would again be making a substantial financial investment in the facility and incurring a greater degree of risk than it would if the contractor privately financed the construction of the facility. In that case, EM might decide to include an option in the contract allowing it to take title to the completed facility, and the transaction might be considered a lease-purchase under the budget scoring guidelines. If so, EM would need budget authority equal to the full net present value of the project, regardless of what proportion of the costs was paid to the contractor in progress payments and what proportion was held back for lump-sum payment when the contractor’s work was accepted.

Finally, if EM fully finances the projects, it would need budget authority to cover the full amount of costs and fee or profit owed to the contractor for the construction of the facility. Outlays would be incurred for the amount of costs incurred by the contractor and any fee or profit earned in each fiscal year.
In that case, the government would bear the financial risk and, logically, may want to have ownership of the facility. The contractor would be reimbursed for all allowable costs, including costs for the design and testing of the facility and equipment, during the construction period. For this option, budget authority needs could be larger in the first years of the project than under EM’s current privatization approach, and once again, outlays would occur sooner. Under the budget scoring guidelines, how EM’s privatization projects are scored depends on two key factors: who owns the facility and, if the government will have ownership, what degree of risk the government assumes. However, several factors may cause EM to decide to own the facility itself. Some projects may require EM to make a large investment of government funding in the construction of a facility. In addition, privatization contracts are expected to contain clauses, such as termination for convenience and idle facility payments, to protect the contractor from loss if the project is canceled or delayed by the government. For example, a termination for convenience clause provides the government the option of canceling the project if EM cannot get sufficient funding to proceed in any fiscal year. The government may also be liable for payments for idle facilities if the contractor’s facility is ready to operate and EM fails to deliver waste to be treated. In such circumstances, EM may have to outlay a large proportion of the construction costs whether or not it receives waste treatment services, and it may be in the government’s interest to also take ownership of the facility. In that case, the scoring rules pertaining to outright ownership or lease-purchases would apply. Under other circumstances, such as if total private financing has the hoped-for effect of lowering the total cost and risk, pursuing service contracts may be the best decision. 
These factors, in addition to scoring implications, will need to be considered in deciding whether ownership of a capital asset is in the best interest of the government. In general, as the financial commitment of the government decreases (that is, moves further away from purchase), the amount of budget authority and outlays that must be scored up front also decreases. We found that this situation may tempt agencies to move away from ownership when caps are very restrictive and to choose arrangements in which budget authority and outlays are not scored all at once or as soon. In some cases, we found that these decisions resulted in agencies spending more than they would have if they had purchased the assets outright. Budget scoring does not affect the total cost of the projects but does change when budget authority is needed and when outlays occur. Under all of the alternative financing scenarios we analyzed, except possibly full-government financing, scorekeeping guidelines could result in EM needing more budget authority earlier in the projects and incurring outlays sooner than under OMB’s current method of scoring privatization projects as service contracts. EM officials have noted that one advantage of privately financing projects is that it allows EM to defer budget outlays to future time periods. While this may be true, EM’s decisions on how to structure privatization contracts need to consider the other factors we have discussed previously—contract type, financing method, risk allocation, and long-term cost—as well as the budget scoring implications of the contracts. DOE expressed the view that individual projects have different financing requirements that are not directly addressed by the current budget scoring guidelines of OMB Circular A-11. They also expressed the view that there is considerable flexibility in the scoring rules. We agree that the scoring guidelines do not directly address the unique projects that DOE is considering. 
We specifically state in all of our discussions of scoring that the scoring rules have to be interpreted for DOE’s projects. Unsatisfied with its management contractors, EM has attempted to improve the cost and schedule performance of the cleanup program through the adoption of its privatization approach. In theory, EM’s privatization contractors have a greater incentive to perform under a fixed-price contract than in the traditional cost-reimbursement environment, under which most cleanups have been performed. While we did find examples when the use of fixed-price contracting produced positive results, simply entering into a fixed-price contract is no guarantee of success. If fixed-price contracts are used in situations when they are not appropriate—for example, where waste is inadequately characterized—the cost and schedule performance of the contractor can be worse than under a cost-reimbursement contract. Private contractor financing, which has the potential to improve cost and schedule performance, comes at a significant increase in financing costs. However, it would be incorrect to look at this difference and simply conclude that traditional cost-reimbursement government financing is cheaper. The apparent difference in cost reflects the different amount of risk the government is bearing. Moreover, if the performance under the cost-reimbursement type of financing is as poor as past history would suggest, the difference, or “savings,” observed in our analysis could easily be consumed by cost overruns. With respect to scoring, how these projects are scored will depend on how certain key aspects of the scoring rules are interpreted. For example, if ownership is viewed as the critical variable and the government does not own the final facility, any approach we have analyzed could be scored as a capital lease. However, if the government assumes ownership upon completion of an initial performance period, then a lease-purchase would appear more appropriate. 
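As a rough illustration of how the interpretation turns on ownership and risk, the classifications discussed above can be compressed into a small decision sketch. This is a simplification for illustration only; the actual OMB Circular A-11 guidelines weigh many more criteria than the three questions below:

```python
# Simplified sketch of the budget scoring classifications discussed above.
# Real scoring decisions rest on the full OMB Circular A-11 criteria;
# this compresses them into three illustrative questions.

def score_project(gov_owns_at_end, substantial_gov_risk, service_only):
    if service_only and not gov_owns_at_end:
        return "service contract: annual budget authority over the contract term"
    if gov_owns_at_end:
        if substantial_gov_risk:
            return ("lease-purchase with substantial risk: up-front budget authority; "
                    "outlays scored during construction as costs are incurred")
        return ("lease-purchase without substantial risk: up-front budget authority; "
                "outlays scored over the lease term")
    return "capital lease: up-front budget authority; outlays over the lease term"

# A project EM would own after an initial performance period, with substantial risk:
print(score_project(gov_owns_at_end=True, substantial_gov_risk=True, service_only=False))
```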
Use of a government loan guarantee would require the estimation of the subsidy cost, for which additional budget authority would be needed, and could add significantly to the total budget authority required for privatization projects.

In the end, it is not simply a choice between traditional cost-reimbursement contracting and EM’s new privatization approach. As our analysis shows, a complex matrix of decision factors needs to be considered when deciding how to contract for and finance a cleanup. Among the factors that need to be weighed are the following: (1) What waste needs to be cleaned up and how well is the waste characterized? (2) How much competition is there among firms with the necessary cleanup expertise? (3) What financing options are available in the private sector? (4) What risks are associated with the cleanup and who is best prepared to bear them? (5) How well equipped is DOE’s staff to design and oversee a cleanup contract?

Once a contract type and financing method are chosen, DOE and the contractor would need to carefully develop a contract that clearly defines each party’s roles and accountability through provisions that allocate project risk between the parties, define DOE’s oversight role, and identify appropriate measures against which the contractor’s performance will be judged. Ideally, selection of the appropriate type of contract and method of financing for each project would be made on the basis of what will provide EM with the best chance of successfully completing its cleanup goals at the lowest total cost.
| Pursuant to a congressional request, GAO reviewed whether privatization will achieve the nuclear weapons waste clean-up goals expected by the Department of Energy (DOE), focusing on: (1) what conditions need to be present in order to successfully use fixed-price contracting for the Office of Environmental Management's privatized clean-up projects; (2) what alternative financing approaches could be used for Environmental Management's privatization contracts; and (3) how alternative financing methods for Environmental Management's privatization projects might affect budget scoring. GAO noted that: (1) fixed-price contracting, one key aspect of Environmental Management's privatization program, can successfully be used for environmental clean-up projects when certain conditions in the Federal Acquisition Regulation are met; (2) when these conditions exist, GAO found that the Office of Environmental Management has successfully used fixed-price contracts for a variety of activities ranging from cleaning up contaminated soils to decontaminating workers' uniforms; (3) however, when these conditions do not exist, GAO found instances in which clean-up projects being performed under fixed-price contracts encountered cost increases and schedule delays; (4) in addition, risks and issues that could affect the eventual performance of the contract must be clearly defined; (5) total private financing represents one end of a continuum of construction financing options; (6) private financing transfers performance risk from the government to the private contractor, but costs for this approach are significant because of the increased risk assumed by the contractor; (7) total government financing represents the other end of the continuum of options; (8) with government financing, financing costs are minimized, but performance risk which has also proven to be costly, remains with the government; (9) in between these two extremes, other financing options exist that attempt to strike a 
balance between performance risk and financing costs; (10) how Environmental Management's privatization projects are scored for budget purposes depends on the way certain key aspects of the scoring rules are interpreted; (11) Environmental Management's privatization projects are currently scored as service contracts; (12) the use of alternative financing methods may change the interpretation of the scoring guidelines for these projects; (13) as a result, under all of the alternative financing options that GAO analyzed, the Office of Environmental Management would need more budget authority earlier in the projects and would also incur outlays sooner than under the Office of Management and Budget's method; (14) a complex matrix of decision factors needs to be considered when deciding how to contract for and finance a cleanup project; and (15) once a contract type and financing method are chosen, DOE and the contractor would need to carefully develop a contract that clearly defines each party's roles and accountability through provisions that allocate the project's risks between parties. |
EPA does not actively seek out sites for the Superfund program but relies on states or interested parties to report them. Once reported, sites are added to EPA’s inventory for evaluation. As of March 1994, EPA’s inventory had 36,785 nonfederal sites, of which 1,192 had been placed on the National Priorities List.

Evaluation of potentially hazardous sites occurs in several stages. At the completion of each stage, EPA may determine that no federal action is needed or it may proceed to the next stage. First, EPA requires that a site receive a preliminary assessment within a year of its entry into the inventory. The preliminary assessment involves a review of available documents and possible site reconnaissance. If the preliminary assessment indicates a potential problem, the site moves to the next stage of evaluation—the site inspection—which involves collecting and analyzing soil and water samples as appropriate. If warranted by the results of the site inspection, sites enter the final decision process. This process involves other evaluations, including an extended site inspection, if needed; scoring under EPA’s hazard ranking system; and a judgment by EPA officials on the appropriateness of listing the site on the priorities list. An extended site inspection requires more samples and could involve installing wells to monitor groundwater or other nonroutine data collection activities.

The hazard ranking system is a method of quantifying the severity of site contamination to determine if a site should be placed on the list. The system assigns a numerical score based on the likelihood that a site has released or has the potential to release contaminants into the environment, the characteristics of the contaminants, and the people or environments affected by the release. A site must score at least 28.5 on the hazard ranking scale in order to be placed on the list.
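The threshold test above can be made concrete with a simplified sketch. The revised hazard ranking system combines separate contamination-pathway scores into one site score; the root-mean-square combination and the pathway values below are an illustrative simplification, since the real system scores each pathway from many underlying factors:

```python
from math import sqrt

# Hedged sketch of a hazard-ranking-style site score. The four pathway
# values are hypothetical; EPA's actual system derives each pathway score
# from factors such as likelihood of release, contaminant characteristics,
# and affected people or environments.

LISTING_THRESHOLD = 28.5  # minimum score for National Priorities List placement

def site_score(groundwater, surface_water, soil, air):
    """Root-mean-square of four pathway scores, each on a 0-100 scale."""
    return sqrt((groundwater**2 + surface_water**2 + soil**2 + air**2) / 4)

score = site_score(groundwater=60, surface_water=20, soil=10, air=5)
print(round(score, 1), score >= LISTING_THRESHOLD)
```

Note that one badly contaminated pathway (here, groundwater) can push the combined score over the listing threshold even when the other pathways score low.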
Sites can be dropped from further consideration following the extended site inspection or the scoring process. Sites also can be dropped from further consideration if, in the judgment of EPA regional officials, the sites do not pose risks great enough to warrant a Superfund cleanup. In addition to the sites following the process described above, the EPA inventory includes a large group of sites that have already been inspected but are awaiting reevaluation because of a change in the evaluation process. The Superfund Amendments and Reauthorization Act of 1986 required EPA to revise its evaluation system to make it more comprehensive and accurate in its assessment of threats to human health and the environment. According to EPA site assessment officials, the revision will change the mix of sites, but not necessarily the number of sites, that will end up on the priorities list. The revision was effective in March 1991. During the transition to the revised system, sites were evaluated through the site inspection stage using the original evaluation system. However, EPA decided to use the new system to make final decisions about placing these sites on the priorities list. In October 1991, EPA began to reevaluate these 6,467 sites, which it referred to as its evaluation backlog. Reevaluation could include collecting additional site information as well as limited sampling. As of the close of fiscal year 1993, EPA had completed this process for about 1,600 of the 6,467 sites. Fewer sites are being reported to EPA for evaluation, but site inspection results indicate that new sites reaching the site inspection stage are as likely to have contamination requiring a Superfund cleanup as those inspected in the past. The number of sites reported annually has been declining since fiscal year 1985. (See fig. 1.) In fiscal year 1993, 1,159 sites were added to the inventory—29 percent less than the prior year and 68 percent less than in fiscal year 1985. 
EPA attributed the decline since 1985 to the fact that many states now have their own Superfund programs. According to EPA site assessment officials, states are reluctant to report new sites, preferring instead to manage the cleanup themselves. EPA Region I site assessment officials suggested that states generally report sites that present challenging enforcement or cleanup problems. The percentage of sites that EPA believes warrant further consideration after completing the site inspection has been fairly steady for the last 10 years. (See fig. 2.) From program inception through fiscal year 1993, 43 percent of the 17,556 sites inspected were considered hazardous enough to need further consideration for the priorities list. In fiscal year 1993, 43 percent of the 725 sites inspected were also considered for further action. (App. II provides statistics on the number and percent of nonfederal sites accepted and rejected for further consideration after site inspection.) EPA officials do not expect to find in the future very large, heavily contaminated sites equivalent to Love Canal, which entered the Superfund program early in its history. However, the officials believe that contamination at newly discovered sites is generally not less severe than at previously reported sites—just less obvious. Earlier site discoveries more often included sites where the hazards were visible, such as barrels of hazardous waste above ground. Sites that are being discovered and reported now, according to EPA officials, are those with less obvious—but equally serious—problems, such as groundwater or drinking water contamination. Recent estimates of the future size of the Superfund workload have differed. In congressional testimony in February 1994, EPA forecast the smallest increase—1,700 new sites. In a report dated January 1994, CBO predicted 3,300 new sites through 2027, although it said that a wide range of additions was possible. 
EPA’s Inspector General in a January 1994 report estimated that 3,000 of the 6,467 sites in the agency’s evaluation backlog could be added to the Superfund. In February 1994 congressional testimony, EPA’s Administrator testified that the Superfund National Priorities List could grow to about 3,000 federal and nonfederal sites, or roughly 1,700 more sites than are currently on the list. According to EPA officials, this estimate was based on an internal agency analysis prepared by the Office of Emergency and Remedial Response. The Office prepared low, medium, and high estimates, and EPA based its testimony on the medium estimate. (See app. III for a detailed breakdown of EPA’s estimates.) EPA’s estimates treated current and future inventory sites differently. In EPA’s medium estimate, 6.5 percent of the currently reported sites were estimated to become Superfund sites compared with 3.5 percent of the sites that will be reported in the future. The inventory of reported sites was estimated to grow by 20,500 sites by the year 2020, or 54 percent more than at present. The estimate projected that the number of sites added to the inventory each year would decline from 1,500 sites in fiscal years 1994 through 1999 to 500 sites in fiscal years 2010 through 2019. EPA officials said that they based the decline on less state reporting, not on the existence of fewer sites that could be reported. CBO’s estimate of potential future Superfund additions was developed in two parts. (See app. V.) First, CBO estimated the number of sites that would be reported to EPA’s inventory of potential hazardous waste sites by developing trend lines based on the number of sites reported from 1981 to 1992. Because of the data’s variability, CBO developed a base case, or most probable scenario, and low- and high-case scenarios. In the base case, CBO estimated that 25,394 sites would be added to the inventory by the year 2027. This estimate was about 5,000 sites higher than EPA’s medium estimate. 
In the low and high cases, CBO estimated that 15,151 and 50,000 sites, respectively, would be added. Second, to determine the percentage of reported sites that would ultimately be placed on the priorities list, CBO relied on EPA staff’s opinion since, according to CBO’s report, usable site evaluation data were not available. When asked by CBO, EPA staff estimated that between 5 and 10 percent of all future inventory sites would be placed on the priorities list. CBO chose 8 percent for its base-case estimate and applied this rate to current and future inventory sites. For its own medium forecast, EPA estimated that 6.5 percent of the current inventory and 3.5 percent of the sites added to the inventory in the future will be placed on the priorities list. CBO’s base-case estimate, after adjustment to eliminate federal sites, resulted in adding 3,300 more sites to the priorities list. The range of additional sites for the low- and high-case scenarios was between 1,100 and 6,600 sites. EPA’s Inspector General estimated that 3,136 sites in the evaluation backlog could move to the priorities list. This estimate was made as part of a study of EPA’s processing of these backlogged sites. At the time of the Inspector General’s review, EPA had evaluated only 942 of the 6,467 sites. To estimate the number of potential sites for the priorities list, the Inspector General determined the proportion of sites evaluated in each region that were found to warrant consideration for the priorities list. The Inspector General then applied these proportions to the total number of backlogged sites in each region and added the regional numbers. The Inspector General reduced the total to account for an estimated proportion of sites that drop out in the final decision process. More recent data suggest that the Inspector General’s estimate may be somewhat high. 
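The Inspector General's extrapolation method can be sketched as follows. The per-region figures are hypothetical stand-ins, since the report gives only the national totals:

```python
# Sketch of the Inspector General's extrapolation method (hypothetical data).
# For each region: the proportion of evaluated backlog sites found to warrant
# listing consideration is applied to that region's total backlog; the summed
# total is then reduced for the estimated dropout rate in the final decision
# process.

def estimate_additions(regions, final_dropout_rate):
    total = 0.0
    for evaluated, warranted, backlog in regions:
        total += (warranted / evaluated) * backlog
    return total * (1 - final_dropout_rate)

# Hypothetical regions as (sites evaluated, sites warranted, total backlog).
regions = [(120, 60, 900), (200, 110, 1500), (80, 48, 600)]
print(round(estimate_additions(regions, final_dropout_rate=0.2)))
```

The estimate is only as good as the per-region proportions, which is why the later fiscal year 1993 data (showing a drop from 52 to 28 percent of backlogged sites warranting consideration) suggested the 3,136-site figure was high.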
According to EPA’s site evaluation staff, the Inspector General’s estimate of 3,136 additional sites is high since it assumed that in the future, 52 percent of the sites in the backlog could move beyond the site inspection stage, the rate prevailing when the Office of Inspector General did its study. However, data for fiscal year 1993, available after the Inspector General completed the study, showed that the percentage of the backlogged sites warranting priorities list consideration had dropped to 28 percent.

The number of future Superfund sites cannot be predicted with certainty. However, data from an EPA study of potential U.S. hazardous waste sites and our own analysis indicate that, assuming no major restructuring of the program, EPA’s estimate of 1,700 additional future Superfund sites is conservative. The CBO estimate, especially the upper bounds of that estimate, may be a better predictor of potential program growth. Given the limited pace of site cleanup by the Superfund program to date, any of the increases in Superfund’s size discussed in this report may be difficult for the program to manage.

A September 1991 EPA analysis estimated that 58,000 sites could be added to the inventory in the future. When EPA made this estimate, it already had 34,618 sites in its inventory, for a combined total of 92,618 sites. This total is almost 6,000 sites more than CBO’s high-case scenario estimate for the number of sites that would be in the inventory by 2032 and 1-1/2 times as high as the upper-bound estimate by EPA for the size of the inventory by 2020. Both CBO and EPA based their estimates on the number of sites expected to be reported under current EPA and state policies, not on the number that could be reported. The 58,000-site estimate, on the other hand, is for sites that could be reported. The estimated 58,000 sites consisted of sites that were assessed as having a high or moderate hazard potential.
The estimate was developed from estimates for 12 individual industries provided by EPA divisions familiar with them. Each industry estimate was based on an analysis of data and judgment by EPA officials. Most of the sites were in one of the following categories: Resource Conservation and Recovery Act industrial process waste facilities, municipal solid waste landfills, off-site oil and gas waste management facilities, and large-quantity hazardous waste generators. EPA officials familiar with seven of the major categories, accounting for 93 percent of the 58,000 sites, told us that the results are still valid. The officials said that the study’s figures represent the best estimates of the potential number of sites that could be added to the inventory in the future, although one official believed that the number of treatment, storage, and disposal facilities was overstated by 2,000 sites. The officials said that in no case did an actual inventory of potential sites exist. Our analysis indicates that between 10 and 11 percent of the currently reported nonfederal sites could become Superfund sites. This percentage is greater than the 6.5 percent indicated in EPA’s medium estimate and is closer to CBO’s 10 percent high-case estimate. As of September 30, 1993, EPA had completed evaluation for 26,026 of the 35,782 nonfederal sites in its inventory. The remaining 9,756 sites were in various stages of evaluation: 930 sites were awaiting final listing decisions, 4,892 backlog sites were awaiting final evaluation, 2,373 sites were awaiting site inspection, and 1,561 sites were awaiting preliminary assessment. If 1993 screening rates for these categories, as described in appendix IV, were to continue into the future, 2,497 to 2,799 of the 9,756 sites could become Superfund sites. Adding this range to the 1,177 sites already on the priorities list would result in a total estimate of 3,674 to 3,976 Superfund sites, or 10 to 11 percent, of the 35,782 inventoried sites. 
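The arithmetic behind the 10 to 11 percent figure can be checked directly from the numbers above:

```python
# Reproducing the report's arithmetic for the share of inventoried
# nonfederal sites that could become Superfund sites.

inventory = 35_782       # nonfederal sites in EPA's inventory as of Sept. 30, 1993
already_listed = 1_177   # sites already on the priorities list
projected_low, projected_high = 2_497, 2_799  # range from 1993 screening rates

total_low = already_listed + projected_low    # 3,674
total_high = already_listed + projected_high  # 3,976

print(total_low, total_high)
print(round(100 * total_low / inventory), round(100 * total_high / inventory))  # 10 and 11 percent
```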
The Acting Deputy Director for EPA’s Hazardous Site Evaluation Division believed that the 1993 evaluation rates were a reasonable basis for forecasting future Superfund additions from the current inventory. We also recognize, however, that certain factors make estimates of the number of future Superfund sites subject to substantial uncertainties. First, the rate at which sites move through the assessment process onto the priorities list may change in the future, making projections based on past rates inaccurate. Also, proposed legislation to reauthorize Superfund, which has been considered by the Congress, contains provisions to encourage parties responsible for hazardous waste sites to clean them up outside of the regular Superfund program and to authorize states, in cooperation with EPA, to assume certain cleanup responsibilities. These changes could reduce the number of sites that EPA would have to manage in the Superfund program. Any of the estimates discussed in this report suggest that EPA will be challenged by its future Superfund workload. In the 14-year history of the program through July 1994, Superfund has completed the construction of remedies (such as the installation of groundwater pumps and filters) at 234 of the 1,300 federal and nonfederal Superfund sites. Two years ago, EPA estimated that 650 sites would reach the construction-completed stage by the year 2000. At these completion rates, it could take many decades for Superfund to clean up its current inventory and future additions to the inventory. Although EPA has recently developed new procedures to speed up the cleanup process, it is too early to tell what impact they will have on the overall pace of the program. As agreed with your offices, we did not obtain written agency comments on a draft of this report. However, we discussed the contents of this report with program officials from EPA’s Office of Emergency and Remedial Response (Superfund). 
EPA’s Acting Site Assessment Branch Chief said that the facts presented in this report were balanced, fair, and accurate. He also said that program changes under consideration by the Congress and EPA, such as proposals to increase the states’ cleanup role, could significantly reduce the number of sites to be added to the Superfund program. We conducted our work at EPA headquarters in Washington, D.C., and at its regional offices in Boston (Region I), Chicago (Region V), and Denver (Region VIII). We selected these regions because they presented a cross-section of Superfund activity and were geographically diverse. We obtained and reviewed recent reports and studies on the future size of the Superfund workload. We obtained and analyzed site inventory statistics on preliminary assessment and site inspection processing since program inception through the first quarter of fiscal year 1994. We interviewed EPA headquarters officials and program management officials in three EPA regional offices, as well as environmental protection officials in two states, about Superfund site discovery and evaluation. We reviewed the relevance and appropriateness of studies conducted by CBO, EPA, and EPA’s Office of Inspector General and interviewed EPA program officials on the status of major site categories that could affect the Superfund site inventory. We performed our work in accordance with generally accepted government auditing standards between August 1993 and July 1994. As arranged with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to other appropriate congressional committees; the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. 
Please contact me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix VI. (Major contributor: Bruce Skud, Senior Evaluator. Appendix tables, flattened in this copy, presented counts of inventory sites evaluated, percentages of sites that could be listed, and estimated priorities list sizes.)
| Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) Superfund Program, focusing on: (1) trends in the number of reported hazardous waste sites; (2) EPA evaluation of potential contamination at these sites; and (3) recent estimates of the program's future growth. GAO found that: (1) the number of sites reported each year has steadily declined since 1985, primarily because the states believe that they can handle cleanups more efficiently and prefer to do the cleanups themselves; (2) states generally report sites that present challenging enforcement or cleanup problems; (3) the percentage of seriously contaminated sites among those reported has remained constant at 43 percent over the past 10 years; (4) EPA officials believe that contamination at newly discovered sites is not less severe, just less obvious; (5) EPA believes 1,700 new federal and nonfederal sites could be added to the National Priorities List through the year 2020; (6) the Congressional Budget Office believes that 3,300 new nonfederal sites could be added to the list through the year 2027; (7) the future Superfund workload could be higher than EPA estimated; and (8) any additions to the Superfund program will be difficult for EPA to manage. |
FAA conducted a series of analyses to identify the most cost-effective way to use the radar data from Grand Junction. On the basis of the results of a 1992 study, FAA decided that building a TRACON facility at Grand Junction was less costly than remoting the radar signal from Grand Junction to Denver. However, in May 1994 FAA conducted another cost analysis that factored in the use of a new technology for remoting radar signals known as video compression. The results of this analysis showed that it would be less costly to remote the radar signal from Grand Junction to Denver, and in August 1994, FAA announced its choice of the less costly option. FAA’s decision to remote the radar signal also means that the tower at Grand Junction will be operated by a contractor. FAA’s decision to provide approach guidance to aircraft through the Denver TRACON dictates that the Grand Junction tower be classified as a level-1 tower that operates using visual flight rules (VFR). In 1993, the House and Senate Appropriations Committees directed FAA to contract out all level-1 VFR towers to the private sector. In March 1995, Grand Junction community leaders and local air traffic controllers met with FAA to outline their concerns about FAA’s analyses and conclusions. The major concerns of the controllers and the city’s representatives were (1) the accuracy and completeness of the cost comparisons between the two options and (2) the safety and efficiency implications of remoting radar signals and contracting out a tower’s operations. FAA agreed to conduct a new study that would consider two options: (1) a local option that would establish either a TRACON or a TRACAB at Grand Junction or (2) a long-distance option that would remote the radar signal to Denver. The new study found once again that remoting the radar signal to Denver was the most cost-effective option and that it would not compromise the system’s safety and efficiency. 
FAA’s 1995 analysis of the costs of establishing a new TRACAB facility at Grand Junction or remoting the radar data to Denver was based on a comparison of the costs for facilities and equipment, telecommunications, staffing, and relocating staff over the 20-year life cycle of the project. FAA estimated that the cost of remoting the signal to the Denver TRACON would be about $9.4 million, while the cost of establishing a TRACAB in Grand Junction would be about $12.8 million, a difference of about $3.4 million. FAA also estimated that an additional $2.5 million would be saved over the same 20-year period by contracting out the tower at Grand Junction. According to FAA’s estimates, these two actions would save about $5.9 million. To verify whether FAA chose the most cost-effective option for providing radar approach control to the Grand Junction airport, we performed an independent cost analysis of FAA’s 1995 study. While we agree that FAA’s analysis identified the most cost-effective option, FAA did not take into account three factors that, in our opinion, are valid in evaluating the options studied. When these factors are considered, FAA’s total projected savings attributable to remoting and contracting out the tower operation at Grand Junction are reduced by about $500,000, from $5.9 million to $5.4 million. The principal findings from our analysis are summarized below. (See app. I for a detailed presentation of our analysis.) FAA did not include a cost for establishing telephone lines between Grand Junction and Denver under the remoting option. The overlooked annual cost of these telephone lines was $107,500, or $853,000 in 1995 dollars when discounted over the 20-year life cycle of the project. We revised FAA’s estimated total telecommunications cost under the remote option upward by $853,000, from $618,000 to $1,470,000. 
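The report does not state the discount rate used in the life-cycle analysis; the following is only a minimal sketch of the present-value arithmetic by which a constant annual cost, such as the overlooked telephone-line charge, is converted into a discounted 20-year figure. The 11 percent rate and the end-of-year payment convention shown are assumptions chosen for illustration, not parameters taken from FAA's study.

```python
def present_value(annual_cost: float, rate: float, years: int) -> float:
    """Discount a constant annual cost over a project's life cycle.

    Payments are assumed to fall at the end of each year; the actual
    discounting convention and rate FAA used are not stated in the report.
    """
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

annual_telephone_cost = 107_500   # overlooked annual cost (1995 dollars)
rate = 0.11                       # assumed discount rate, illustration only
years = 20                        # project life cycle

pv = present_value(annual_telephone_cost, rate, years)
# Discounting shrinks the undiscounted 20-year total ($2.15 million)
# to a present value far closer to the report's $853,000 figure.
print(f"Discounted 20-year cost: ${pv:,.0f}")
```

The point of the sketch is simply that a recurring cost omitted from a life-cycle comparison understates one option's total by the discounted sum of every year's payment, not just a single year's charge.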
FAA overestimated the cost of staffing under each of the options studied because the agency used authorized staffing levels—even though the positions were often unfilled. Using staffing levels that more closely approximate actual levels in the Northwest Mountain Region, we estimate that the annual staffing cost would be lower by $147,600 (about $1.82 million over 20 years) for the TRACAB option and by $168,900 (about $2.091 million over 20 years) for the remote option. The net effect of these changes increases the savings attributable to remoting by about $271,000 over 20 years. Moreover, when using staffing levels that more closely approximate actual levels in the field, we estimate that the TRACAB option’s staff relocation and training costs would be lower and further reduce the savings attributable to the remote option by $174,000. FAA underestimated the savings associated with contracting out the air traffic control functions at Grand Junction. We estimate that contracting out saves about $2.7 million—or about $218,000 more than FAA estimates—over 20 years after factoring in FAA’s previous experience with contractor-operated towers and the additional costs of relocating the Grand Junction controllers who choose not to work for the contractor. The representatives of the city of Grand Junction expressed concern that by remoting the radar signal to Denver and by contracting out a tower’s operation, FAA jeopardizes the safety and the efficiency of the air traffic control system at the Grand Junction airport. Specifically, the representatives questioned the implications for safety and efficiency of transmitting radar data over 250 miles and having Denver controllers provide Grand Junction’s radar approach control. The city’s representatives also questioned the safety and efficiency implications of contracting out Grand Junction’s tower. 
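The adjustments described above can be reconciled arithmetically. The sketch below uses only the dollar figures reported in this section, with signs showing each adjustment's effect on the projected savings attributable to remoting and contracting out the tower; the grouping of the line items is ours.

```python
# FAA's projected 20-year savings from remoting the radar signal to
# Denver and contracting out the Grand Junction tower (1995 dollars).
faa_projected_savings = 5_900_000

# GAO's adjustments, as reported in this section.
adjustments = {
    "overlooked telephone-line costs (remote option)": -853_000,
    "net staffing-cost revision (both options)":       +271_000,
    "lower TRACAB relocation and training costs":      -174_000,
    "higher savings from contracting out the tower":   +218_000,
}

gao_revised_savings = faa_projected_savings + sum(adjustments.values())
# The net reduction is $538,000, i.e., "about $500,000," leaving revised
# savings of $5,362,000, consistent with the report's $5.4 million figure.
print(f"Revised savings: ${gao_revised_savings:,}")
```

Seen this way, the telephone-line omission and the relocation and training revision work against the remote option, while the staffing and contracting revisions work in its favor, and the remote option remains the less costly choice after all four adjustments.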
We discussed remoting and considerations about the safety and efficiency of a contractor-operated tower with officials at FAA headquarters and at FAA’s Northwest Mountain Region, who have jurisdiction over the Grand Junction and Denver areas. We also discussed these issues with officials from major aviation-related associations. According to the air traffic officials in FAA’s Northwest Mountain Region, the agency has successfully transmitted radar data hundreds of miles to its enroute centers for the past 30 years without compromising or affecting the system’s safety. Because FAA’s ability to transmit radar data over 250 miles of mountainous terrain was a concern to the Grand Junction representatives, we reviewed FAA’s information on the reliability and availability of radar data transmissions. The information showed that the reliability and availability of the transmissions averaged 99.98 percent nationally over the past 5 years and that they were unaffected by mountainous terrain. According to FAA and aviation association officials, a controller’s physical location is not a safety issue, and controllers routinely control air traffic safely without having visual contact with other air traffic controllers. The critical issue is that information be exchanged in a timely manner, not that two individuals be in visual proximity. Moreover, FAA officials told us that when normal modes of communication are disrupted, the agency adjusts its operating procedures—such as transferring the control of air space to an enroute center or using nonradar approaches—to ensure the timely flow of information. The city’s representatives believed that remoting caused traffic delays at the Grand Junction airport because Denver controllers were not trained to manage the airport’s air traffic. 
According to FAA Air Traffic officials in the Northwest Mountain Region, Grand Junction incurred initial start-up problems similar to those that other facilities incurred when FAA began to remote radar data. To eliminate these problems, FAA provided refresher briefings to the Denver controllers on managing Grand Junction’s air traffic. Grand Junction air traffic controllers told us that the Denver controllers are now efficiently managing this air traffic and delays are no longer a problem. According to the aviation association officials, their members had not raised any concerns about efficiency associated with FAA’s remoting of radar data. In connection with private-sector controllers under contract to FAA, the manager of FAA’s contract tower program told us that contract controllers are as well trained as FAA controllers. He provided documentation showing that contract controllers average 18 years of experience. The program manager also told us that contract controllers are certified by FAA and operate under the same regulations as FAA controllers. Additionally, officials representing various aviation associations told us that their members were provided with safe and efficient services by both FAA-operated and contractor-operated towers. As a result, these officials told us that they had no reason to question the safety and efficiency of FAA’s contract tower program. The concerns raised by representatives of the city of Grand Junction have also been raised by citizens’ groups in other communities where FAA has proposed to consolidate facilities and contract out a facility’s operation. That other communities had similar concerns leads us to believe that FAA can do a better job of communicating the reasons for its future decisions on consolidating facilities. 
The issues and concerns raised by the city’s representatives—the reliability of cost data and the safety and efficiency of the airport—were similar to those raised in 1994 by a Yakima, Washington, citizens’ group that also questioned an FAA remoting decision. In both the Grand Junction and the Yakima projects, FAA took a relatively ad hoc approach in deciding whether to remote radar data. In both cases, our review showed that while FAA chose the most cost-effective option, it did not include all relevant cost factors in its savings computation and did little to communicate the rationale for its decision to the affected communities, thereby contributing to subsequent misperceptions by community representatives. We did not find any standard FAA guidance for officials to follow or analytical model for them to use when deciding what costs to include, how to compute those costs, and what documentation to maintain when analyzing candidate facilities for consolidation. In June 1996, FAA issued a report that identifies the types of information to be considered in deciding whether to establish or consolidate TRACON facilities; however, the report does not specify how the various factors will be computed in the decision-making process. In the absence of standard guidance or an analytical model, FAA patterned its Grand Junction studies after earlier remoting efforts. Officials in FAA’s Air Traffic Plans and Requirements Program said that the agency uses this approach because each potential consolidation and remoting situation is unique. However, this approach has led to the agency’s omitting certain telecommunications costs and not reflecting the more realistic scenarios for staffing facilities and has raised concerns in the affected communities. These types of process problems can have the effect of undermining the agency’s credibility, discouraging the community from accepting FAA’s decision, and delaying implementation plans and the realization of projected cost savings. 
While FAA chose the most cost-effective way to handle radar data for Grand Junction and Yakima, in both instances it overlooked relevant cost factors. Furthermore, in both cases FAA’s decisions were challenged by the affected communities, thereby contributing to delays in implementing the decisions. A more structured decision-making process, based on formal guidance and an analytical model, could ensure that FAA considers all relevant factors when making a remoting decision. A more structured decision-making process could also help FAA defend its decisions to communities that protest the closure of an FAA-staffed facility. As FAA continues to remote radar data and consolidate facilities, it is to FAA’s advantage to develop and implement a more structured decision-making process in conjunction with key stakeholders. We recommend that the Secretary of Transportation direct the Administrator, Federal Aviation Administration, to develop formal guidance and an analytical model for making its remoting decisions. The guidance should outline what costs to include, how those costs should be computed, and what documentation is required to support the analysis. It should also provide for early and continuous involvement of the major stakeholders, especially the affected communities. We provided a draft of this report to the Department of Transportation for review and comment. We met with officials of the Department, including FAA’s Program Director for Air Traffic Plans and Requirements Program, who agreed with the draft report’s conclusions and recommendation. The Program Director said that FAA does not normally conduct the level of analysis we recommended because of the wide difference in costs between remoting radar data and establishing a local terminal radar approach control facility. Nevertheless, FAA recognized that improvements can be made in its decision-making process. 
In our view, FAA’s June 1996 report that identifies the types of information to be considered when deciding whether to establish or consolidate TRACON facilities is a step in the right direction for improving its decision-making process. However, the report does not specify how the various factors will be computed in the decision-making process. We interviewed FAA officials in Washington, D.C., and the Northwest Mountain Region and obtained specific documentation on the cost of each option and the associated safety information. To verify the figures FAA used in its most recent cost analysis, we conducted an independent cost analysis. We also met with representatives of the city of Grand Junction and officials from major aviation associations to discuss their concerns and obtain their opinions on the potential operational and safety impacts associated with remoting and contracting out the Grand Junction tower. We discussed our findings with FAA officials, including the Program Director, Air Traffic Plans and Requirements Program. We performed this review from October 1995 through October 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Transportation; the Administrator, Federal Aviation Administration; and representatives of the city of Grand Junction. We will also make copies available to others on request. Please call me at (202) 512-4803 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix II. [Table: GAO's revised 20-year cost comparison of the TRACAB and remote options, showing costs and staffing positions by category and the resulting cost savings for the remote option, including about $2.7 million in savings from the contract tower program; table notes appear on the next page. The costs for telecommunications, salary, and savings from the contract tower program were discounted over 20 years.] 
We believe $50,000 per move is reasonable because FAA now projects $56,200 as the average cost per move for its Northwest Mountain Region. Because we eliminated one technician under the TRACAB option, we reduced the cost of training by $23,900. FAA training academy officials told us that this is the cost for training one technician. Linda S. Garcia, Dana E. Greenberg, Robert E. Levin, Peter G. Maristch. | Pursuant to a congressional request, GAO reviewed: (1) whether the Federal Aviation Administration (FAA) chose the most cost-effective option for handling radar-based air traffic control activities at the Grand Junction, Colorado, airport; (2) whether the safety and efficiency of the air traffic control system would be compromised by remoting radar data and contracting out tower operations at Grand Junction; and (3) what can be done to improve the FAA process for determining when and where to remote radar data. 
GAO found that: (1) it agreed with the FAA determination that remoting the Grand Junction radar signal to a terminal radar approach control (TRACON) facility in Denver is the most cost-effective option for handling radar data from the site; (2) the FAA 20-year projected savings attributable to the remote option should be reduced by about $500,000, from $5.9 million to $5.4 million, since FAA overlooked certain telecommunications costs and did not utilize more realistic staffing scenarios; (3) GAO analysis of the available data disclosed no valid concerns about the safety and efficiency of remoting radar data or contracting out a tower's operation; (4) the FAA process for deciding when and where to remote radar signals was generally sound, but relatively ad hoc; and (5) a formal methodology for making such decisions would have helped FAA to ensure that all relevant factors were properly considered and communicate to the affected communities how its decision was made. |
The Justice Assistance Act of 1984 (P.L. 98-473) created OJP to provide federal leadership in developing the nation’s capacity to prevent and control crime, administer justice, and assist crime victims. OJP carries out its responsibilities by providing grants to various organizations, including state and local governments, Indian tribal governments, nonprofit organizations, universities, and private foundations. OJP comprises five bureaus, including BJA, and seven program offices, including VAWO. In fulfilling its mission, BJA provides grants for programs and for training and technical assistance to combat violent and drug-related crime and help improve the criminal justice system. VAWO administers grants to help prevent and stop violence against women, including domestic violence, sexual assault, and stalking. During fiscal years 1995 through 2001, BJA and VAWO awarded about $943 million to fund 700 Byrne and 1,264 VAWO discretionary grants. One of BJA’s major grant programs is the Byrne Program. BJA administers the Byrne program, just as its counterpart, VAWO, administers its programs. Under the Byrne discretionary grants program, BJA provides federal financial assistance to grantees for educational and training programs for criminal justice personnel; for technical assistance to state and local units of government; and for projects that are replicable in more than one jurisdiction nationwide. During fiscal years 1995 through 2001, Byrne discretionary grant programs received appropriations of about $385 million. VAWO was created in 1995 to carry out certain programs created under the Violence Against Women Act of 1994. The Victims of Trafficking and Violence Prevention Act of 2000 reauthorized most of the existing VAWO programs and added new programs. 
VAWO programs seek to improve criminal justice system responses to domestic violence, sexual assault, and stalking by providing support for law enforcement, prosecution, courts, and victim advocacy programs across the country. During fiscal years 1995 through 2001, VAWO’s five discretionary grant programs that were subject to program evaluation were (1) STOP (Services, Training, Officers, and Prosecutors) Violence Against Indian Women Discretionary Grants, (2) Grants to Encourage Arrest Policies, (3) Rural Domestic Violence and Child Victimization Enforcement Grants, (4) Domestic Violence Victims’ Civil Legal Assistance Grants, and (5) Grants to Combat Violent Crimes Against Women on Campuses. During fiscal years 1995 through 2001, about $505 million was appropriated to these discretionary grant programs. As already mentioned, NIJ is the principal research and development agency within OJP, and its duties include developing, conducting, directing, and supervising Byrne and VAWO discretionary grant program evaluations. Under 42 U.S.C. 3766, NIJ is required to “conduct a reasonable number of comprehensive evaluations” of the Byrne discretionary grant program. In selecting programs for review under section 3766, NIJ is to consider new and innovative approaches, program costs, potential for replication in other areas, and the extent of public awareness and community involvement. According to NIJ officials, the implementation of various types of evaluations, including process and impact evaluations, fulfills this legislative requirement. Although legislation creating VAWO does not require evaluations of the VAWO discretionary grant programs, Justice’s annual appropriations for VAWO during fiscal years 1998 through 2002 included monies for NIJ research and evaluations of violence against women. In addition, Justice has promulgated regulations requiring that NIJ conduct national evaluations of two of VAWO’s discretionary grant programs. 
As with the Byrne discretionary programs, NIJ is not required by statute or Justice regulation to conduct specific types of program evaluations, such as impact or process evaluations. The Director of NIJ is responsible for making the final decision on which Byrne and VAWO discretionary grant programs to evaluate; this decision is based on the work of NIJ staff in coordination with Byrne or VAWO program officials. Once the decision has been made to evaluate a particular program, NIJ issues a solicitation for proposals for grant funding from potential evaluators. When applications or proposals are received, an external peer review panel comprising members of the research and relevant practitioner communities is convened. Peer review panels identify the strengths, weaknesses, and potential methodologies to be derived from competing proposals. When developing their consensus reviews, peer review panels are to consider the quality and technical merit of the proposal; the likelihood that grant objectives will be met; the capabilities, demonstrated productivity, and experience of the evaluators; and budget constraints. Each written consensus review is reviewed and discussed with partnership agency representatives (e.g., staff from BJA or VAWO). These internal staff reviews and discussions are led by NIJ’s Director of the Office of Research and Evaluation who then presents the peer review consensus reviews, along with agency and partner agency input, to the NIJ Director for consideration and final grant award decisions. The NIJ Director makes the final decision regarding which application to fund. To meet our objectives, we conducted our work at OJP, BJA, VAWO, and NIJ headquarters in Washington, D.C. We reviewed applicable laws and regulations, guidelines, reports, and testimony associated with Byrne and VAWO discretionary grant programs and evaluation activities. 
In addition, we interviewed responsible OJP, NIJ, BJA, and VAWO officials regarding program evaluations of discretionary grants. As agreed with your offices, we focused on program evaluation activities associated with the Byrne and VAWO discretionary grant programs. In particular, we focused on the program evaluations of discretionary grants that were funded during fiscal years 1995 through 2001. To address our first objective, regarding the number, type, status of completion, and award amount of Byrne and VAWO discretionary grant program evaluations, we interviewed NIJ, BJA, and VAWO officials and obtained information on Byrne and VAWO discretionary grant programs and program evaluations. Because NIJ is responsible for carrying out program evaluations of Byrne and VAWO discretionary grant programs, we also obtained and analyzed NIJ data about specific Byrne and VAWO discretionary grant program evaluations, including information on the number of evaluations as well as the type, cost, source of funding, and stages of implementation of each evaluation for fiscal years 1995 through 2001. We did not independently verify the accuracy or completeness of the data that NIJ provided. To address the second objective, regarding the methodological rigor of the impact evaluation studies of Byrne and VAWO discretionary grant programs during fiscal years 1995 through 2001, we initially identified the impact evaluations from the universe of program evaluations specified by NIJ. We excluded from our analysis any impact evaluations that were in the formative stage of development—that is, the application had been awarded but the methodological design was not yet fully developed. As a result, we reviewed four program evaluations. 
For the four impact evaluations that we reviewed, we asked NIJ to provide any documentation relevant to the design and implementation of the impact evaluation methodologies, such as the application solicitation, the grantee’s initial and supplemental applications, progress notes, interim reports, requested methodological changes, and any final reports that may have become available during the data collection period. We also provided NIJ with a list of methodological issues to be considered in our review and asked NIJ to submit any additional documentation that addressed these issues. We used a data collection instrument to obtain information systematically about each program being evaluated and about the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined such factors as whether evaluation data were collected before and after program implementation; how program effects were isolated (i.e., the use of nonprogram participant comparison groups or statistical controls); and the appropriateness of sampling and outcome measures. Two of our senior social scientists with training and experience in evaluation research and methodology separately reviewed the evaluation documents and developed their own assessments before meeting jointly to discuss the findings and implications. This was done to promote a grant evaluation review process that was both independent and objective. To obtain information on the approaches that BJA, VAWO, and NIJ used to disseminate program evaluation results, we requested and reviewed, if available, relevant handbooks and guidelines on information dissemination, including, for example, NIJ’s guidelines. We also reviewed BJA, VAWO, and NIJ’s available print and electronic products related to their proven programs and evaluations, including two NIJ publications about Byrne discretionary programs and their evaluation methodologies and results. 
We conducted our work between February 2001 and December 2001 in accordance with generally accepted government auditing standards. We requested comments from Justice on a draft of this report in January 2002. The comments are discussed near the end of this letter and are reprinted as appendix III. During fiscal years 1995 through 2001, NIJ awarded about $6 million to carry out five Byrne and five VAWO discretionary grant program evaluations. NIJ awarded evaluation grants mostly using funds transferred from BJA and VAWO. Specifically, of the approximately $1.9 million awarded for one impact and four process evaluations of the Byrne discretionary program, NIJ contributed about $299,000 (16 percent) and BJA contributed about $1.6 million (84 percent). VAWO provided all of the funding (about $4 million) to NIJ for all program evaluations of five VAWO discretionary grant programs. According to NIJ, the five VAWO program evaluations included both impact and process evaluations. Our review of information provided by NIJ showed that 6 of the 10 program evaluations—all 5 VAWO evaluations and 1 Byrne evaluation—included impact evaluations. The remaining four Byrne evaluations were exclusively process evaluations that measured the extent to which the programs were working as intended. As of December 2001, only one of these evaluations, the impact evaluation of the Byrne CAR Program, had been completed. The remaining evaluations were in various stages of implementation. Table 1 lists each of the five Byrne program evaluations and shows whether it was a process or an impact evaluation, its stage of implementation, the amount awarded during fiscal years 1995 through 2001, and the total amount awarded since the evaluation was funded. Table 2 lists each of the five VAWO program evaluations, each of which was both a process and an impact evaluation, and shows its stage of implementation and the amount awarded during fiscal years 1995 through 2001, which is the total amount awarded. 
Our review showed that methodological problems have adversely affected three of the four impact evaluations that have progressed beyond the formative stage. All three VAWO evaluations that we reviewed demonstrated a variety of methodological limitations, raising concerns as to whether the evaluations will produce definitive results. The one Byrne evaluation was well designed and used appropriate data collection and analytic methods. We recognize that impact evaluations, such as the type that NIJ is managing, can encounter difficult design and implementation issues. In the three VAWO evaluations that we reviewed, program variation across sites has added to the complexity of designing the evaluations. Sites could not be shown to be representative of the programs or of particular elements of these programs, thereby limiting the ability to generalize results, and the lack of comparison groups hinders the ability to minimize the effects of factors external to the program. Furthermore, data collection and analytical problems compromise the ability of evaluators to draw appropriate conclusions from the results. In addition, peer review panels found methodological problems in two of the three VAWO evaluations that we considered. The four program evaluations are multiyear, multisite impact evaluations. Some program evaluations used a sample of grants, while others used the entire universe of grants. For example, the evaluation of the Grants to Encourage Arrest Policies Program used 6 of the original 130 grantee sites. In contrast, in the Byrne Children at Risk impact evaluation, all five sites participated. As of December 2001, NIJ had already received the impact findings from the Byrne Children at Risk Program evaluation but had not received impact findings from the VAWO discretionary grant program evaluations. 
An impact evaluation is an inherently difficult task, since the objective is to isolate the effects of a particular program or factor from all other potential contributing programs or factors that could also effect change. Given that the Byrne and VAWO programs are operating in an ever-changing, complex environment, measuring the impact of these specific Byrne and VAWO programs can be arduous. For example, in the evaluation of VAWO's Rural Domestic Violence Program, the evaluator's responsibility is to demonstrate how the program affected the lives of domestic violence victims and the criminal justice system. Several other programs or factors besides the Rural Domestic Violence Program may account for all or part of the observed changes in victims' lives and the criminal justice system (e.g., a co-occurring program with similar objectives, new legislation, a local economic downturn, an alcohol abuse treatment program). Distinguishing the effects of the Rural Domestic Violence Program requires use of a rigorous methodological design.

All three VAWO programs permitted their grantees broad flexibility in the development of their projects to match the needs of their local communities. According to the Assistant Attorney General, this variation in projects is consistent with the intent of the programs' authorizing legislation. We recognize that the authorizing legislation provides VAWO flexibility in designing these programs. Although this flexibility may make sense from a program perspective, the resulting project variation makes it more difficult to design and implement a definitive impact evaluation of the program. Instead of assessing a single, homogeneous program with multiple grantees, the evaluation must assess multiple configurations of a program, thereby making it difficult to generalize about the entire program.
Although all of the grantees' projects under each program being evaluated are intended to achieve the same or similar goals, an aggregate analysis could mask the differences in effectiveness among individual projects and thus not yield information about which configurations of projects work and which do not. The three VAWO programs exemplify this situation. The Arrest Policies Program provided grantees with the flexibility to develop their respective projects within six purpose areas: implementing mandatory arrest or proarrest programs and policies in police departments, tracking domestic violence cases, centralizing and coordinating police domestic violence operations, coordinating computer tracking systems, strengthening legal advocacy services, and educating judges and others about how to handle domestic violence cases. Likewise, the STOP Grants Program encouraged tribal governments to develop and implement culture-specific strategies for responding to violent crimes against Indian women and to provide appropriate services for victims of domestic abuse, sexual assault, and stalking. Finally, the Rural Domestic Violence Program was designed to give sites the flexibility to develop projects, based on need, addressing early identification of, intervention in, and prevention of woman battering and child victimization; increased victim safety and access to services; enhanced investigation and prosecution of crimes of domestic violence; and innovative, comprehensive strategies for fostering community awareness and prevention of domestic abuse. Because participating grant sites emphasized different project configurations, the resulting evaluation may not provide information that could be generalized to a broader implementation of the program.

The sites participating in the three VAWO evaluations were not shown to be representative of their programs.
Various techniques are available to help evaluators choose representative sites and representative participants within those sites. Random selection of sites and of participants within sites is ideal, but when this is not feasible, other purposeful sampling methods can be used to help approximate an appropriate sample (e.g., stratification, in which the sample is drawn in proportions that reflect the larger population). At a minimum, purposeful selection can ensure the inclusion of a range of relevant sites.

As discussed earlier, in the case of the Arrest Policies Program, six purpose areas were identified in the grant solicitation. The six grantees chosen for participation in the evaluation were not, however, selected on the basis of their representativeness of the six purpose areas or the program as a whole. Rather, they were selected on the basis of factors related solely to program “stability”; that is, they were considered likely to receive local funding after the conclusion of federal grant funding, and key personnel would continue to participate in the coordinated program effort. Similarly, the 10 Rural Domestic Violence impact evaluation grantees were not selected for participation on the basis of program representativeness or the specific purpose areas discussed earlier. Rather, sites were selected by the grant evaluator on the basis of “feasibility”; specifically, whether the site would be among those participants equipped to conduct an evaluation. Similarly, the STOP Violence Against Indian Women Program evaluation used 3 of the original 14 project sites for a longitudinal study; these were not shown to be representative of the sites in the overall program. For another phase of the evaluation, the principal investigator indicated that grantee sites were selected to be geographically representative of American Indian communities.
While this methodology provides for inclusion of a diversity of Indian tribes in the sample from across the country, geography as a sole criterion does not guarantee representativeness in relation to many other factors.

Each of the three VAWO evaluations was designed without comparison groups—a factor that hinders the evaluator's ability to isolate and minimize external factors that could influence the results of the study. Use of comparison groups is a standard practice employed by evaluators to help determine whether differences between baseline and follow-up results are due to the program under consideration or to some other programs or external factors. For example, as we reported in 1997, to determine whether a drug court program has been effective in reducing criminal recidivism and drug relapse, it is not sufficient to merely determine whether those participating in the drug court program show changes in recidivism and relapse rates. Changes in recidivism and relapse variables between baseline and program completion could be due to other external factors, irrespective of the drug court program (e.g., the state may have developed harsher sentencing procedures for those failing to meet drug court objectives). If, however, the drug court participant group is matched at baseline against a “comparison group” of individuals who are experiencing similar life circumstances but who do not qualify for drug court participation (e.g., because of area of residence), then the comparison group can help isolate the effects of the drug court program. Contrasting the two groups in relation to recidivism and relapse can provide an approximate measure of the program's impact.

All three VAWO program impact evaluations lacked comparison groups. One issue addressed in the Arrest Policies Program evaluation, for example, was the impact of the program on the safety and protection of the domestic violence victim.
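The drug court contrast described above amounts to a difference-in-differences computation: the change in the participant group's outcome between baseline and follow-up, minus the corresponding change in the matched comparison group. A minimal sketch follows; the figures are invented for illustration and are not drawn from any of the evaluations discussed in this report.

```python
def difference_in_differences(program_before, program_after,
                              comparison_before, comparison_after):
    """Estimate program impact as the change in the program group's outcome
    minus the change in the comparison group's outcome. Subtracting the
    comparison group's change nets out external factors (e.g., new
    sentencing laws) that affect both groups alike."""
    program_change = program_after - program_before
    comparison_change = comparison_after - comparison_before
    return program_change - comparison_change

# Hypothetical recidivism rates (fraction re-arrested) at baseline
# and follow-up for the two matched groups.
impact = difference_in_differences(
    program_before=0.40, program_after=0.25,       # drug court participants
    comparison_before=0.42, comparison_after=0.37  # matched non-participants
)
print(f"Estimated program impact on recidivism: {impact:+.2f}")
# A negative value suggests the program reduced recidivism beyond the
# change observed in the comparison group.
```

Without the comparison group, the participant group's 15-point drop would be attributed entirely to the program; the comparison group's 5-point drop reveals how much of that change reflects external factors.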
The absence of a comparison group, however, makes it difficult to firmly conclude that change in the safety and protection of participating domestic abuse victims is due to the Arrest Policies Program and not to some other external factors operating in the environment (e.g., economic changes, nonfederal programs such as safe houses for domestically abused women, and church-run support programs). Instead of using comparison groups, the Arrest Policies Program evaluation sought to eliminate potential competing external factors by collecting and analyzing extensive historical and interview data from subjects and by conducting cross-site comparisons; the latter method proved unfeasible.

The STOP Violence Against Indian Women Discretionary Grant Program has sought, in part, to reduce violent crimes against Indian women by changing professional staff attitudes and behaviors. To do this, some grantees created and developed domestic violence training services for professional staff participating in site activities. Without comparison groups, however, assessing the effect of the STOP training programs is difficult. Attitudes and behaviors may change for myriad reasons unrelated to professional training development initiatives. If a treatment group of professional staff receiving the STOP training had been matched with a comparison group of professional staff that was similar in all ways except receipt of training, there would be greater confidence that positive change could be attributed to the STOP Program. Similarly, the lack of comparison groups in the Rural Domestic Violence evaluation makes it difficult to conclude that a reduction in violence against women and children in rural areas can be attributed entirely, or in part, to the Rural Domestic Violence Program. Other external factors may be operating.

All three VAWO impact evaluations involved data collection and analytical problems that may affect the validity of the findings and conclusions.
For example, we received documentation from NIJ on the STOP Grant Program for Reducing Violence Against Indian Women showing that only 43 percent of 127 grantees returned a mail survey. In addition, only 25 percent of 127 tribes provided victim outcome data on homicide and hospitalization rates—far less than the percentage needed to draw broad-based conclusions about the intended goal of assessing victim well-being. In the Arrest Policies evaluation, NIJ reported that the evaluators experienced difficulty in collecting pre-grant baseline data from multiple sites and that the quality of the data was often inadequate, which hindered their ability to statistically analyze change over time. In addition, evaluators were hindered in several work areas by the lack of automated data systems; data were missing, lost, or unavailable; and the ability to conduct detailed analyses of the outcome data was sometimes limited. For the Rural Domestic Violence evaluation, evaluators proposed using some variables (e.g., number and type of awareness messages disseminated to the community each month, identification of barriers to meeting the needs of women and children, and number of police officers who complete a training program on domestic violence) that are normally considered to relate more to a process evaluation than to an impact evaluation. NIJ noted that outcome measurement indicators varied by site, complicating the ability to draw generalizations. NIJ further indicated that the evaluation team did not collect baseline data prior to the start of the program, making it difficult to identify change resulting from the program.

NIJ does not require applicants to use particular evaluation methodologies. Instead, NIJ employs peer review committees in deciding which evaluation proposals to fund.
The peer review committees expressed concerns about two of the three VAWO program evaluation proposals (i.e., those for the Arrest Policies and Rural Domestic Violence programs) that were subsequently funded by NIJ. Whereas NIJ funded the Arrest Policies evaluation as a grant, it funded the Rural Domestic Violence evaluation as a cooperative agreement so that NIJ could be substantially involved in conducting the evaluation.

A peer review panel and NIJ raised several concerns about the Arrest Policies Program evaluation proposal. These concerns included issues related to site selection, victim interviewee selection and retention in the sample, and the need for additional impact measures and control variables. The grant applicant's responses to these issues did not remove concerns about the methodological rigor of the application, thus calling into question the ability of the grantee to assess the impact of the Arrest Policies Program. For example, the grantee stated that victim interviewee selection was to be conducted through a quota process and that the sampling would vary by site; this approach would not allow the evaluators to generalize program results. Also, the evaluators said that they would study communities at different levels of “coordination” when comparison groups were not feasible, but they did not adequately explain (1) how the various levels of coordination would be measured, (2) the procedures used to select the communities compared, and (3) the benefits of using this method as a replacement for comparison groups. NIJ subsequently funded this evaluation, and it is still in progress.

A peer review committee for the Rural Domestic Violence and Child Victimization Enforcement Grant Program evaluation also expressed concerns about whether the design of the evaluation, as proposed, would demonstrate whether the program was working.
In its consensus review notes, the peer review committee indicated that the “ability to make generalizations about what works and does not work will be limited.” The peer review committee also warned of outside factors (e.g., unavailability of data, inaccessibility of domestic violence victims) that could imperil the evaluation efforts of the applicant. On the basis of the peer review committee's input, NIJ issued the following statement to the applicant: “As a national evaluation of a major programmatic effort we hope to have a research design and products on what is working, what is not working, and why. We are not sure that the proposed design will get us to that point.” We reviewed the grant applicant's response to NIJ's concern in its application addendum and found that the overall methodological design was still not discussed in sufficient detail or depth to show how the evaluation would determine whether the program was working. Although the Deputy Director of NIJ's Office of Research and Evaluation asserted that this initial application was only for process evaluation funding, our review of available documents showed that the applicant had provided substantial information about both the process and impact evaluation methodologies in the application and addendum. We believe that the methodological rigor of the addendum was not substantially improved over that of the original application. The Deputy Director told us that, given the “daunting challenge faced by the evaluator,” NIJ decided to award the grant as a cooperative agreement. Under this arrangement, NIJ was to have substantial involvement in helping the grantee conduct the program evaluation. The results of that evaluation have not yet been submitted; the evaluator's draft final report is expected no earlier than April 2002.
In contrast to the three VAWO impact evaluations, the Byrne impact evaluation employed methodological design and implementation procedures that met a high standard of methodological rigor, fulfilling each of the criteria indicated above. In part, this may reflect the fact that Byrne's CAR demonstration program, unlike the VAWO programs, was, according to the Assistant Attorney General, intended to test a research hypothesis, and the evaluation was designed accordingly. CAR provided participants with the opportunity to use a limited number of program services (e.g., family services, education services, after-school activities) that were theoretically related to the impact variables and the prevention and reduction of drug use and delinquency. As a result, the evaluation was not complicated by project heterogeneity. All five grantees participated in the evaluation. High-risk youths within those projects were randomly selected from targeted neighborhood schools, providing representative student samples. Additionally, CAR evaluators chose a matched comparison group of youths with similar life circumstances (e.g., living in distressed neighborhoods and exposed to similar school and family risk factors) and without access to the CAR Program. Finally, no significant data collection implementation problems were associated with the CAR Program. The data were collected at multiple points in time from youths (at baseline, at completion of the program, and at one-year follow-up) and their caregivers (at baseline and at completion of the program). Self-reported findings from youths were supplemented by the collection of more objective data from school, police, and court records on an annual basis, and rigorous test procedures were used to determine whether changes over time were statistically significant. Additionally, CAR's impact evaluation used control groups, a methodologically rigorous technique not used in the three VAWO evaluations.
To further eliminate the effects of external factors, youths in the targeted neighborhood schools were randomly assigned either to the group receiving the CAR Program or to a control group that did not participate in the program. Since the CAR Program group made significant gains over the same-school control group and the matched comparison group not participating in the program, there was good reason to conclude that the CAR Program was having a beneficial effect on the targeted audience. Appendix I provides summaries of the four evaluations.

Despite great interest in assessing the results of OJP's discretionary grant programs, it can be extremely difficult to design and execute evaluations that will provide definitive information. Our in-depth review of one Byrne and three VAWO impact evaluations that have received funding since fiscal year 1995 has shown that, in some cases, the flexibility that can be beneficial to grantees in tailoring programs to meet their communities' needs has added to the complexities of designing impact evaluations that will result in valid findings. Furthermore, the lack of site representativeness, the lack of appropriate comparison groups, and problems in data collection and analysis may compromise the reliability and validity of some of these evaluations. We recognize that not all evaluation issues that can compromise results are easily resolvable, including issues involving comparison groups and data collection. To the extent that methodological design and implementation issues can be overcome, however, the validity of the evaluation results will be enhanced.

NIJ spends millions of dollars annually to evaluate OJP grant programs. More up-front attention to the methodological rigor of these evaluations will increase the likelihood that they will produce meaningful results for policymakers. Unfortunately, the problematic evaluation grants that we reviewed are too far along to be radically changed.
However, two of the VAWO evaluation grants are still in the formative stage; more NIJ attention to their methodologies now can better ensure usable results.

We recommend that the Attorney General instruct the Director of NIJ to assess the two VAWO impact evaluations that are in the formative stage to address any potential methodological design and implementation problems and, on the basis of that assessment, initiate any needed interventions to help ensure that the evaluations produce definitive results. We further recommend that the Attorney General instruct the Director of NIJ to assess its evaluation process with the purpose of developing approaches to ensure that future impact evaluation studies are effectively designed and implemented so as to produce definitive results.

We provided a copy of a draft of this report to the Attorney General for review and comment. In a February 13, 2002, letter, the Assistant Attorney General commented on the draft. Her comments are summarized below and presented in their entirety in appendix III. The Assistant Attorney General agreed with the substance of our recommendations and said that NIJ has begun to take steps, or plans to do so, to address them. Although it is still too early to tell whether NIJ's actions will be effective in preventing or resolving the problems we identified, they appear to be steps in the right direction.

With regard to our first recommendation—that NIJ assess the two VAWO impact evaluations in the formative stage to address any potential design and implementation problems and initiate any needed intervention to help ensure definitive results—the Assistant Attorney General noted that NIJ has begun work to ensure that these projects will provide the most useful information possible.
She said that for the Crimes Against Women on Campus Program evaluation, NIJ is considering whether it will be possible to conduct an impact evaluation and, if so, how it can enhance its methodological rigor with the resources available. For the Civil Legal Assistance Program evaluation, the Assistant Attorney General said that NIJ is working with the grantee to review site selection procedures for the second phase of the study to enhance the representativeness of sites. The Assistant Attorney General was silent about any additional steps that NIJ would take during the later stages of the Civil Legal Assistance Program process evaluation to ensure the methodological rigor of the impact phase of the study. However, it seems likely that as the process evaluation phase of the study continues, NIJ may be able to take advantage of additional opportunities to address any potential design and implementation problems. With regard to our second recommendation—that NIJ assess its evaluation process to develop approaches to ensure that future evaluation studies are effectively designed and implemented to produce definitive results—the Assistant Attorney General stated that OJP has made program evaluation, including impact evaluations of federally funded programs, a high priority. The Assistant Attorney General said that NIJ has already launched an examination of NIJ’s evaluation process. She also noted that, as part of its reorganization, OJP plans to measurably strengthen NIJ’s capacity to manage impact evaluations with the goal of making them more useful for Congress and others. She noted as an example that OJP and NIJ are building measurement requirements into grants at the outset, requiring potential grantees to collect baseline data and track the follow-up data through the life of the grant. We have not examined OJP’s plans for reorganizing, nor do we have a basis for determining whether OJP’s plans regarding NIJ would strengthen NIJ’s capacity to manage evaluations. 
However, we believe that NIJ and its key stakeholders, such as Congress and the research community, would be well served if NIJ were to assess what additional actions it could take to strengthen its management of impact evaluations regardless of any reorganization plans.

In her letter, the Assistant Attorney General pointed out that the report accurately describes many of the challenges facing evaluators when conducting research in the complex environment of criminal justice programs and interventions. However, she stated that the report could have gone further in acknowledging these challenges. The Assistant Attorney General also stated that the report's contrast of the Byrne evaluation with the three VAWO evaluations obscures important programmatic differences that affect an evaluator's ability to achieve “GAO's conditions for methodological rigor.” She pointed out that the Byrne CAR Program was intended to test a research hypothesis and that the evaluation was designed accordingly; that is, the availability of baseline data was ensured, random assignment was stipulated as a precondition of participation, and outcome measures were determined in advance on the basis of the theories to be tested. She further stated that, in contrast, all of the VAWO programs were (1) highly flexible funding streams, in keeping with the intention of Congress, that resulted in substantial heterogeneity at the local level and (2) well into implementation before the evaluation started. The Assistant Attorney General went on to say that it is OJP's belief that evaluations under less than optimal conditions can provide valuable information about the likely impact of a program, even though the conditions supporting the methodological strategies and overall rigor of the CAR evaluation were not available.

We recognize that there are substantive differences in the intent, structure, and design of the various discretionary grant programs managed by OJP and its bureaus and offices.
And, as stated numerous times in our report, we acknowledge not only that impact evaluation can be an inherently difficult and challenging task but also that measuring the impact of these specific Byrne and VAWO programs can be arduous, given that they are operating in an ever-changing, complex environment. We agree that not all evaluation issues that can compromise results are easily resolvable, but we firmly believe that, with more up-front attention to design and implementation issues, there is a greater likelihood that NIJ evaluations will provide meaningful results for policymakers. Absent this up-front attention, questions arise as to whether NIJ is (1) positioned to provide the definitive results expected from an impact evaluation and (2) making sound investments given the millions of dollars spent on these evaluations.

The Assistant Attorney General also commented that although our report discussed “generally accepted social science standards,” it did not specify the document that articulates these standards or describe our elements of rigor. As a result, the Assistant Attorney General said, OJP had to infer that six elements had to be met to achieve what “GAO believes” is necessary to “have a rigorous impact evaluation.” Specifically, she said that she would infer that a rigorous impact evaluation would require (1) selection of homogeneous programs, (2) random or stratified site sampling procedures (or selection of all sites), (3) use of comparison groups, (4) high response rates, (5) available and relevant automated data systems that furnish complete and accurate data to evaluators in a timely manner, and (6) funding sufficient to accomplish all of the above. Furthermore, the Assistant Attorney General said that it is rare to encounter all of these conditions or to be in a position to engineer them simultaneously and that only when all of these conditions are present would the evaluation be rigorous.
She also stated that it is possible to glean useful, if not conclusive, evidence of the impact of a program from an evaluation that does not rise to the standard recommended by GAO because of the unavoidable absence of “one or more elements.” We agree that our report did not specify particular documents that articulate generally accepted social science standards. However, the standards that we applied are well defined in scientific literature. All assessments of the impact evaluations we reviewed were completed by social scientists with extensive experience in evaluation research. Throughout our report, we explain our rationale and the criteria we used in measuring the methodological rigor of NIJ’s impact evaluations. Furthermore, our report does not suggest that a particular standard or set of standards is necessary to achieve rigor, nor does it suggest that other types of evaluations, such as comprehensive process evaluations, are any less useful in providing information on how a program is operating. In this context, it is important to point out that the scope of our work covered impact evaluations of Byrne and VAWO discretionary grant programs— those designed to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. We differ with the Assistant Attorney General with respect to the six elements cited as necessary elements for conducting an impact evaluation. Contrary to the Assistant Attorney General’s assertion, our report did not state that a single homogeneous program is a necessary element for conducting a rigorous impact evaluation. Rather, we pointed out that heterogeneity or program variation is a challenge that adds to the complexity of designing an evaluation. 
In addition, contrary to her assertion, the report did not assert that random sampling or stratification was a necessary element for conducting a rigorous evaluation; instead, it stated that when random sampling is not feasible, other purposeful sampling methods can be used. With regard to comparison groups, the Assistant Attorney General's letter asserted that GAO standards required using groups that do not receive program benefits as a basis of comparison with those that do receive such benefits. In fact, we believe that the validity of evaluation results can be enhanced through establishing and tracking comparison groups; if other ways exist to effectively isolate the impacts of a program, comparison groups may not be needed. However, we saw no evidence that other methods were effectively used in the VAWO impact evaluations we assessed.

The Assistant Attorney General also suggested that we used a 75 percent or greater response rate for evaluation surveys as a standard of rigor. In fact, we did not—we simply pointed out that NIJ documents showed a 43 percent response rate on one of the STOP Grant Program evaluation surveys. This rate is below OMB's threshold response rate—the level below which OMB believes nonresponse bias and other statistical problems are particularly likely to affect survey results. Given OMB guidance, serious questions could be raised about program conclusions drawn from the results of a survey with a 43 percent response rate. In addition, the Assistant Attorney General suggested that, by GAO standards, she would have to require state, local, or tribal government officials to furnish complete and accurate data in a timely manner. In fact, our report only points out that NIJ reported that evaluators were hindered in carrying out evaluations because of the lack of automated data systems or because data were missing, lost, or unavailable—again, challenges to achieving methodologically rigorous evaluations that could produce meaningful and definitive results.
Finally, the Assistant Attorney General's letter commented that one of the elements needed to meet “all of GAO's conditions” of methodological rigor is sufficient funding. She stated that more rigorous impact evaluations cost more than those that yield less scientifically rigorous findings, and she said that OJP is examining the issue of how to finance effective impact evaluations. We did not assess whether funding is sufficient to conduct impact evaluations, but we recognize that designing effective and rigorous impact evaluations can be expensive—a condition that could affect the number of impact evaluations conducted. However, we continue to believe that with more up-front attention to the rigor of ongoing and future evaluations, NIJ can increase the likelihood of conducting impact evaluations that produce meaningful and definitive results.

In addition to the above comments, the Assistant Attorney General made a number of suggestions related to topics in this report. We have included the Assistant Attorney General's suggestions in the report, where appropriate. Also, the Assistant Attorney General provided other comments that did not result in changes to the report. See appendix III for a more detailed discussion of the Assistant Attorney General's comments.

We are sending copies of this report to the Chairman and the Ranking Minority Member of the Senate Judiciary Committee; to the Chairman and Ranking Minority Member of the House Judiciary Committee; to the Chairman and Ranking Minority Member of the Subcommittee on Crime, House Committee on the Judiciary; to the Chairman and the Ranking Minority Member of the House Committee on Education and the Workforce; to the Attorney General; to the OJP Assistant Attorney General; to the NIJ Director; to the BJA Director; to the VAWO Director; and to the Director, Office of Management and Budget. We will also make copies available to others on request.
If you or your staff have any questions about this report, please contact John F. Mortin or me at (202) 512-8777. Key contributors to this report are acknowledged in appendix IV.

National Evaluation of the Rural Domestic Violence and Child Victimization Grant Program (COSMOS Corporation)

The Violence Against Women Office’s (VAWO) Rural Domestic Violence Program, begun in fiscal year 1996, has funded 92 grantees through September 2001. The primary purpose of the program is to enhance the safety of victims of domestic abuse, dating violence, and child abuse. The program supports projects that implement, expand, and establish cooperative efforts between law enforcement officers, prosecutors, victim advocacy groups, and others in investigating and prosecuting incidents of domestic violence, dating violence, and child abuse; provide treatment, counseling, and assistance to victims; and work with the community to develop educational and prevention strategies directed toward these issues. The impact evaluation began in July 2000, with a final report expected no earlier than April 2002. Initially, 10 grantees were selected to participate in the impact evaluation; 9 remain in the evaluation. Two criteria were used in the selection of grant participants: the “feasibility” of earlier site-visited grantees to conduct an outcome evaluation and VAWO recommendations based on knowledge of grantee program activities. Logic models were developed, as part of the case study approach, to show the logical or plausible links between a grantee’s activities and the desired outcomes. The specified outcome data were to be collected from multiple sources, using a variety of methodologies during 2- to 3-day site visits (e.g., multiyear criminal justice, medical, and shelter statistics were to be collected from archival records; community stakeholders were to be interviewed; and grantee and victim service agency staff were to participate in focus groups). At the time of our review, this evaluation was funded at $719,949. The National Institute of Justice (NIJ) could not separate the cost of the impact evaluation from the cost of the process evaluation.

Evaluation findings: Too early to assess.

Assessment of evaluation: This evaluation has several limitations. (1) The choice of the 10 impact sites is skewed toward the technically developed evaluation sites and is not representative of either all Rural Domestic Violence Program grantees, particular types of projects, or delivery styles. (2) The lack of comparison groups will make it difficult to exclude the effect of external factors, such as victim safety and improved access to services, on perceived change. (3) Several so-called short-term outcome variables are in fact process variables (e.g., number of police officers who complete a training program on domestic violence, identification of barriers to meeting the needs of women and children). (4) It is not clear how interview and focus group participants are to be selected. (5) Statistical procedures to be used in the analyses have not been sufficiently identified. The NIJ peer review committee had concerns about whether the evaluation could demonstrate that the program was working. NIJ funded the application as a cooperative agreement because a substantial amount of agency involvement was deemed necessary to meet the objectives of the evaluation.
National Evaluation of the Arrest Policies Program (Institute for Law and Justice (ILJ))

The purpose of VAWO’s Arrest Policies Program is to encourage states, local governments, and Indian tribal governments to treat domestic violence as a serious violation of criminal law. The program received a 3-year authorization (fiscal years 1996 through 1998) at approximately $120 million to fund grantees under six purpose areas: implementing mandatory arrest or proarrest programs and policies in police departments, tracking domestic violence cases, centralizing and coordinating police domestic violence operations, coordinating computer tracking systems, strengthening legal advocacy services, and educating judges and others about how to handle domestic violence cases. Grantees have flexibility to work in several of these areas. At the time the NIJ evaluation grant was awarded, 130 program grantees had been funded; the program has since expanded to 190 program grantees. The impact evaluation began in August 1998, with a draft final report due in March 2002. Six grantees were chosen to participate in the impact evaluation. Each of the six sites was selected on the basis of program “stability,” not program representativeness. Within sites, both quantitative and qualitative data were to be collected and analyzed to enable better understanding of the impact of the Arrest Program on offender accountability and victim well-being. This process entailed reviewing data on the criminal justice system’s response to domestic violence; tracking a random sample of 100 offender cases, except in rural areas, to determine changes in offender accountability; conducting content analyses of police incident reports to assess change in police practices and documentation; and interviewing victims or survivors at each site to obtain their perceptions of the criminal justice system’s response to domestic violence and its impact on their well-being. ILJ had planned cross-site comparisons and the collection of extensive historical and interview data to test whether competing factors could be responsible for changes in arrest statistics. At the time of our review, this evaluation was funded at $1,130,574. NIJ could not separate the cost of the impact evaluation from the cost of the process evaluation.

Evaluation findings: Too early to assess.

Assessment of evaluation: This evaluation has several limitations: the absence of a representative sampling frame for site selection, the lack of comparison groups, the inability to conduct cross-site comparisons, and the lack of a sufficient number of victims in some sites to provide a perspective on the changes taking place in domestic violence criminal justice response patterns and victim well-being. In addition, there was difficulty collecting pre-grant baseline data, and the quality of the data was oftentimes inadequate, limiting the ability to measure change over time. Further, automated data systems were not available in all work areas, and data were missing, lost, or unavailable. An NIJ peer review committee also expressed some concerns about the grantee’s methodological design.

Impact Evaluation of STOP Grant Programs for Reducing Violence Against Indian Women (The University of Arizona)

VAWO’s STOP (Services, Training, Officers, and Prosecutors) Grant Programs for Reducing Violence Against Indian Women Discretionary Grant Program was established under Title IV of the Violent Crime Control and Law Enforcement Act of 1994. The program’s principal purpose is to reduce violent crimes against Indian women. The program, which began in fiscal year 1995 with 14 grantees, encourages tribal governments to develop and implement culture-specific strategies for responding to violent crimes against Indian women and providing appropriate services for those who are victims of domestic abuse, sexual assault, and stalking. In this effort, the program provided funding for the services and training, and required the joint coordination, of nongovernmental service providers, law enforcement officers, and prosecutors; hence the name, the STOP Grant Programs for Reducing Violence Against Indian Women. The University of Arizona evaluation began in October 1996 with an expected final report due in March 2002. The basic analytical framework of this impact evaluation involves the comparison of quantitative and qualitative pre-grant case study histories of participating tribal programs with changes taking place during the grant period. Various data collection methodologies have been adopted (at least in part, to be sensitive to the diverse Indian cultures): 30-minute telephone interviews, mail surveys, and face-to-face 2- to 3-day site visits. At the time of our review, this evaluation was funded at $468,552. NIJ could not separate the cost of the impact evaluation from the cost of the process evaluation.

Evaluation findings: Too early to assess.

Assessment of evaluation: Methodological design and implementation issues may cause difficulties in attributing program impact. A number of methodological aspects of the study remain unclear: the site selection process for “in-depth case study evaluations”; the methodological procedures for conducting the longitudinal evaluation; the measurement, validity, and reliability of the outcome variables; the procedures for assessing impact; and the statistical tests to be used for determining significant change. Comparison groups are not included in the methodological design. In addition, only 43 percent of the grantees returned the mail survey; only 25 percent could provide the required homicide and hospitalization rates; and only 26 victims of domestic violence and assault could be interviewed (generally too few to measure statistical change). Generalization of evaluation results to the entire STOP Grant Programs for Reducing Violence Against Indian Women will be difficult, given these problems.

Longitudinal Impact Evaluation of the Strategic Intervention for High Risk Youth (a.k.a. The Children at Risk Program) (The Urban Institute)

The Children at Risk (CAR) Program, a comprehensive drug and delinquency prevention initiative funded by the Bureau of Justice Assistance (BJA), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), the Center on Addiction and Substance Abuse, and four private foundations, was established to serve as an experimental demonstration program from 1992 to 1996 in five grantee cities. Low-income youths (11 to 13 years old) and their families, who lived in severely distressed neighborhoods at high risk for drugs and crime, were targeted for intervention. Eight core service components were identified: case management, family services, education services, mentoring, after-school and summer activities, monetary and nonmonetary incentives, community policing, and criminal justice and juvenile intervention (through supervision and community service opportunities).
The goals of the program were to reduce drug use among targeted families and improve the safety and overall quality of life in the community. The evaluation actually began in 1992, and the final report was submitted in May 1998. The study used both experimental and quasi-experimental evaluation designs. A total of 671 youths in target neighborhood schools were randomly assigned to either a treatment group (which received CAR services and the benefit of a safer neighborhood) or to a control group (which received only a safer neighborhood). Comparison groups (n=203 youths) were selected from similar high-risk neighborhoods by means of census tract data; comparison groups did not have access to the CAR Program. Interviews were conducted with youth participants at program entry (baseline), program completion (2 years later), and 1 year after program completion. A parent or caregiver was interviewed at program entry and completion. Records from schools, police, and courts were collected annually for each youth in the sample as a means of obtaining more objective data. The total evaluation funding was $1,034,732.

Evaluation findings: Youths participating in CAR were significantly less likely than youths in the control group to have used gateway and serious drugs, to have sold drugs, or to have committed violent crimes in the year after the program ended. CAR youths were more likely than youths in the control and comparison groups to report attending drug and alcohol abuse programs. CAR youths received more positive peer support than controls, associated less frequently with delinquent peers, and were pressured less often by peers to behave in antisocial ways. CAR households used more services than control group households, but the majority of CAR households did not indicate using most of the core services available.

Assessment of evaluation: CAR is a methodologically rigorous evaluation in both its design and implementation. The evaluation findings demonstrate the value of the program as a crime and drug prevention initiative.

NIJ has the primary role of disseminating the results of Byrne and VAWO discretionary grant program evaluations that it manages, according to NIJ, BJA, and VAWO officials, because NIJ is responsible for conducting these types of evaluations. NIJ is authorized to share the results of its research with federal, state, and local governments. NIJ also disseminates information on methodology designs. NIJ’s practices for disseminating program evaluation results are specified in its guidelines. According to the guidelines, once NIJ receives a final evaluation report from the evaluators and the results of peer reviews have been incorporated, NIJ grant managers are to carefully review the final product and, with their supervisor, recommend to the NIJ Director which program results to disseminate and the methods for dissemination. Before making a recommendation, grant managers and their supervisors are to consider various criteria, including policy implications, the nature of the findings and research methodology, the target audience and their needs, and the cost of various forms of dissemination. Upon receiving the recommendation, the Director of NIJ is to make final decisions about which program evaluation results to disseminate. NIJ’s Director of Planning and Management said that NIJ disseminates program evaluation results that are peer reviewed, are deemed successful, and add value to the field. Once the decision has been made to disseminate program evaluation results and methodologies to researchers and practitioners, NIJ can choose from a variety of publications, including its Research in Brief; NIJ Journal–At a Glance: Recent Research Findings; Research Review; NIJ Journal–Feature Article; and Research Report. In addition, NIJ provides research results on its Internet site and at conferences.
For example, using its Research in Brief publication, NIJ disseminated impact evaluation results on the Byrne Children at Risk (CAR) program to 7,995 practitioners and researchers, including state and local government and law enforcement officials; social welfare and juvenile justice professionals; and criminal justice researchers. In addition, using the same format, NIJ stated that it distributed the results of its process evaluation of the Byrne Comprehensive Communities Program (CCP) to 41,374 various constituents, including local and state criminal and juvenile justice agency administrators, mayors and city managers, leaders of crime prevention organizations, and criminal justice researchers. NIJ and other OJP offices and bureaus also disseminated evaluation results during NIJ’s annual conference on criminal justice research and evaluation. The July 2001 conference was attended by 847 public and nonpublic officials, including criminal justice researchers and evaluation specialists from academic institutions, associations, private organizations, and government agencies; federal, state, and local law enforcement, court, and corrections officials; and officials representing various social service, public housing, school, and community organizations. In addition to NIJ’s own dissemination activities, NIJ’s Director of Planning and Management said that NIJ allows and encourages its evaluation grantees to publish the results of NIJ-funded research via nongovernmental channels, such as in journals and through presentations at professional conferences. Although NIJ requires its grantees to provide advance notice if they are publishing their evaluation results, it does not have control over its grantees’ ability to publish these results. NIJ does, however, require a Justice disclaimer that the “findings and conclusions reported are those of the authors and do not necessarily reflect the official position or policies of the U.S.
Department of Justice.” For example, although NIJ has not yet disseminated the program evaluation results of the three ongoing VAWO impact evaluations that we reviewed, one of the evaluation grantees has already issued, on its own Internet site, 9 of 20 process evaluation reports on the Arrest Policies evaluation grant. The process evaluations were a component of the NIJ grantee’s impact evaluation of the Arrest Policies Program. Because the evaluations were not completed, NIJ required that the grantee’s publication of the process evaluations be identified as a draft report pending final NIJ review. As discussed earlier, NIJ publishes the results of its evaluations in several different publications. For example, NIJ used the Research in Brief format to disseminate evaluation results for two of the five Byrne discretionary grant programs that were evaluated during fiscal years 1995 through 2001: the Comprehensive Communities Program (CCP) and the Children at Risk Program (CAR). Both publications summarize information including each program’s evaluation results, methodologies used to conduct the evaluations, information about the implementation of the programs themselves, and services that the programs provided. CCP’s evaluation results were based on a process evaluation. Although a process evaluation does not assess the results of the program being evaluated, it can provide useful information that explains the extent to which a program is operating as intended. The NIJ Research in Brief on the Byrne CAR Discretionary Grant Program provides a summary of issues and findings regarding the impact evaluation. That summary included findings reported one year after the end of the program, in addition to a summary of the methodology used to conduct the evaluation, the outcomes, the lessons learned, and a major finding from the evaluation. Following are GAO’s comments on the Department of Justice’s February 13, 2002, letter. 1.
We have amended the text to further clarify that BJA administers the Byrne program, just as its counterpart, VAWO, administers its programs (see page 4). However, it is important to point out that regardless of the issues raised by OJP, the focus of our work was on the methodological rigor of the evaluations we reviewed, not the purpose and structure of the programs being evaluated. As discussed in our Scope and Methodology section, our work focused on program evaluation activities associated with Byrne and VAWO discretionary grant programs generally and the methodological rigor of impact evaluation studies associated with those programs in particular. To make our assessment, we relied on NIJ officials to identify which of the program evaluations of Byrne and VAWO grant programs were, in fact, impact evaluation studies. We recognize that there are substantial differences among myriad OJP programs that can make the design and implementation of impact evaluations arduous. But that does not change the fact that impact evaluations, regardless of differences in programs, can benefit from stronger up-front attention to better ensure that they provide meaningful and definitive results. 2. We disagree with OJP’s assessment of our report’s treatment of program variation. As discussed earlier, the scope of our review assessed impact evaluation activities associated with Byrne and VAWO discretionary grant programs, not the programs themselves. We examined whether the evaluations that NIJ staff designated as impact evaluations were designed and implemented with methodological rigor. In our report we observe that variations in projects funded through VAWO programs complicate the design and implementation of impact evaluations. According to the Assistant Attorney General, this variation in projects is consistent with the intent of the programs’ authorizing legislation. We recognize that the authorizing legislation provides VAWO flexibility in designing these programs.
In fact, we point out that although such flexibility may make sense from a program perspective, project variation makes it much more difficult to design and implement a definitive impact evaluation. This poses sizable methodological problems because an aggregate analysis, such as one that might be constructed for an impact evaluation, could mask the differences in effectiveness among individual projects and therefore not result in information about which configurations of projects work and which do not. 3. We have amended the Results in Brief to clarify that peer reviews evaluated proposals. However, it is important to note that while the peer review committees may have found the two VAWO grant applications to be superior, this does not necessarily imply that the impact evaluations resulting from these applications were well designed and implemented. As discussed in our report, the peer review panel for each of the evaluations expressed concerns about the proposals that were submitted, including issues related to site selection and the need for additional impact measures and control variables. Our review of the documents NIJ made available to us, including evaluators’ responses to peer review comments, led to questions about whether the evaluators’ proposed methodological designs were sufficient to allow the evaluation results to be generalized and to determine whether the program was working. 4. We have amended the Background section of the report to add this information (see page 6). 5. As discussed in OJP’s comments, we discussed external factors that could account for changes that the Rural Program evaluation observed in victims’ lives and the criminal justice system. We did so not to critique or endorse activities that the program was or was not funding, but to demonstrate that external factors may influence evaluation findings.
To the extent that such factors are external, the Rural Program evaluation methodology should account for their existence and attempt to establish controls to minimize their effect on results (see page 14). We were not intending to imply that alcohol is a cause for domestic violence, as suggested by the Assistant Attorney General, but we agree that it could be an exacerbating factor that contributes to violence against women. 6. As discussed earlier, we recognize that there are substantive differences in the intent, structure, and design of the various discretionary grant programs managed by OJP and its bureaus and offices. Also, as stated numerous times in our report, we acknowledge not only that impact evaluation can be an inherently difficult and challenging task but also that measuring the impact of these specific Byrne and VAWO programs can be arduous given that they are operating in an ever-changing, complex environment. We agree that not all evaluation issues that can compromise results are easily resolvable, but we firmly believe that with more up-front attention to design and implementation issues, there is a greater likelihood that NIJ impact evaluations will provide meaningful results for policymakers. Regarding the representativeness of sites, NIJ documents that were provided during our review indicated that sites for the Rural Program evaluation were selected on the basis of feasibility, as discussed in our report—specifically, whether the site would be among those participants equipped to conduct an evaluation. In its comments, OJP stated that the 6 sites selected for the impact evaluation were chosen to maximize geographical and purpose area diversity while focusing on sites with high program priority. OJP did not provide any additional information that would further indicate that the sites were selected on a representative basis.
OJP did, however, point out that the report does not address how immensely expensive the Arrest evaluation would have become if it had included all 130 sites. We did not address specific evaluation site costs because we do not believe that there is a requisite number of sites needed for any impact evaluation to be considered methodologically rigorous. Regarding OJP’s comment about the flexibility given to grantees in implementing VAWO grants, our report points out that project variation complicates evaluation design and implementation. Although flexibility may make sense from a program perspective, it makes it difficult to generalize about the impact of the entire program. 7. We used the drug court example to illustrate, based on our past work, how comparison groups can be used in evaluation to isolate and minimize external factors that could influence the study results. We did not, nor would we, suggest that any particular unit of analysis is appropriate for VAWO evaluations since the appropriate unit of analysis is dependent upon the specific circumstances of the evaluation. We were only indicating that since comparison groups were not used in the studies, the evaluators were not positioned to demonstrate that change took place as a result of the program. 8. We do not dispute that VAWO grant programs may provide valuable outputs over the short term. However, as we have stated previously, the focus of our review was on the methodological rigor of impact evaluations—those evaluations that are designed to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. Given the methodological issues we found, it is unclear whether NIJ will be able to discern long-term effects due to the program. 9.
As stated in our report, we acknowledge not only that impact evaluation can be an inherently difficult and challenging task, but that measuring the impact of Byrne and VAWO programs can be arduous given the fact that they are operating in an ever-changing, complex environment. We agree that not all evaluation issues that can compromise results are easily resolvable, but we firmly believe that, with more up-front attention to design and implementation issues, there is a greater likelihood that NIJ evaluations will provide meaningful results for policymakers. As we said before, absent this up-front attention, questions arise as to whether NIJ is (1) positioned to provide the definitive results expected from an impact evaluation and (2) making sound investments given the millions of dollars spent on these evaluations. If NIJ believes that the circumstances of a program are such that it cannot be evaluated successfully (in relation to impact), it should not proceed with an impact evaluation. 10. We have amended the footnote to state that from fiscal year 1995 through fiscal year 1999, this program was administered by VAWO. As of fiscal year 2000, responsibility for the program was shifted to OJP’s Corrections Program Office (see page 5). 11. In regard to the number of grants, we have amended the text to reflect that the information NIJ provided during our review is the number of grantees, not the number of grants (see pages 25 and 26). We have also amended our report to reflect some of the information provided in VAWO’s description of the Rural Domestic Violence Program to further capture the essence of the program (see page 25). 12. We disagree. We believe that separating the cost of the impact and process evaluations is more than a matter of bookkeeping.
Even though the work done during the process phase of an evaluation may have implications for the impact evaluation phase of an evaluation, it would seem that, given the complexity of impact evaluations, OJP and NIJ would want to have in place appropriate controls to provide reasonable assurance that the evaluations are being effectively and efficiently carried out at each phase of the evaluation. Tracking the cost of these evaluation components would also help reduce the risk that OJP’s, NIJ’s, and, ultimately, the taxpayer’s investment in these impact evaluations is not wasted. 13. As discussed earlier, we recognize that there are substantive differences in the intent, structure, and design of the various discretionary grant programs managed by OJP and its bureaus and offices, including those managed by VAWO. Our report focuses on the rigor of impact evaluations of grant programs administered by VAWO and not on the program’s implementing legislations. Although flexibility may make sense from a program perspective, it makes it difficult to develop a well-designed and methodologically rigorous evaluation that produces generalizable results about the impact of the entire program. 14. Our report does not suggest that other types of evaluations, such as comprehensive process evaluations, are any less useful in providing information about how well a program is operating. The scope of our review covered impact evaluations of Byrne and VAWO discretionary grant programs—those designed to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. In addition to the above, Wendy C. Simkalo, Jared A. Hermalin, Chan My J. Battcher, Judy K. Pagano, Grace A. Coleman, and Ann H. Finley made key contributions to this report.

Discretionary grants awarded under the Bureau of Justice Assistance's (BJA) Byrne Program help state and local governments make communities safe and improve criminal justice.
Discretionary grants awarded under BJA's Violence Against Women Office (VAWO) programs are aimed at improving criminal justice system responses to domestic violence, sexual assault, and stalking. The National Institute of Justice (NIJ) awarded $6 million for five Byrne Program and five VAWO discretionary grant program evaluations between 1995 and 2001. Of the 10 programs evaluated, all five VAWO evaluations were designed to be both process and impact evaluations of the VAWO programs. Only one of the five Byrne evaluations was designed as an impact evaluation; the other four were process evaluations. GAO's in-depth review of the four impact evaluations since fiscal year 1995 showed that only one of these—the evaluation of the Byrne Children at Risk Program—was methodologically sound. The other three evaluations, all of which examined VAWO programs, had methodological problems.
The United States is the largest consumer of crude oil and petroleum products. In 2007, the U.S. share of world oil consumption was approximately 24 percent. While DOE projects that U.S. demand for oil will continue to grow, domestic production has generally been in decline for decades, leading to greater reliance on imported oil. U.S. imports of oil have increased from 32 percent of domestic demand in 1985 to 58 percent in 2007. In managing the SPR, the Secretary of Energy is authorized by the Energy Policy and Conservation Act, as amended, to place in storage, transport, or exchange, (1) crude oil produced from federal lands; (2) crude oil which the United States is entitled to receive in kind as royalties from production on federal lands; and (3) petroleum products acquired by purchase, exchange, or otherwise. The act also states that the Secretary shall, to the greatest extent practicable, acquire petroleum products for the SPR in a manner that minimizes the cost of the SPR and the nation’s vulnerability to a severe energy supply interruption, among other things. Until it was repealed in 2000, the act provided the Secretary discretionary authority to require importers and refiners of petroleum products to store and maintain readily available inventories, and it directed the Secretary to establish and maintain regional petroleum reserves under certain circumstances. 42 U.S.C. § 6240(b). The SPR has sold or exchanged oil on several other occasions, including providing small quantities of oil to refiners to help them through short-term localized oil shortages. Oil markets have changed substantially in the 34 years since the establishment of the SPR. At the time of the Arab oil embargo, price controls in the United States prevented the prices of oil and petroleum products from increasing as much as they otherwise might have, contributing to a physical oil shortage that caused long lines at gasoline stations throughout the United States.
Now that the oil market is global, the price of oil is determined in the world market primarily on the basis of supply and demand. In the absence of price controls, scarcity is generally expressed in the form of higher prices, as purchasers are free to bid as high as they want to secure oil supply. In a global market, an oil supply disruption anywhere in the world raises prices everywhere. Releasing oil reserves during a disruption provides a global benefit by reducing oil prices in the world market. In response to various congressional directives, DOE has studied the issue of including refined petroleum products at various times since 1975. After the initial SPR plan was developed, the issue was reviewed again, in whole or in part, in 1977, 1982, 1989, and 1998. In each case except the 1998 report, DOE concluded that including refined petroleum products as part of the SPR was unnecessary and too expensive. The 1998 study dealt with establishing a home heating oil reserve; while it did not conclude whether such a reserve should be established, it did find that constructing one would have net negative benefits. The 2000 amendments to the Energy Policy and Conservation Act authorized the Secretary to establish a Northeast Home Heating Oil Reserve, which was created and filled that same year. Although this reserve is considered separate from the SPR, it is authorized to contain 2 million barrels of heating oil and currently holds nearly that amount. The Reserve is an emergency source of heating oil to address a severe energy supply interruption in the Northeast. According to DOE, the intent was to create a reserve large enough to allow commercial companies to compensate for interruptions in supply of heating oil during severe winter weather, but not so large as to dissuade suppliers from responding to increasing prices as a sign that more supply is needed. 
To date, the Northeast Home Heating Oil Reserve has not been used to address an emergency winter shortage situation. Some of the arguments for including refined petroleum products in the SPR are: (1) the United States’ increased reliance on foreign imports and resulting exposure to supply disruptions or unexpected increases in demand elsewhere in the world, (2) possible reduced refinery capacity during weather-related supply disruptions, (3) the time needed for petroleum product imports to reach all regions of the United States in case of an emergency, and (4) port capacity bottlenecks in the United States, which limit the amount of petroleum products that can be imported quickly during emergencies. Some of the arguments against including refined petroleum products in the SPR are: (1) the surplus of gasoline in Europe, (2) the high storage costs of refined products, (3) the use of ‘boutique’ fuels in the United States, and (4) policy alternatives that may diminish U.S. reliance on oil. First, in our December 2007 report, we found that while the United States was largely self-sufficient in gasoline in 1970, in fiscal year 2007, we imported over 10 percent of our annual consumption of gasoline and smaller percentages of jet fuel and some other products. We also found that, along with this increased reliance on imports, the United States is exposed to supply disruptions or unexpected increases in demand anywhere else in the world. Because the SPR contains only crude oil, if an unexpected supply disruption occurs in a supply center for the United States, the government’s emergency strategy would rely on sufficient volumes of crude oil in the SPR and a refinery sector able to turn out products at a pace necessary to meet consumer demands in a crisis. Any growth in demand in the United States would put increasing pressure on this policy, and for much of the past 25 years, demand for refined petroleum products in the United States and internationally has outpaced growth in refining capacity. 
Second, in our August 2006 report, we found that the ability of the SPR to reduce economic damage may be impaired if refineries are not able to operate at capacity or transport of oil to refineries is delayed. For example, petroleum product prices still increased dramatically following Hurricanes Katrina and Rita, in part because many refineries are located in the Gulf Coast region and power outages shut down pipelines that refineries depend upon to supply their crude oil and to transport their refined petroleum products to consumers. DOE reported that 21 refineries in affected states were either shut down or operating at reduced capacity in the aftermath of the hurricane. In total, nearly 30 percent of the refining capacity in the United States was shut down, disrupting supplies of gasoline and other products. Two pipelines that send petroleum products from the Gulf Coast to the East Coast and the Midwest were also shut down as a result of Hurricane Katrina. For example, Colonial Pipeline, which transports petroleum products to the Southeast and much of the East Coast, was not fully operational for a week after Hurricane Katrina. Consequently, average retail gasoline prices increased 45 cents per gallon between August 29 and September 5, short-term gasoline shortages occurred in some places, and the media reported gasoline prices greater than $5 per gallon in Georgia. The hurricane came on the heels of a period of high crude oil prices and a tight balance worldwide between petroleum demand and supply, and illustrated the volatility of gasoline prices given the vulnerability of the gasoline infrastructure to natural or other disruptions. Third, because some foreign suppliers are farther from the U.S. demand centers they serve than the relevant domestic supply center, the time it takes to get additional product to a demand center experiencing a supply shortfall may be longer than it would be if the United States had its own product reserves. 
For example, imports of gasoline to the West Coast may come from as far away as Asia or the Middle East, and the transport time and therefore the cost are greater. To the extent that imported gasoline or other petroleum products come from far away, the lengthening of the supply chain has implications for the ability to respond rapidly to domestic supply shortfalls. Specifically, if supplies to relieve a domestic regional supply shortfall must come from farther away, the price increases associated with such shortfalls may be greater and/or last longer. In this sense, the West Coast and the middle of the country are more vulnerable to price increases or volatility than is the Northeast, which can receive shipments of gasoline from Europe, often on voyages of less than a week. Fourth, the receipt of petroleum products may be delayed because port facilities are operating at or near capacity. For example, one-fourth of the ports in a U.S. Maritime Administration (MARAD) survey described their infrastructure impediments as “severe.” Officials from the interagency U.S. Committee on the Maritime Transportation System, which includes MARAD, the National Oceanic and Atmospheric Administration, and the U.S. Army Corps of Engineers, told us that U.S. ports and waterways are constrained in capacity and utilization, and they anticipate that marine supply infrastructure will become more constrained in the future. Officials at the Ports of Los Angeles, Long Beach, Oakland, Houston, Savannah, and Charleston reported congestion and emphasized in a 2005 report that they are experiencing higher than projected growth levels. In fact, one European product transporter we spoke with said that the European response to Hurricanes Rita and Katrina was hindered because East Coast ports in the United States could not handle the number of oil tankers carrying petroleum products from Europe, with some tankers waiting for as long as 2 weeks at port. 
First, a key impetus for global trade in petroleum products has been a structural surplus in production of gasoline and a deficit in production of diesel in Europe. This surplus of gasoline is largely the result of a systematic switch in European countries toward automobiles with diesel-powered engines, which are more fuel efficient than gasoline-powered engines. European regulators promoted diesel fuel use in Europe by taxing diesel at a lower rate, and European demand for diesel vehicles rose. The European refining and marketing sector responded to this change in demand by importing increasing amounts of diesel, and exporting a growing surplus of gasoline to the United States and elsewhere. The United States has purchased increasing amounts of gasoline, including gasoline blendstocks, from Europe in recent years. These imports have generally had a strong seasonal component, with higher levels of imports during the peak summer driving months and lower imports during the fall and winter. The major exception to this seasonality came in the months of October 2005 through January 2006, when imports surged in response to U.S. shortfalls resulting from Hurricanes Katrina and Rita in August and September 2005, respectively. Experts and company representatives told us they believe this structural imbalance within the European Union will continue for the foreseeable future, and perhaps widen, resulting in more exports of European gasoline and blending components to the United States. Second, in its prior reports on the subject, DOE found that refined petroleum product reserves are more costly than crude oil to store and must be periodically used and replaced to avoid deterioration of the products. Although DOE officials said some refined products can be stored in salt caverns just as the SPR crude oil is currently stored, these caverns are predominantly found on the Gulf Coast. 
In order to store refined product in other parts of the United States, storage tanks may need to be built, which is costlier than centralized salt cavern storage. According to DOE, stockpiling oil in salt caverns costs about $3.50 per barrel in capital costs. Storing oil in above-ground tanks, by comparison, can cost $15 to $18 per barrel. One of the maintenance costs of refined petroleum products that is not associated with crude oil storage is turnover, or replacement costs, because refined products deteriorate more quickly. Turnover of the product is required to ensure quality. For example, DOE found that when gasoline is stored in above-ground tanks, the turnover time is 18 to 24 months. Conversely, DOE found that crude oil could be stored for prolonged periods without losing quality. The more frequent the turnover, the higher the throughput and administrative costs. Third, while the language in the Energy Policy and Conservation Act addresses refined petroleum products as well as crude oil, DOE conducted a study in 1977 that found that geographically dispersed, small reserves of a variety of petroleum products would be more costly than a centralized crude oil reserve. For example, many states have adopted the use of special gasoline blends—or ‘boutique’ fuels—which could pose a challenge in incorporating refined products in the SPR. Unless requirements to use these fuels were waived during emergencies, as they were in the aftermath of Hurricanes Katrina and Rita, boutique fuels might need to be stored strategically at multiple regional, state, or local locations due to reduced product fungibility. Conversely, crude oil provides flexibility in responding to fluctuations in refined product market needs as regional fuel specifications and environmental requirements change over time. 
Furthermore, the switching of seasonal blends to meet environmental requirements, together with product degradation, would require inventory turnover that crude oil storage does not require to the same degree. Fourth, there are several policy choices that might diminish the growth in U.S. demand for oil. First, research and investment in alternative fuels might reduce the growth of U.S. oil demand. Vehicles that use alternative fuels, including ethanol, biodiesel, liquefied coal, and fuels made from natural gas, are now generally more expensive or less convenient to own than conventional vehicles, because of higher vehicle and fuel costs and a lack of refueling infrastructure. Alternative-fuel vehicles could become more viable in the marketplace if their costs and fuel delivery infrastructure become more comparable to those of vehicles fueled by petroleum products. Second, greater use of advanced fuel-efficient vehicles, such as hybrid electric and advanced diesel cars and trucks, could reduce U.S. oil demand. The Energy Policy Act of 2005, as amended, directs the Secretary of Energy to establish a program that includes grants to automobile manufacturers to encourage domestic production of these vehicles. Third, improving the Corporate Average Fuel Economy (CAFE) standards could curb demand for petroleum fuels. After these standards were established in 1975, the average fuel economy of new light-duty vehicles improved from 13.1 miles per gallon in 1975 to a peak of 22.1 miles per gallon in 1987. More recently, the fuel economy of new vehicles in the United States has stagnated at approximately 21 miles per gallon. However, CAFE standards have recently been raised to require auto manufacturers to achieve a combined fuel economy average of 35 miles per gallon for both passenger and non-passenger vehicles beginning in model year 2020. Any future increases could further decrease U.S. oil demand. 
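The fuel-economy figures above imply a simple calculation: fuel use scales with the reciprocal of miles per gallon, so moving from roughly 21 mpg to the 35 mpg standard cuts per-vehicle fuel consumption by 40 percent. A minimal sketch, assuming a hypothetical 12,000 miles driven per vehicle per year (the mileage figure is our illustrative assumption, not from the testimony):

```python
# Illustrative only: the 21 and 35 mpg figures come from the testimony above;
# the 12,000 miles-per-year driving assumption is hypothetical.

def annual_gallons(miles_per_year: float, mpg: float) -> float:
    """Gallons of fuel one vehicle consumes per year at a given fuel economy."""
    return miles_per_year / mpg

current = annual_gallons(12_000, 21)  # ~571 gallons/year at today's ~21 mpg
future = annual_gallons(12_000, 35)   # ~343 gallons/year at the 35 mpg standard
saved = current - future              # ~229 gallons/year per vehicle, a 40% cut
```

The 40 percent reduction is independent of miles driven, since it equals 1 minus the ratio of the two fuel-economy values (1 − 21/35).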
The following three lessons learned from the management of the existing crude oil SPR highlight some of the issues that may need to be considered in acquiring refined petroleum products. Select a cost-effective mix of products. To fill the SPR in a more cost-effective manner, we recommended in August 2006 that DOE include in the SPR at least 10 percent heavy crude oils, which are generally cheaper to acquire than the lighter oils that comprise the SPR’s volume. Including heavier oil in the SPR could significantly reduce fill costs because heavier oil is generally less expensive than lighter grades. For example, if DOE included 10 percent heavy oil in the SPR as it expands to 1 billion barrels, that would require DOE to add 100 million barrels of heavy oil, or about one-third of the total new fill. From 2003 through 2007, Maya—a common heavy crude oil—traded for about $12 less per barrel on average than West Texas Intermediate—a common light crude oil. If this price difference were to persist over the duration of the new fill period, DOE would save about $1.2 billion in nominal terms by filling the SPR with 100 million barrels of heavy oil. Similarly, refined petroleum products included as part of the SPR may comprise a number of different types of products (e.g., gasoline, diesel, and jet fuel) and possibly different blends of products (e.g., different grades and mixtures of gasoline); DOE will need to determine the most cost-effective mix of products in light of existing legal and regulatory requirements to use specific blends of fuels. Consider using a dollar-cost-averaging acquisition approach. Also in our August 2006 report, we recommended that DOE consider filling the SPR by acquiring a steady dollar value of oil over time, rather than a steady volume as has occurred in recent years. 
This “dollar-cost-averaging” approach would allow DOE to take advantage of fluctuations in oil prices and ensure that more oil would be acquired when prices are low and less when prices are high. In August 2006, we reported that if DOE had used this approach from October 2001 through August 2005, it could have saved approximately $590 million in fill costs. We also ran simulations to estimate potential future cost savings from using a dollar-cost-averaging approach over 5 years and found that DOE could save money regardless of the price of oil as long as there is price volatility, and that the savings would be generally greater if oil prices were more volatile. We would expect a dollar-cost-averaging acquisition method to also provide positive benefits when acquiring refined petroleum products. Maximize cost-effective storage options. According to DOE, salt formations offer the lowest cost, most environmentally secure way to store crude oil for long periods of time. Stockpiling oil in artificially created caverns, deep within rock-hard salt, has historically cost about $3.50 per barrel in capital costs. In comparison, storing oil in above-ground tanks can cost $15 to $18 per barrel. Similarly, for those refined petroleum products that can be stored below ground, salt formations may offer a cost-effective storage option. However, possible storage options would need to be evaluated hand-in-hand with the need to (1) turn over the refined stocks periodically because their stability deteriorates over time, and (2) transport the refined petroleum products quickly to major population centers where the products will be used. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact Frank Rusco at (202) 512-3841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jeffery D. Malcolm, Assistant Director, and Holly Sasso. Also contributing to this testimony were Josey Ballenger, Philip Farah, Quindi Franco, Michelle Munn, Benjamin Shouse, Karla Springer, and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The possibility of storing refined petroleum products as part of the Strategic Petroleum Reserve (SPR) has been contemplated since the SPR was created in 1975. The SPR, which currently holds about 700 million barrels of crude oil, was created to help insulate the U.S. economy from oil supply disruptions. However, the SPR does not contain refined products such as gasoline, diesel fuel, or jet fuel. The Energy Policy Act of 2005 directed the Department of Energy (DOE) to increase the SPR's capacity from 727 million barrels to 1 billion barrels, which it plans to do by 2018. With the possibility of including refined products as part of the expansion of the SPR, this testimony discusses (1) some of the arguments for and against including refined products in the SPR and (2) lessons learned from the management of the existing crude oil SPR that may be applicable to refined products. To address these issues, GAO relied on its 2006 report on the SPR (GAO-06-872), 2007 report on the globalization of petroleum products (GAO-08-14), and two 2008 testimonies on the cost-effectiveness of filling the SPR (GAO-08-512T and GAO-08-726T). 
GAO also reviewed prior DOE and International Energy Agency studies on refined product reserves. Since the SPR, the largest emergency crude oil reserve in the world, was created in 1975, a number of arguments have been made for and against including refined petroleum products. Some of the arguments for including refined products in the SPR are: (1) the United States' increased reliance on imports and resulting exposure to supply disruptions or unexpected increases in demand elsewhere in the world, (2) possible reduced refinery capacity during weather-related supply disruptions, (3) the time needed for petroleum product imports to reach all regions of the United States in case of an emergency, and (4) port capacity bottlenecks in the United States, which limit the amount of petroleum products that can be imported quickly during emergencies. For example, the damage caused by Hurricane Katrina demonstrated that the concentration of refineries on the Gulf Coast and the resulting damage to pipelines left the United States relying on imports of refined product from Europe. Consequently, regions experienced a shortage of gasoline and prices rose. Conversely, some of the arguments against including refined products in the SPR are: (1) the surplus of refined products in Europe, (2) the high storage costs of refined products, (3) the use of a variety of different types of blends of refined products--"boutique" fuels--in the United States, and (4) policy alternatives that may diminish reliance on oil. For example, Europe has a surplus of gasoline products because of a shift to diesel engines, which experts say will continue for the foreseeable future. Europe's surplus of gasoline is available to the United States in emergencies and provided deliveries following Hurricanes Katrina and Rita in 2005. The following three lessons learned from the management of the existing SPR may have some applicability in dealing with refined products. (1) Select a cost-effective mix of products. 
In 2006, GAO recommended that DOE include at least 10 percent heavy crude oil in the SPR. If DOE bought 100 million barrels of heavy crude oil during its expansion of the SPR it could save over $1 billion in nominal terms, assuming a price differential of $12 between the price of light and heavy crude, the average differential from 2003 through 2007. Similarly, if directed to include refined products as part of the SPR, DOE will need to determine the most cost-effective mix of products. (2) Consider using a dollar-cost-averaging acquisition approach. Also in 2006, GAO recommended that DOE consider acquiring a steady dollar value--rather than a steady volume--of oil over time when filling the SPR. This would allow DOE to acquire more oil when prices are low and less when prices are high. GAO expects that a dollar-cost-averaging acquisition method would also provide benefits when acquiring refined products. (3) Maximize cost-effective storage options. According to DOE, below ground salt formations offer the lowest cost approach for storing crude oil for long periods of time--$3.50 per barrel in capital cost versus $15 to $18 per barrel for above ground storage tanks. Similarly, DOE will need to explore the most cost-effective storage options for refined products. |
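The dollar-cost-averaging lesson summarized above can be illustrated with a small simulation: spending a fixed dollar amount each period buys more barrels when prices are low and fewer when they are high, so the resulting average cost per barrel (the harmonic mean of prices) never exceeds the fixed-volume average (the arithmetic mean). The prices and purchase sizes below are hypothetical, not actual SPR fill data:

```python
# Sketch of the recommended "dollar-cost-averaging" fill approach versus a
# steady-volume fill. All prices and quantities are hypothetical.

def fixed_volume_avg_cost(prices, barrels_per_period):
    """Average $/barrel when buying the same number of barrels every period."""
    total_cost = sum(p * barrels_per_period for p in prices)
    return total_cost / (barrels_per_period * len(prices))

def fixed_dollar_avg_cost(prices, dollars_per_period):
    """Average $/barrel when spending the same dollar amount every period."""
    total_barrels = sum(dollars_per_period / p for p in prices)
    return dollars_per_period * len(prices) / total_barrels

prices = [60.0, 90.0, 75.0, 120.0, 80.0]  # hypothetical $/barrel over 5 periods

steady_volume = fixed_volume_avg_cost(prices, 1_000_000)   # arithmetic mean: $85.00
steady_dollar = fixed_dollar_avg_cost(prices, 80_000_000)  # harmonic mean: ~$80.72
# With any price volatility, steady_dollar < steady_volume; the gap widens as
# volatility increases, consistent with the simulation results GAO describes.
```

Note that the fixed-dollar average cost is independent of the dollar amount spent per period; only the sequence of prices determines it.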
Education is one of VA’s four core missions, and in fiscal year 2002, VA paid approximately $383 million to residents training at about 130 VA health care facilities. For the 2002/2003 academic year, VA supported almost 8,800 residency slots, about 9 percent of all residency training positions in the United States. Moreover, because several residents typically rotate through each slot, VA estimates that it provides graduate medical training to more than 28,000 residents each year, or as many as one-third of the nation’s residents. The number of residency slots VA allocates to individual medical centers involved in GME ranges from less than 1 to more than 200. Although about half of VA’s residency positions are in primary care, VA supports GME in 45 recognized medical specialties and subspecialties; individual medical centers provide training in anywhere from 1 to more than 30 specialties. VA headquarters officials have ultimate oversight responsibility for the activities of residents within VA medical centers, and several different headquarters offices have monitoring functions that relate to resident supervision. VA’s Office of Academic Affiliations (OAA) has responsibility for developing and overseeing policies for resident supervision, monitoring VA’s GME activities, and allocating residency slots. Under the Patient Safety Program VA implemented in January 2002, VA’s National Center for Patient Safety collects and analyzes information from VA medical centers about patient risk events and their causes. Medical centers are required to report all patient safety events—including adverse events and close calls—to the National Center for Patient Safety. In addition, medical centers are required to determine the root causes of patient safety incidents with severe or potentially severe outcomes and develop plans to prevent them in the future. 
The success of this program will depend on the extent to which VA is able to establish a culture in which employees feel safe to make these reports. VA’s Office of Patient Care Services establishes and monitors health care programs. For example, its National Surgical Quality Improvement Program (NSQIP) examines postoperative outcomes. Additional oversight of resident supervision is provided by VA’s Office of Inspector General. Because VA’s health care system is decentralized, responsibilities for implementing VA’s national policy for resident supervision are assigned to networks and medical centers. Network officials are to provide medical centers with the resources necessary to ensure that residents are supervised in accordance with VA’s national policy and are to evaluate the strengths and weaknesses of medical centers’ GME activities. Medical center directors are responsible for establishing facility policies for resident supervision that fulfill the requirements of VA’s national policy, and medical center chiefs of staff are responsible for the educational and patient care activities of all residents within the facility. In addition, a physician in each medical specialty is responsible for ensuring that the residents training in that specialty are supervised as required. VA medical centers typically also share responsibility for the oversight of residents with affiliated institutions that sponsor GME programs. VA participates in more than 1,900 distinct GME programs, 29 of which are sponsored by VA medical centers. The rest are sponsored by about 120 medical schools and teaching hospitals with which VA medical centers are affiliated. The majority of VA medical centers work with one GME sponsoring institution, but individual VA medical centers participate in the GME programs of up to four different sponsors. When a VA medical center serves as a training site for residents, but is not the sponsoring institution, it is known as a participating institution. 
GME accrediting bodies hold sponsoring institutions responsible for all aspects of their educational programs, including aspects conducted within participating institutions. GME accrediting bodies do not separately accredit participating institutions and do not evaluate the extent to which supervision that occurs within participating institutions, such as VA medical centers, meets requirements set by those participating institutions. VA requires accreditation of each GME program through which its residents obtain training. More than 98 percent of VA’s residency slots are filled by residents in GME programs that are subject to accreditation review by the Accreditation Council for Graduate Medical Education (ACGME); the remaining slots are filled by residents in osteopathic programs that are subject to accreditation review by the American Osteopathic Association. GME accreditation status indicates an overall assessment of the quality of an educational program in a particular medical specialty. Accrediting bodies evaluate several aspects of each GME program, including provisions for the supervision and safety of residents, the adequacy of institutional resources, educational curriculum, and the extent to which the program meets that specialty’s specific training requirements. A program can be fully accredited, or a program can be granted accreditation with notification of problems that must be corrected. Accreditation can also be withdrawn. A program’s accreditation status is made public, but to safeguard confidential information, specific problems with the program or its training sites are described in letters sent only to the sponsoring institution. Accrediting bodies have not been sending these letters to participating institutions. Accrediting bodies state that the quality of patient care must remain the highest priority of GME programs. 
Health care organizations that provide GME must ensure that qualified staff physicians supervise residents and that the same standards for the quality and safety of patient care apply when residents are involved in health care delivery as when they are not. GME accrediting bodies require that supervising physicians adjust the level of supervision to meet the educational goal of increasing residents’ competence by giving them appropriate opportunities to assume greater independence in their patient-care activities, that is, allowing residents to assume graduated responsibilities. The supervising physician relies on his or her professional judgment and knowledge of the patient’s medical condition and the resident’s level of mastery to determine the degree of independence of the resident’s patient-care responsibilities. VA’s national policy on resident supervision is detailed in a handbook that establishes specific requirements for (1) the involvement of supervising physicians in the care provided by residents who diagnose, treat, or discharge patients and (2) the documentation of that involvement. These specific requirements apply to four domains of residents’ clinical activity—inpatient care, outpatient care, diagnostic and therapeutic procedures, and consultations—and provide guidelines for putting into practice GME accrediting bodies’ principles of resident supervision and graduated levels of responsibility. (See table 1 for an example of VA’s requirements for supervision in each of the four domains.) Experts on GME told us that the requirements in VA’s handbook are reasonable and appropriately consider the role of supervision in ensuring the quality of patient care and of resident education. Some of these experts described it as a best practice model. VA does not have adequate procedures to determine whether residents at VA medical centers are supervised in accordance with its national requirements. 
For example, VA does not check whether each medical center involved in GME has adopted policies that are consistent with VA’s requirements for resident supervision. To learn what medical centers and networks do to monitor whether supervision is consistent with VA’s national requirements, VA requires that medical centers and networks submit annual reports on residency training. Medical centers’ reports filed for the 2000/2001 academic year indicate that most medical centers review some documentation of resident supervision, but few conduct comprehensive reviews. To obtain more complete information about the supervision residents receive, VA is planning to use external peer review to assess adherence to its requirements for documenting resident supervision. These plans have not been finalized. For example, as of May 2003, VA had not decided whether reviewers would examine records of VA’s new outpatients. VA does not know whether all its medical centers have adopted policies that are consistent with the specific requirements in its resident supervision handbook for the supervision of residents’ diagnosis, treatment, and discharge of patients. The director of each medical center involved in GME is to establish facility policies for resident supervision that fulfill the requirements in VA’s handbook, but VA requires a review of only one of those requirements—the medical centers’ minimal acceptable level of supervision for diagnostic and therapeutic procedures. Specifically, in situations in which the supervising physician is not in the operating or procedural suite, VA requires that the supervisor must, at a minimum, be immediately available in the facility or campus to provide direct supervision of the procedure if necessary. Network GME managers are supposed to review and approve this requirement; they are not required to report the results of their reviews to OAA. 
There is no separate OAA review of any of the requirements in medical centers’ supervision policies. We found that not all networks have completed the one required review and that medical centers’ policies are not always consistent with VA’s national policy. Of the 11 network GME managers we interviewed, 7 told us that they had completed this required review of the minimal requirements for supervision of procedures in medical center policies, but 4 told us that they had not. At a medical center in one of the four networks that had not conducted this review, we found a requirement for supervision of diagnostic and therapeutic procedures that was less stringent than the one in VA’s handbook. The written policy at this medical center stated that the supervising physician can be immediately available by telephone rather than requiring him or her to be immediately available in the facility or on campus. One network GME manager who did review this requirement for supervision of diagnostic and therapeutic procedures told us that in 2002, he identified three medical centers that had written requirements for supervision of these procedures that were less stringent than the requirement in VA’s handbook and that he instructed each of these facilities to change its policy to be consistent with VA’s national requirement.

To learn what medical centers and networks do to monitor whether supervision is consistent with VA’s resident supervision handbook, VA has required annual reports on residency training programs beginning with the 1999/2000 academic year. Medical center managers are to provide narrative answers to specific open-ended questions about their monitoring processes as well as about the problems they identified and actions they took to address them for each of three areas of oversight. (See table 2.) These medical center reports are channeled through VA’s networks to OAA.
Network officials are to review them and summarize the strengths and weaknesses of the medical centers’ GME programs in network-level annual reports, which are also submitted to OAA. These annual reports can provide managers with limited, but useful, information about the extent and quality of monitoring performed by medical centers, including whether medical centers monitor documentation or some other indication of supervision. Some medical centers and networks provided little detail in response to the annual reports’ open-ended questions. For example, not all medical centers described which specific aspects of resident supervision they monitored. OAA used open-ended questions in part to accommodate differences among medical centers in the number and type of residents they train. VA officials have used information from annual reports to monitor medical center oversight of resident supervision. For example, one network GME manager followed up on a problem identified through a medical center annual report by requiring the medical center to submit an action plan for improving supervision of ophthalmology residents by the beginning of the 2002/2003 academic year. An OAA official told us that analysis of these annual reports not only helped identify areas of vulnerability with residency programs, but also pointed to possible best practices. VA does not require its medical centers or networks to conduct systematic reviews of the documentation of resident supervision, and medical centers differ in the extent to which they monitor adherence to VA’s requirements for supervision. More than three-fourths of medical centers’ annual reports included a description of an independent review of the documentation of supervision of at least one aspect of care provided by residents, but most medical centers did not describe reviews of all four domains of residents’ health care activities. 
For each of three domains—inpatient care, outpatient care, and diagnostic and therapeutic procedures—over half the medical centers described a process for an independent review of at least one element of the documentation of resident supervision, that is, a review by someone other than a physician with related supervisory responsibilities (see table 3). For example, the quality management office at one medical center reviews medical records each month to determine whether documentation indicates that inpatients were seen by supervising physicians within 24 hours of admission. As shown in table 3, however, few medical centers described such a process for review of supervisory documentation when residents provide consultations to patients’ primary physicians. In addition, medical centers’ annual reports did not always include clear, detailed descriptions of the documentation requirements they monitor. Few specifically mentioned monitoring particular VA-wide requirements, such as the requirement for documentation of supervisory involvement at the time of each new outpatient’s first visit. In some instances, medical centers described a less systematic review process or one that was used for only some services provided by residents. For diagnostic and therapeutic procedures, for example, some medical centers described processes for reviewing only selected procedures, such as endoscopies or major surgeries. About half of the 91 medical centers that reported having an independent review process indicated they found deficiencies with the documentation of resident supervision, and all but one discussed actions they took to correct these problems. For example, officials from one medical center told us that they implemented a program to discipline individual physicians who consistently do not meet the medical center’s requirements for documenting supervision.
The acting chief of staff there told us that during the 2001/2002 academic year, three physicians had each been suspended without pay for 1 day for not consistently meeting documentation requirements and that there had been significant improvement in the documentation of resident supervision since this disciplinary program went into effect. This medical center has also developed a strategy for linking contract physicians’ pay to their provision and documentation of supervision. Documentation reviews have proven useful in identifying inadequate supervision. We identified three medical centers that described in their annual reports finding evidence of inadequate resident supervision through their documentation reviews. In their annual reports, two of these three medical centers stated that there were no adverse patient events involving resident supervision. The third did not state whether there had been any adverse patient outcomes. In the first instance, the medical center reported that its review of documentation indicated that some staff physicians provided a “low level” of supervision to residents in the inpatient surgical setting. Medical center officials responded by meeting with those physicians and conducting a follow-up review to monitor the level of supervision. In the second instance, the medical center reported that its supervision of residents was generally satisfactory, but that it had found through its documentation review one episode in which the attending surgeon had left the city during a procedure that he was supposed to be supervising. This medical center reported that the surgeon was formally reprimanded. In the third instance, a medical center reported that through its documentation review, it identified two specialties— urology and plastic surgery—for which it wanted to increase the number of procedures performed with the staff physician physically present and directly involved in the surgery. 
The medical center reported that its management was working with the surgery service chief to achieve this goal. We also identified a few medical centers that described independent processes for monitoring resident supervision that went beyond reviewing documentation. One medical center, for example, reported that staff in its intensive care unit are required to report to the nurse manager any situation they observe in which the supervision of a resident was inappropriate. In addition to monitoring processes established by medical centers, five of VA’s networks indicated in their 2000/2001 annual reports that they had a networkwide process for assessing adherence to one or more VA requirements for documentation of resident supervision. For example, two networks stated that they monitor the documentation of supervising physicians’ involvement in the care of inpatients within 24 hours of admission and another network assesses documentation of the supervision of high-risk procedures. Two other networks reported they are developing networkwide monitoring processes. To obtain more complete information about the extent to which its requirements for supervision are being followed, VA has begun to test its plans to monitor adherence through external peer review of the documentation of supervision. External peer reviewers would examine a sample of medical records from each medical center involved in GME to determine whether they include required documentation of supervision. Although documentation does not provide full information about the extent or quality of supervision, it can provide VA oversight officials with important information about whether supervisors were involved in patient care. 
We compared the instructions that external reviewers would follow with the requirements for supervision in VA’s handbook and found that the instructions would allow reviewers to assess adherence to most of VA’s key documentation requirements in the four domains of residents’ health care activities. For example, if a resident participated in the care of an inpatient or an outpatient during the current academic year, the external reviewer is to determine whether documentation of supervision in the patient’s medical record met the requirements in VA’s national handbook. Reviewers are also to assess documentation of the supervision of residents who performed diagnostic or therapeutic procedures or provided consultations to other physicians. Results from each medical center are to be provided to that medical center, as well as to headquarters managers. External peer review of documentation of supervision in medical records will be facilitated by features of VA’s computerized patient record system. For example, the system automatically records the date and time of notes; it also has the capacity to require that notes written by a resident be co-signed by the supervising physician, in which case the note is not considered complete until the required co-signature has been entered. In addition, supervising physicians with whom we spoke noted that immediate and easy access to legible information facilitates supervisors’ review of residents’ activities. VA is in the early stages of testing its procedures for external peer review of the documentation of resident supervision, and a VA official told us that this effort is a high priority. A pilot test of portions of the inpatient assessment methodology was conducted from October 2001 through June 2002 on a sample of almost 10,000 medical records. That pilot test indicated that the central database used to select the sample of medical records does not include information about which patients were seen by residents. 
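The sampling obstacle the pilot test uncovered can be illustrated with a short sketch. This is not VA code, and the field names are invented: the point is that without a flag in the central database marking which patients were seen by residents, there is no pool of eligible records from which reviewers can draw a sample.

```python
# Hypothetical sketch of the record-selection step in external peer review.
# Assumes each record is a dict with an invented "resident_involved" flag --
# the very field the pilot test found missing from the central database.
import random

def select_review_sample(records, sample_size, seed=0):
    """Pick a random sample of records in which a resident provided care."""
    eligible = [r for r in records if r.get("resident_involved")]
    rng = random.Random(seed)
    # If no record carries the flag (the situation the pilot test uncovered),
    # there is nothing appropriate to sample from.
    if not eligible:
        return []
    return rng.sample(eligible, min(sample_size, len(eligible)))
```

Under this sketch, a database that never records resident involvement yields an empty pool, which is why VA had to revise its software before external review could proceed.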
As a result, reviewers were unable to select an appropriate sample of medical records. Until this problem is resolved, VA cannot implement its plans for external peer review of resident supervision. OAA has worked with other headquarters offices to revise VA’s information technology software to ensure that this database contains information about whether patients’ physicians were residents. VA expects to implement this revision to its software by July 2003. The pilot test did not indicate any other obstacles to implementing the portion of the plan for reviewing documentation of resident supervision in inpatient settings. Pilot tests of methods for assessing documentation of outpatient care, diagnostic and therapeutic procedures, and consultations will not begin until patients seen by residents can be clearly identified through the central database. One unresolved issue that will affect the usefulness of the external review of supervision documentation in the outpatient setting involves selection of the sample of medical records. The two options under consideration are relying on the main outpatient sample used for VA’s other external peer reviews or developing a sample specifically for review of the documentation of supervision. The main outpatient sample in any given year includes only patients who have received primary health care from VA in the past and excludes most new patients who began obtaining health care through VA within the preceding year—a group that has greatly expanded in recent years. Without a sample of records from new patients, it will not be possible to assess adherence to VA’s requirement for supervisory involvement during a veteran’s first outpatient visit. An OAA official told us that developing an additional sample of outpatient records for review of documentation of supervision, distinct from the main outpatient sample used for other purposes, would add to the expense of the review. 
As of May 2003, VA had not made a decision about which sample to use. VA is making efforts to obtain consistent access to information provided by accrediting bodies and residents about the quality of resident supervision in VA medical centers. VA has taken steps to gain direct access to the letters accrediting bodies send to sponsoring institutions to describe concerns about GME programs. VA headquarters also developed a survey to obtain feedback from residents, but cannot send it to a random sample of residents because VA does not have a complete list of its residents. VA is improving its ability to obtain that information. According to their annual reports for the 2000/2001 academic year, most VA medical centers that provide GME have some procedure for obtaining feedback from residents. VA does not currently have direct access to accreditation letters that contain reviews of the GME programs sponsored by VA medical centers’ affiliates. These letters document concerns about residents’ education or clinical experience that the GME program must address to retain accreditation. Timely access to the information in these letters can allow medical centers to take corrective actions. Until early 2000, ACGME sent copies of its accreditation letters to OAA, and OAA made VA support for residency slots contingent on VA medical centers’ taking action to correct identified problems. In 2000, however, ACGME adopted new policies to safeguard confidential accreditation information. As a result, ACGME stopped sending the letters to VA, instead sending these letters only to the institution that sponsors the GME program. Without direct access to ACGME accreditation letters, VA medical centers are dependent on sponsoring institutions to inform them of concerns about the GME programs in which VA participates, and we learned of one instance in which a sponsoring institution did not do so when ACGME notified it of problems. 
Officials from a medical center told us that the sponsoring institution of a thoracic surgery program did not tell them that ACGME had previously identified multiple problems with the program until ACGME decided, in September 2002, to withdraw the program’s accreditation. ACGME did not cite any problems with the VA rotation. Nonetheless, unanticipated withdrawal of a program’s accreditation can affect a medical center’s educational and patient care missions. In this case, the VA medical center will lose one full-time advanced surgical resident in July 2003 and had to hire a physician’s assistant to provide some of the services that had been provided by the resident. Most medical centers indicated in their 2000/2001 annual reports that their GME sponsors had shared information from accreditation letters, and these annual reports provided network and headquarters officials with information about accrediting bodies’ concerns and medical centers’ corrective actions. Fifty-six medical centers stated that accrediting bodies had identified concerns about VA rotations in 145 of the more than 1,900 GME programs in which VA is involved. Concerns about 17 of these programs related to resident supervision. For example, according to one medical center’s annual report, ACGME concluded that residents required more direct supervision during certain oncology rotations. Medical centers reported that they had taken corrective action in response in all but one instance. In this case, the accrediting body expressed concern that the VA medical center had provided inadequate supervision and teaching in its physical medicine and rehabilitation rotation, but the medical center did not describe a corrective action in its annual report. We found that when OAA had direct access to ACGME accreditation letters—through early 2000—it took action to ensure that VA medical centers knew of and responded to ACGME concerns about VA rotations. 
Our review of OAA’s correspondence about accreditation issues covering a period from late 1998 through early 2000 indicated that ACGME mentioned concerns that were specific to VA rotations in its letters about 17 GME programs. In 6 of these cases, ACGME cited a concern about the adequacy of resident supervision. For example, ACGME determined that ophthalmology residents at one VA medical center had not been given clear information about lines of supervisory responsibility. On receipt of these letters, OAA contacted the participating VA medical center. Three of the medical centers submitted documents to substantiate a resolution to the problem within 2 months of hearing from OAA. In the other three cases, OAA asked VA’s chief consultant for the relevant medical specialty (such as the Chief Consultant for Ophthalmology) to assess the situation. In each case, the consultant reported to OAA that a resolution had been achieved. For example, the consultant reported that the ophthalmology program cited for unclear lines of supervision was preparing a written document to clarify supervisory responsibilities. OAA has taken steps to arrange for renewed direct access to ACGME accreditation letters. As part of that effort, VA issued a revised policy on confidential documents in July 2002 to make sure that accreditation reviews would be treated confidentially. In February 2003, VA signed a memorandum of understanding with ACGME that lays the foundation for OAA to receive copies of accreditation letters. According to this memorandum, VA must now obtain revised affiliation agreements between VA medical centers and GME sponsors that authorize ACGME to provide OAA with its accreditation letters. VA is taking steps to ensure that these revised agreements will be in place by July 2004. OAA has come to a similar agreement with the American Osteopathic Association. 
As a further step to obtain information about, and monitor responses to, GME issues—including accreditation concerns—OAA reissued a policy requiring VA medical centers to establish an affiliation partnership council and submit minutes of council meetings to OAA. The council is to include representatives of the medical center and its academic affiliate or affiliates and is to advise VA managers as they work to meet educational accreditation requirements and correct deficiencies or resolve problems. A mechanism OAA uses to obtain standardized information about residents’ views on the quality of their supervision and other aspects of their training is its Learners’ Perceptions Survey, which was first distributed in March 2001. The survey asks residents to indicate their satisfaction with the supervision they received from VA faculty by rating supervising physicians’ teaching ability, accessibility/availability, and approachability/openness, as well as overall satisfaction with VA clinical faculty. Residents are also asked to evaluate their satisfaction with the degree of supervision and degree of autonomy they experienced. In 2001 and 2002, VA headquarters could not send the survey to a random, representative sample of residents from each of its medical centers involved in GME because it did not have a complete list of its trainees. OAA was able to obtain feedback from many residents who did receive the survey and gave those results to medical centers and networks. OAA is taking steps to capture each trainee’s name and address in its automated and centrally accessible information system and expects to implement this procedure in July 2003. Once VA has a full registry of its trainees, OAA plans to send the survey to a representative sample of residents in different medical specialties that will include residents from all VA medical centers involved in GME. 
Medical centers’ annual reports can provide network and headquarters officials with additional information about concerns expressed by residents and steps taken to address those concerns. According to the annual reports for the 2000/2001 academic year, most VA medical centers used VA’s nationwide Learners’ Perceptions Survey or another mechanism, such as residents’ confidential evaluations obtained by sponsoring institutions, to obtain feedback about supervision. About half of the 109 medical centers whose annual reports indicate that they had a process for obtaining residents’ feedback said that residents had concerns about their VA rotations. None of these concerns, however, involved the adequacy of supervision.

VA headquarters, network, and medical center officials use information from VA’s programs for monitoring the quality and outcomes of patient care to identify and correct problems with resident supervision. VA’s monitoring programs include its new Patient Safety Program and NSQIP. Reviews of paid tort claims by VA’s Chief Patient Care Services Officer provide another mechanism for identifying problems with resident supervision. OAA monitors medical centers’ use of these programs through the annual reports on residency training. In their annual reports for the 2000/2001 academic year, most medical centers indicated that they monitor patient care information to determine whether resident supervision affected the quality or outcomes of patient care. The system for reporting adverse events and close calls established by VA’s Patient Safety Program has the potential to capture information about instances in which inadequate resident supervision contributed to heightened risk of adverse health care outcomes.
Based on analysis of the 17,000 reports of adverse events and close calls filed with VA’s National Center for Patient Safety as of April 2002, its director estimated that resident supervision was mentioned—in any context—in less than 0.1 percent of the incidents reported by VA medical centers and that inadequate supervision was a causal factor in very few of those cases. Analyses of postoperative outcomes recorded in the NSQIP database, including mortality and morbidity, provide VA with a way to study the effects of residents’ involvement in surgical procedures. NSQIP personnel analyze nationwide data from major surgeries, provide site-specific reports to medical centers and networks, and conduct site visits at medical centers. A NSQIP official told us that these data are routinely examined for signs that supervision of residents might be inadequate. For example, NSQIP analysts review the data to ensure that residents are not performing surgeries that are more advanced than would be appropriate for their level of training. In addition to reviewing NSQIP reports, headquarters officials who oversee VA’s surgical services monitor the frequency with which supervising physicians are in the operating or procedural suite when residents perform surgeries. Medical center and network officials have used NSQIP reports to help monitor resident supervision. For example, a team of experts selected by NSQIP visited one medical center at its request in February 2002 to help it evaluate the efficiency of its operating rooms. During its visit, the team noted inadequate supervision of surgeries performed by urology residents. The medical center corrected this problem by arranging for urologists to spend more time at the medical center and ensuring that they understood VA’s requirements for supervision. In another instance, a network GME manager observed that NSQIP data indicated that orthopedic surgery outcomes at a particular medical center were less favorable than expected. 
After a site visit, network officials concluded that the medical center could not support complex surgeries and determined that continued training of orthopedic residents at that medical center would require a decrease in the complexity of cases and greater involvement by supervising physicians. When the sponsoring institution decided that the medical center would not meet its training needs under those conditions, VA officials chose to transfer patients with complex surgical needs to VA’s tertiary hospitals in the network and shift its two VA-funded residency slots in orthopedic surgery to a different VA medical center. Researchers using the NSQIP database have studied ways in which participation in GME affects postoperative outcomes. To determine whether residency training places surgical patients at risk for worse outcomes, researchers using the NSQIP database compared risk-adjusted mortality rates in VA’s teaching and nonteaching hospitals and found that they did not differ, although the patients who underwent surgeries at teaching hospitals had a higher prevalence of risk factors, underwent more complex operations, and had longer operation times. Morbidity rates were higher in teaching than nonteaching hospitals for some surgical specialties that were studied. On the basis of their analyses, the authors suggested that differences in morbidity rates could reflect incomplete adjustment for risks, such as severity of illness, or the more complex systems of managing and coordinating care that characterize teaching hospitals, and not necessarily the involvement of residents. Another study begun in September 2001 is designed to use the NSQIP database to clarify the relationship between residents’ working conditions and surgical outcomes, with data from 90 VA hospitals and 3 nonfederal hospitals in which surgical residents are trained. Tort claims also provide information that VA uses to identify problems with resident supervision that affected patient care. 
Review of paid tort claims by VA’s Chief Patient Care Services Officer resulted in clarification of VA’s written requirements for resident supervision when patients are admitted to inpatient units. In the specific case that led to this change, a supervising physician did not come to the hospital during a weekend to see a patient who had been admitted by a resident; the patient died on Monday. At that time, the resident supervision policy of the VA hospital in which the incident occurred did not specifically require supervising physicians to come in on weekends. As a result of this case, in October 2001 an explicit reference to weekends and holidays was added to the handbook’s requirement that each new inpatient be seen by the supervising physician within 24 hours of admission. OAA monitors incidents in which resident supervision contributed to adverse events or patient risks through the annual reports it requires from medical centers. In their 2000/2001 annual reports on residency training, all but 11 of 114 medical centers indicated that they monitored patient safety events associated with residents. They used a variety of processes to collect this information, including root cause analyses and tort claim reviews, as well as additional processes such as mortality and morbidity conferences and reviews triggered by unexpected events, such as readmission within 10 days of discharge from the medical center. Annual reports indicated that reviews of at least 18 actual or potential adverse patient outcomes at a total of 14 medical centers identified resident supervision as a possible contributing factor or led medical center officials to strengthen supervision to minimize the chance of future problems. For example, one medical center established a requirement for greater involvement by supervising physicians before a resident initiates chemotherapy orders. Medical centers described taking corrective actions in response to these reviews. 
VA cannot ensure that the residents who provide care in its facilities receive adequate supervision because its current procedures for monitoring supervision are insufficient. To oversee the supervision of its residents, VA needs various types of information, including information regarding supervising physicians’ adherence to VA’s requirements for resident supervision, accrediting bodies’ and residents’ concerns about supervision, and whether the quality or outcomes of patient care indicate problems with supervision. Systematic monitoring of each of these types of information would help ensure that problems with resident supervision are detected and corrected by the various officials of VA medical centers and affiliated institutions who have responsibilities for residents’ activities. Although VA issued a handbook that established specific standards for resident supervision, VA does not know what its medical centers’ supervision requirements are and does not ensure that its national requirements are adopted at each medical center where residents train. Moreover, VA does not know whether the supervision its residents receive adheres to its national requirements. VA’s current plans for external peer review of documentation have the potential to enhance its oversight capability, but these plans have not been finalized. For example, as of May 2003, VA had not decided whether external reviewers would examine documentation of supervision for VA’s new outpatients, who make up a significant and growing number of VA’s patients. Including these new outpatients in the external review could help ensure adequate supervision of residents during a patient’s first visit to VA. To further improve its oversight of resident supervision, VA will need to complete its initiatives to obtain timely access to evaluations by accrediting bodies and residents. VA will also need to continue to take advantage of its programs for monitoring the quality and outcomes of patient care.
VA officials have generally acted to improve supervision when faced with evidence of problems, and better access to information will enhance their ability to monitor and improve resident supervision. By strengthening its oversight capabilities, VA could help promote both the quality of the health care in its facilities and the education its residents receive. Because VA is the largest provider of residency training sites in the United States, its actions to enhance the quality of resident supervision and its oversight will have benefits beyond the VA health care system.

We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take steps to improve VA’s oversight of the supervision of residents by ensuring that all VA medical centers that provide GME adopt and adhere to the requirements for resident supervision established in VA’s handbook and ensuring that external peer review of documentation of resident supervision includes examination of records from VA’s new outpatients.

In written comments on a draft of this report, VA agreed with our findings and our recommendations. VA said our report described many steps it has already taken that would help assure systematic implementation of its national resident supervision policies and adequate headquarters oversight of resident supervision. In concurring with our recommendation to ensure that all VA medical centers that provide GME adopt and adhere to requirements for resident supervision established in its handbook, VA indicated its intention to monitor compliance with policy requirements and highlight those requirements in a memorandum to network officials. In concurring with our recommendation to ensure that external peer review of documentation of resident supervision includes examination of records from its new outpatients, VA indicated that it would develop a strategy to identify new outpatients who were seen by a resident.
It stated that it expects to draw its first sample of records from outpatients, including new outpatients, in the second quarter of fiscal year 2004. VA also reported that it completed a revision of its centralized patient information database. This revision was necessary to allow selection of an appropriate sample of inpatient records for external peer review. VA’s comments are in appendix II. We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others who are interested upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7101. An additional contact and the names of other staff members who made contributions to this report are listed in appendix III. To do our work, we examined oversight of resident supervision at each of the Department of Veterans Affairs (VA) Veterans Health Administration’s three organization levels—headquarters, networks, and medical centers. Our work covered VA’s oversight of resident supervision and did not include an evaluation of the quality of care provided by residents or the quality of the supervision provided to residents. To assess oversight by VA’s headquarters officials, we reviewed documents and interviewed officials from VA’s Office of Academic Affiliations (OAA), Office of Patient Care Services, National Center for Patient Safety, Office of Quality and Performance, and Office of Information. We analyzed VA’s plans to have external peer reviewers examine documentation of supervision and compared the instructions the reviewers are to be given with VA’s requirements for supervision. To assess oversight of resident supervision by network officials, we analyzed each network’s annual report to OAA on resident supervision covering the 2000/2001 academic year. 
These were the most recent annual reports available at the time. We did not assess the accuracy of information provided in these reports. We also interviewed network GME managers (known as network academic affiliations officers) from a sample of 11 of VA’s 21 regional networks of health care facilities and analyzed documents they provided (see table 4). We used a stratified random sampling strategy to ensure variation in the number of VA-funded residency slots among the selected networks. Network 19 was included in our sample prior to randomization because it is the only network that did not summarize the information in its medical centers’ reports. Another network was excluded from our sample because it had been formed by the merger of two former networks in January 2002. Our results from these 11 networks cannot be generalized to other networks. To assess oversight of resident supervision by medical center officials, we reviewed and analyzed 2000/2001 academic year annual reports to OAA on resident supervision. OAA provided us with 114 annual reports from the approximately 130 VA medical centers that were involved in GME during the 2000/2001 academic year after it removed identifying information, such as the names of medical centers, affiliates, and specific individuals. These were all the medical center annual reports for the 2000/2001 academic year that OAA had received as of June 18, 2002. We did not assess the accuracy of information in the annual reports. We also interviewed GME managers at 11 VA medical centers (see table 5) and analyzed their 2000/2001 academic year annual reports on resident supervision (without redaction) and other documents. We used a stratified random sampling strategy to ensure that the medical centers we selected varied in the number of VA-funded residency slots they were allocated for the 2001/2002 academic year.
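The stratified selection described above can be illustrated with a short sketch. This is a hypothetical reconstruction, not GAO's actual procedure: the slot counts, strata boundaries, and per-stratum draw sizes below are invented for the example; only the general approach (group units by VA-funded residency slots, draw randomly within each group, honor pre-selected and excluded units) follows the text.

```python
import random

# Invented slot counts for 21 notional networks (seeded for repeatability).
random.seed(1)
networks = {f"Network {n}": random.randint(50, 900) for n in range(1, 22)}

def stratify(networks, boundaries=(200, 500)):
    """Group networks into small/medium/large strata by VA-funded slot count.
    The boundaries are illustrative, not taken from the report."""
    strata = {"small": [], "medium": [], "large": []}
    for name, slots in networks.items():
        if slots < boundaries[0]:
            strata["small"].append(name)
        elif slots < boundaries[1]:
            strata["medium"].append(name)
        else:
            strata["large"].append(name)
    return strata

def draw_sample(strata, per_stratum, preselected=(), excluded=()):
    """Draw randomly within each stratum, keeping pre-selected units
    (like Network 19) and skipping excluded ones (like the merged network)."""
    sample = list(preselected)
    for names in strata.values():
        eligible = [n for n in names if n not in sample and n not in excluded]
        sample.extend(random.sample(eligible, min(per_stratum, len(eligible))))
    return sample

strata = stratify(networks)
sample = draw_sample(strata, per_stratum=3,
                     preselected=["Network 19"], excluded=["Network 21"])
print(sample)
```

Drawing within strata rather than from the pooled list is what guarantees the sample varies in residency-slot size even when the overall draw is small.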
We also ensured that our sample included one medical center from each of the networks we had sampled and that the medical centers differed in the number of medical specialties in which their residents train. We did not review a systematically selected sample of medical centers’ resident supervision policies. Our results from these 11 medical centers cannot be generalized to other medical centers. We also reviewed documentary and testimonial evidence from four medical centers that participate in internal medicine or general surgery GME programs that had received adverse accreditation decisions as of May 2002. One of these—the Fresno VA Medical Center—was part of our sample of medical centers. Of the others, we visited the medical centers in West Haven, Connecticut, and Gainesville, Florida, and interviewed officials of the medical center in Albuquerque, New Mexico. We also spoke to officials of the institutions that sponsor these three GME programs. To obtain additional information about GME and VA’s residency training, we analyzed accreditation requirements of the Accreditation Council for Graduate Medical Education, American Osteopathic Association, and Joint Commission on Accreditation of Healthcare Organizations and interviewed officials of those bodies. We also interviewed representatives of professional associations that are involved in GME, including the American Board of Medical Specialties, American College of Surgeons, American Hospital Association, American Medical Association, American Medical Student Association, Association of American Medical Colleges and its Council of Deans, Association of Professors of Medicine, Committee of Interns and Residents, and Council on Graduate Medical Education, and we reviewed relevant documents issued by these groups.
We interviewed representatives of physicians who teach internal medicine, ophthalmology, psychiatry, general surgery, orthopedic surgery, and urology—specialties for which a large number of VA medical centers provide residency slots. We also interviewed representatives of veterans’ service organizations. We reviewed published literature regarding the quality of care provided by residents. We conducted our work from September 2001 through June 2003 in accordance with generally accepted government auditing standards. In addition to the person named above, key contributors to this report were Kristen J. Anderson, William D. Hadley, Martha Fisher, Krister Friday, and Donald Morrison.

The Department of Veterans Affairs (VA) provides graduate medical education (GME) to as many as one-third of U.S. resident physicians, but oversight responsibilities spread across VA's organizational components and multiple affiliated hospitals and medical schools could allow supervision problems to go undetected or uncorrected. GAO was asked to examine VA's procedures for (1) monitoring VA medical centers' adherence to VA's requirements for resident supervision, (2) using evaluations of supervision by GME accrediting bodies and residents, and (3) using information about resident supervision drawn from VA's programs for monitoring the quality and outcomes of patient care. VA cannot assure that the resident physicians who provide care in its facilities receive adequate supervision because its procedures for monitoring supervision are insufficient. VA does not know whether medical centers have adopted VA's national requirements for supervision of residents' diagnosis, treatment, or discharge of patients. VA officials require a review of only one specific requirement that is intended to ensure availability of supervision when a supervising physician does not need to be in the operating or procedural suite while a resident performs a diagnostic or therapeutic procedure.
Four of 11 network officials we interviewed had not conducted this review, and the requirement at one medical center in one of these four networks was less stringent than VA's national requirement. To obtain more complete information about adherence to its national supervision requirements, VA plans to have external peer reviewers examine documentation of supervision in patients' medical records. VA's plans for this review have not been finalized. For example, as of May 2003, VA had not decided whether reviewers would examine records from VA's new outpatients. Without records from new patients, reviewers will not be able to assess documentation of residents' supervision during a veteran's first outpatient visit. To improve its oversight, VA is making efforts to obtain information from accrediting bodies and residents about the quality of resident supervision. For example, VA has taken steps to obtain direct access to letters from accrediting bodies that contain evaluations of the GME programs in which its medical centers participate. To solicit feedback from residents, VA implemented a national survey, but was unable to send this survey to a representative sample of residents from each VA medical center because it does not have a complete central list of its residents. VA is taking action to obtain this information. In addition, VA uses information from its broader programs for monitoring the quality and outcomes of patient care, such as its patient safety and surgical quality improvement programs, to identify and correct problems with resident supervision. Information from these programs has served as the basis for corrective actions by VA officials.
Poorly defined requirements and processes for extending injured and ill reserve component soldiers on active duty have caused soldiers to be inappropriately dropped from their active duty orders. For some, this has led to significant gaps in pay and health insurance, which has created financial hardships for these soldiers and their families. Based on our analysis of Army Manpower data during the period from February 1, 2004, through April 7, 2004, almost 34 percent of the 867 soldiers who applied to be extended on active duty orders fell off their orders before their extension requests were granted. This placed them at risk of being removed from active duty status in the automated systems that control pay and access to benefits, including medical care and access to the Commissary and Post Exchange—which allows soldiers and their families to purchase groceries and other goods at a discount. While the Army Manpower Office began tracking the number of soldiers who have applied for ADME and fell off their active duty orders during that process, the Army does not keep track of the number of soldiers who have lost pay or other benefits as a result. Although, logically, a soldier who is not on active duty orders would also not be paid, as discussed later, many of the Army installations we visited had developed ad hoc procedures to keep these soldiers in pay status even though they were not on official, approved orders. However, many of the ad hoc procedures used to keep soldiers in pay status circumvented key internal controls in the Army payroll system—exposing the Army to the risk of significant overpayments—did not provide for medical and other benefits for the soldiers’ dependents, and sometimes caused additional financial problems for the soldiers.
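As a rough arithmetic check of the figures above (the report gives only the percentage, not the count, so this is an approximation):

```python
# "Almost 34 percent" of the 867 soldiers who applied for extensions fell
# off their orders; the exact count is not stated in the report, so 0.34
# is treated here as an upper bound on the share.
applicants = 867
share = 0.34

affected = round(applicants * share)
print(affected)  # about 295 soldiers at most
```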
Because the Army did not maintain any centralized data on the number, location, and disposition of mobilized reserve component soldiers who had requested ADME orders but had not yet received them, we were unable to perform statistical sampling techniques that would allow us to estimate the number of soldiers affected. However, through our case study work, we have documented the experiences of 10 soldiers who were mobilized to active duty for military operations in Afghanistan and Iraq. Figure 1 provides an overview of the pay problems experienced by the 10 case study soldiers we interviewed and the resulting impact the disruptions in pay and benefits had on the soldiers and their families. According to the soldiers we interviewed, many were living from paycheck to paycheck; therefore, missing pay for even one pay period created a financial hardship for these soldiers and their families. While the Army ultimately addressed these soldiers’ problems, absent our efforts and consistent pressure from the requesters of the report, it would likely have taken longer for the Army to address these soldiers’ problems. Further details on these case studies are included in our related report. The Army has not provided (1) clear and comprehensive guidance needed to develop effective processes to manage and treat injured and ill reserve component soldiers, (2) an effective means of tracking the location and disposition of injured and ill soldiers, and (3) adequate training and education programs for Army officials and injured and ill soldiers trying to navigate their way through the ADME process. The Army’s implementing guidance related to the extension of active duty orders is sometimes unclear or contradictory—creating confusion and contributing to delays in processing ADME orders. 
For example, the guidance states that the Army Manpower Office is responsible for approving extensions beyond 179 days but does not say which organization is responsible for approving extensions of less than 179 days. In practice, we found that all applications were submitted to Army Manpower for approval regardless of the number of days requested. At times, this created a significant backlog at the Army Manpower Office and resulted in processing delays. In addition, the Army’s implementing guidance does not clearly define organizational responsibilities, how soldiers will be identified as needing an extension, how ADME orders are to be distributed, and to whom they are to be distributed. Finally, according to the guidance, the personnel costs associated with soldiers on ADME orders should be tracked as a base operating cost. However, we believe that the cost of treating injured and ill soldiers who fought in operations supporting the Global War on Terrorism—including their pay and benefits—should be accounted for as part of the contingency operation for which the soldier was originally mobilized. This would more accurately allocate the total cost of these wartime operations. As we have reported in the past, the Army’s visibility over mobilized reserve component soldiers is jeopardized by stovepiped systems serving active and reserve component personnel. Therefore, the Army has had difficulty determining which soldiers are mobilized and/or deployed, where they are physically located, and when their active duty orders expire. In the absence of an integrated personnel system that provides visibility when a soldier is transferred from one location to another, the Army has general personnel regulations that are intended to provide some limited visibility over the movement of soldiers.
However, when a soldier is on ADME orders, the Army does not follow these or any other written procedures to document the transfer of soldiers from one location to another—thereby losing even the limited visibility that might otherwise be achievable. Further, although the Army has a medical tracking system, the Medical Operational Data System (MODS), which could be used to track the whereabouts and status of injured and ill reserve component soldiers, we found that, for the most part, the installations we visited did not use or update that system. Instead, each of the installations we visited had developed its own stovepiped tracking system and databases. Although MODS, if used and updated appropriately, could provide some visibility over injured and ill active and reserve component soldiers—including soldiers who are on ADME orders—8 of the 10 installations we visited did not routinely use MODS. MODS is an Army Medical Department (AMEDD) system that consolidates data from over 15 different major Army and DOD databases. The information contained in MODS is accessible at all Army Military Treatment Facilities (MTF) and is intended to help Army medical personnel administer patient care. For example, as soldiers are approved for ADME orders, the Army Manpower Office enters data indicating where the soldier is to receive treatment, to which unit he or she will be attached, and when the soldier’s ADME orders will expire. However, as discussed previously, the Army has not established written standard operating procedures on the transfer and tracking of soldiers on ADME orders. Therefore, the installations we visited were not routinely looking to MODS to determine which soldiers were attached to them through ADME orders. When officials at one installation did access MODS, the data in MODS indicated that the installation had at least 105 soldiers on ADME orders. However, installation officials were only aware of 55 soldiers who were on ADME orders.
According to installation officials, the missing soldiers never reported for duty, and the installation had no idea that it was responsible for these soldiers. The Army has not adequately trained or educated Army staff or reserve component soldiers about ADME. The Army personnel responsible for preparing and processing ADME applications at the 10 installations we visited received no formal training on the ADME process. Instead, these officials were expected to understand their responsibilities through on-the-job training. However, the high turnover caused by the rotational nature of military personnel, and especially reserve component personnel who make up much of the garrison support units that are responsible for processing ADME applications, limits the effectiveness of on-the-job training. Once these soldiers have learned the intricacies of the ADME process, their mobilization is over and their replacements must go through the same on-the-job learning process. For example, 9 of the 10 medical hold units at the locations we visited were staffed with reserve component soldiers. In the absence of education programs based on sound policy and clear guidance, soldiers have established their own informal methods—using Internet chat rooms and word-of-mouth—to educate one another on the ADME process. Unfortunately, the information they receive from one another is often inaccurate and, instead of being helpful, further complicates the process. For example, one soldier was told by his unit commander that he did not need to report to his new medical hold unit after receiving his ADME order. While this may have been welcome news at the time, the soldier could have been considered absent without leave. Instead, the soldier decided to follow his ADME order and reported to his assigned case manager at the installation.
The Army lacks customer-friendly processes for injured and ill soldiers who are trying to extend their active duty orders so that they can continue to receive medical care. Specifically, the Army lacks clear criteria for approving ADME orders, which may require applicants to resubmit paperwork multiple times before their application is approved. This, combined with inadequate infrastructure for efficiently addressing the soldiers’ needs, has resulted in significant processing delays. Finally, while most of the installations we reviewed took extraordinary steps to keep soldiers in pay status, these steps often involved overriding required internal controls in one or more systems. In some cases, the stopgap measures ultimately caused additional financial hardships for soldiers or put the Army at risk of significantly overpaying soldiers in the long run. Although the Army Manpower Office issued procedural guidance in July of 2000 for ADME and the Army Office of the Surgeon General issued a field operating guide in early 2003, neither provides adequate criteria for what constitutes a complete ADME application package. The procedural guidance lists the documents that must be submitted before an ADME application package is approved; however, the criteria for what information is to be included in each document are not specified. In the absence of clear criteria, officials at both Army Manpower and the installations we visited blamed each other for the breakdowns and delays in the process. For example, according to installation officials, the Army Manpower Office will not accept ADME requests that contain documentation older than 30 days. However, because it often took Army Manpower more than 30 days to process ADME applications, the documentation for some applications expired before approving officials had the opportunity to review it. Consequently, applications were rejected and soldiers had to start the process all over again. 
Although officials at the Army Manpower Office denied these assertions, the office did not have policies or procedures in place to ensure that installations were notified regarding the status of soldiers’ applications, nor did it have clear criteria on the sufficiency of medical documentation. For example, one soldier we interviewed at Fort Lewis had to resubmit his ADME application three times over a 3-month period—each time not knowing whether the package was received and contained the appropriate information. According to the soldier, weeks would go by before someone from Fort Lewis was able to reach the Army Manpower Office to determine the status of his application. He was told each time that he needed more current or more detailed medical information. Consequently, it took over 3 months to process his orders, during which time he fell off his active duty orders and missed three pay periods totaling nearly $4,000. The Army has not consistently provided the infrastructure needed—including convenient support services—to accommodate the needs of soldiers trying to navigate their way through the ADME process. This, combined with the lack of clear guidance discussed previously and the high turnover of the personnel who are responsible for helping injured and ill soldiers through the ADME process, has resulted in injured and ill soldiers carrying a disproportionate share of the burden for ensuring that they do not fall off their active duty orders. This has left many soldiers disgruntled and feeling that they have had to fend for themselves. For example, one injured soldier we interviewed whose original mobilization orders expired in January 2003 recalls making over 40 trips to various sites at Fort Bragg during the month of January to complete his ADME application. Over time, the Army has begun to make some progress in addressing its infrastructure issues.
At the time of our visits, we found that some installations had added new living space or upgraded existing space to house returning soldiers. For example, Walter Reed Army Hospital had contracted for additional quarters off base for ambulatory soldiers to alleviate the overcrowding pressure, and Fort Lewis had upgraded its barracks to include, among other things, wheelchair-accessible quarters. Also, installations have been adding case managers to handle their workload. Case managers are responsible for both active and reserve component soldiers, including injured and ill active duty soldiers, reserve component soldiers still on mobilization orders, reserve component soldiers on ADME orders, and reserve component soldiers who have inappropriately fallen off active duty orders. As of June 2004, according to the Army, it had 105 case managers, and maintained a soldier-to-case-manager ratio of about 50-to-1 at 8 of the 10 locations we visited while conducting fieldwork. Finally, to the extent possible, several of the sites we visited co-located administrative functions that soldiers would need—including command and control functions, case management, ADME application packet preparation, and medical treatment. They also made sure that Army administrative staff, familiar with the paperwork requirements, filled out all the required paperwork for the soldier. Centralizing document preparation reduces the risk of miscommunication between the soldier and unit officials, case managers, and medical staff. It also seemed to reduce the frustration that soldiers would feel when trying to prepare unfamiliar documents in an unfamiliar environment.
In fact, 7 of the 10 Army installations we visited had created their own ad hoc procedures or workarounds to (1) keep soldiers in pay status and (2) provide soldiers with access to medical care when soldiers fell off active duty orders. In many cases, the installations we visited made adjustments to a soldier’s pay records without valid orders. While effectively keeping a soldier in pay status, this workaround circumvented key internal controls—putting the Army at risk of making improper and potentially fraudulent payments. In addition, because these soldiers are not on official active duty orders, they are not eligible to receive other benefits to which they are entitled, including health coverage for their families. One installation we visited issued official orders locally to keep soldiers in pay status. However, in doing so, it created a series of accounting problems that resulted in additional pay problems for soldiers when the Army attempted to straighten out its accounting. Further details on these ad hoc procedures are included in our related report. Manual processes and nonintegrated order-writing, pay, personnel, and medical eligibility systems also contribute to processing delays that affect the Army’s ability to update these systems and ensure that soldiers on ADME orders are paid in an accurate and timely manner. Overall, we found that the current stovepiped, nonintegrated systems were labor-intensive and required extensive error-prone manual data entry and reentry. Therefore, once Army Manpower approves a soldier’s ADME application and the ADME order is issued, the ADME order does not automatically update the systems that control a soldier’s access to pay and medical benefits.
In addition, as discussed previously, the Army’s ADME guidance does not address the distribution of ADME orders or clearly define who is responsible for ensuring that the appropriate pay, personnel, and medical eligibility systems are updated, so soldiers and their families receive the pay and medical benefits to which they are entitled. As a result, ADME orders were sent to multiple individuals at multiple locations before finally reaching individuals who have the access and authority to update the pay and benefits systems, which further delays processing. As shown in figure 2, once Army Manpower officials approve a soldier’s ADME application, they e-mail a memorandum to HRC-St. Louis authorizing the ADME order. The Army Personnel Center Orders and Resource System (AORS), which is used to write the order, does not directly interface with or automatically update the personnel, pay, or medical eligibility systems. Instead, once HRC-St. Louis cuts the ADME order, it e-mails a copy of the order to nine different individuals—four at the Army Manpower Office, four at the National Guard Bureau (NGB) headquarters, and one at HRC in Alexandria, Virginia—none of whom is responsible for updating the pay, personnel, or medical eligibility systems. As shown in figure 2, Army Manpower, upon receipt of ADME orders, e-mails copies to the soldier, the medical hold unit to which the soldier is attached, and the RMC. Again, none of these organizations has access to the pay, personnel, or medical eligibility systems. Finally, NGB officials e-mail copies of National Guard ADME orders to one of 54 state-level Army National Guard personnel offices and HRC-Alexandria e-mails copies of Reserve ADME orders to the Army Reserve’s regional personnel offices. HRC-Alexandria also sends all Reserve orders to the medical hold unit at Walter Reed.
When asked, the representative at HRC-Alexandria who forwards the orders did not know why orders were sent to Walter Reed when many of the soldiers on ADME orders were not attached or going to be attached to Walter Reed. The medical hold unit at Walter Reed that received the orders did not know why they were receiving them and told us that they filed them. At this point in the process, of the seven organizations that receive copies of ADME orders, only two—the Army National Guard personnel office and the Army Reserve personnel office—use the information to initiate a pay or benefit-related transaction. Specifically, the Guard and Reserve personnel offices initiate a transaction that should ultimately update the Army’s medical eligibility system, the Defense Enrollment Eligibility Reporting System (DEERS). To do this, the Army National Guard personnel office manually inputs a new active duty order end date into the Army National Guard personnel system, the Standard Installation Division Personnel Reporting System (SIDPERS). In turn, the data from SIDPERS are batch processed into the Total Army Personnel Database-Guard (TAPDB-G), and then batch processed to the Reserve Components Common Personnel Data System (RCCPDS). The data from RCCPDS are then batch processed into DEERS—updating the soldier’s active duty status and active duty order end date. Once the new date is posted to DEERS, soldiers and family members can get a new ID card at any DOD ID Card issuance facility. The Army Reserve finance office initiates a similar transaction by entering a new active duty order end date into the Regional Level Application System (RLAS), which updates Total Army Personnel Database-Reserve (TAPDB-R), RCCPDS, and DEERS through the same batch process used by the Guard. As discussed previously, the Army does not have an integrated pay and personnel system.
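The batch update chain described above can be sketched as a sequence of hand-offs between stovepiped systems: a date entered manually into SIDPERS reaches DEERS only after three separate batch cycles have each run. This is a hypothetical illustration of that flow, not the systems' actual record layouts; the soldier ID, date, and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class System:
    """A minimal stand-in for one stovepiped personnel database."""
    name: str
    records: dict = field(default_factory=dict)

    def post(self, soldier_id, end_date):
        self.records[soldier_id] = end_date

def batch_forward(source, target):
    """One batch cycle: copy every record from one system to the next downstream."""
    for soldier_id, end_date in source.records.items():
        target.post(soldier_id, end_date)

sidpers = System("SIDPERS")
tapdb_g = System("TAPDB-G")
rccpds = System("RCCPDS")
deers = System("DEERS")

# Manual entry of the new active duty order end date at the Guard personnel office.
sidpers.post("soldier-123", "2004-10-27")

# Until every downstream batch cycle has run, DEERS still shows nothing.
for upstream, downstream in [(sidpers, tapdb_g), (tapdb_g, rccpds), (rccpds, deers)]:
    batch_forward(upstream, downstream)

print(deers.records["soldier-123"])  # "2004-10-27" only after all three cycles
```

The sketch makes the report's point concrete: eligibility in DEERS lags the manual SIDPERS entry by the combined latency of three batch jobs, and a failure in any one hand-off leaves the soldier's status stale downstream.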
Therefore, information entered into the personnel system (TAPDB) is not automatically updated in the Army’s pay system, the Defense Joint Military Pay System-Reserve Component (DJMS-RC). Instead, as shown in figure 2, after receiving a copy of the ADME orders from Army Manpower, the medical hold unit and/or the soldier provide a hard copy of the orders to their local finance office. Using the Active Army pay input system, the Defense Military Pay Office system (DMO), installation finance office personnel update DJMS-RC. Not only is this process vulnerable to input errors, but it is time-consuming and further delays the pay and benefits to which the soldier is entitled. The Army’s new MRP program, which went into effect May 1, 2004, and takes the place of ADME for soldiers returning from operations in support of the Global War on Terrorism, has resolved many of the front-end processing delays experienced by soldiers applying for ADME by simplifying the application process. In addition, unlike ADME, the personnel costs associated with soldiers on MRP orders are appropriately linked to the contingency operation for which they served, and, therefore, will more appropriately capture the costs related to the Global War on Terrorism. While the front-end approval process appears to be operating more efficiently than the ADME approval process, the first wave of 179-day MRP orders did not expire until October 27, 2004—after we completed our work—so we were unable to assess how effectively the Army identified soldiers who required an additional 179 days of MRP and whether those soldiers experienced pay problems or difficulty obtaining new MRP orders. In addition, the Army has no way of knowing whether all soldiers who should be on MRP orders are actually applying and getting into the system.
Further, MRP has not resolved the underlying management control problems that plagued ADME, and, in some respects, has worsened problems associated with the Army’s lack of visibility over injured soldiers. Finally, because the MRP program is designed such that soldiers may be treated and released from active duty before their MRP orders expire, weaknesses in the Army’s processes for updating its pay system to reflect an early release date have resulted in overpayments to soldiers. According to Army officials at each of the 10 installations we visited, unlike ADME, they have not experienced problems or delays in obtaining MRP orders for soldiers in their units. In fact, some installation officials have said that the process now takes 1 or 2 days instead of 1 or 2 months. Because there is no mechanism in place to track application processing times, we have no way of substantiating these assertions. However, we are not aware of any soldier complaints regarding the MRP process, whereas such complaints were commonplace with ADME. The MRP application and approval process, which rests with HRC-Alexandria instead of the Army Manpower Office, is a simplified version of the ADME process. As with ADME orders, the soldier must request that this process be initiated and voluntarily request an extension of active duty orders. Both the MRP and ADME request packets include the soldier’s request form, a physician’s statement, and a copy of the soldier’s original mobilization orders. However, with MRP, the physician’s statement need only state that the soldier needs to be treated for a service-connected injury or illness and does not require detailed information about the diagnosis, prognosis, and medical treatment plan as it does with ADME. As discussed previously, assembling this documentation was one of the primary reasons ADME orders were not processed in a timely manner.
In addition, because all MRP orders are issued for 179 days, MRP has alleviated some of the workload on officials who were processing ADME orders and who were helping soldiers prepare application packets by eliminating the need for a soldier to reapply every 30, 60, or 90 days as was the case with ADME. While MRP has expedited the application process, MRP guidance, like that of ADME, does not address how soldiers who require MRP will be identified in a timely manner, how soldiers requiring an additional 179 days of MRP will be identified in a timely manner, or how soldiers and Army staff will be trained and educated about the new process. Further, because the Army does not maintain reliable data on the current status and disposition of injured soldiers, we could not test or determine whether all soldiers who should be on MRP orders are actually applying and getting into the system. In addition, because MRP authorizes 179 days of pay and benefits regardless of the severity of the injury, the Army faces a new challenge—to ensure that soldiers are promptly released from active duty or placed in a medical evaluation board process upon completion of medical care or treatment in order to avoid needlessly retaining and paying these soldiers for the full 179 days. However, MRP guidance does not address how the Army will provide reasonable assurance that upon completion of medical care or treatment soldiers are promptly released from active duty or placed in a medical evaluation board process. MRP has also contributed to the Army’s difficulty maintaining visibility over injured reserve component soldiers. Although the Army’s MRP implementation guidance requires that installations provide a weekly report to HRC-Alexandria that includes the name, rank, and component of each soldier currently on MRP orders, according to HRC officials, they are not consistently receiving these reports.
Consequently, the Army cannot say with certainty how many soldiers are currently on MRP orders, how many have been returned to active duty, or how many soldiers have been released from active duty before their 179-day MRP orders expired. As discussed previously, if the Army used and appropriately updated the agency’s medical tracking system (MODS), the system could provide some visibility over injured and ill active and reserve component soldiers—including soldiers on ADME or MRP orders. However, the Army MRP implementation guidance is silent on the use of MODS and does not define responsibilities for updating the system. According to officials at HRC-Alexandria, they do not update MODS or any other database when they issue MRP orders. They also acknowledged that the 1,800 soldiers reflected as being on MRP orders in MODS, as of September 2004, was probably understated given that, between May 2004 and September 2004, HRC-Alexandria processed approximately 3,300 MRP orders. Further, as was the case with ADME, 8 of the 10 installations we visited did not routinely use or update MODS but instead maintained their own local tracking systems to monitor soldiers on MRP orders. Not surprisingly, the Army does not know how many soldiers have been released from active duty before their 179-day MRP orders expired. This is important because our previous work has shown that weaknesses in the Army’s process for releasing soldiers from active duty and stopping the related pay before their orders have expired—in this case before their 179 days are up—often resulted in overpayments to soldiers. According to HRC-Alexandria officials, as of October 2004, a total of 51 soldiers had been released from active duty before their 179-day MRP orders expired.
At the same time, Fort Knox, one of the few installations that tracked these data, reported it had released 81 soldiers from active duty who were previously on MRP orders—none of whom were included in the list of 51 soldiers provided by HRC-Alexandria. Concerned that some of these soldiers may have inappropriately continued to receive pay after they were released from active duty, we verified each soldier’s pay status in DJMS-RC and found that 15 soldiers were improperly paid past their release date—totaling approximately $62,000. A complete and lasting solution to the pay problems and overall poor treatment of injured soldiers that we identified will require that the Army address the underlying problems associated with its overall control environment for managing and treating reserve component soldiers with service-connected injuries or illnesses and deficiencies related to its automated systems. Accordingly, in our related report (GAO-05-125) we made 20 recommendations to the Secretary of the Army for immediate action to address weaknesses we identified, including (1) establishing comprehensive policies and procedures, (2) providing adequate infrastructure and resources, and (3) making process improvements to compensate for inadequate, stovepiped systems. We also made 2 recommendations, as part of longer-term system improvement initiatives, to integrate the Army’s order-writing, pay, personnel, and medical eligibility systems. In its written response to our recommendations, DOD briefly described its completed, ongoing, and planned actions for each of our 22 recommendations. The recent mobilization and deployment of Army National Guard and Reserve soldiers in connection with the Global War on Terrorism is the largest activation of reserve component troops since World War II. As such, in recent years, the Army’s ability to take care of these soldiers when they are injured or ill has not been tested to the degree that it is being tested now.
Unfortunately, the Army was not prepared for this challenge and the brave soldiers fighting to defend our nation have paid the price. The personal toll this has had on these soldiers and their families cannot be readily measured. But clearly, the hardships they have endured are unacceptable given the substantial sacrifices they have made and the injuries they have sustained. While the Army’s new streamlined medical retention application process has improved the front-end approval process, it also has many of the same limitations as ADME. To its credit, in response to the recommendations included in our companion report, DOD has outlined some actions already taken, others that are underway, and further planned actions to address the weaknesses we identified. For further information about this testimony please contact Gregory D. Kutz at (202) 512-9095 or [email protected]. Individuals making key contributions to this testimony were Gary Bianchi, Francine DelVecchio, Carmen Harris, Diane Handley, Jamie Haynes, Kristen Plungas, John Ryan, Maria Storts, and Truc Vo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In light of the recent mobilizations associated with the Global War on Terrorism, GAO was asked to determine if the Army's overall environment and controls provided reasonable assurance that soldiers who were injured or became ill in the line of duty were receiving the pay and other benefits to which they were entitled in an accurate and timely manner. This testimony outlines pay deficiencies in the key areas of (1) overall environment and management controls, (2) processes, and (3) systems.
It also focuses on whether recent actions the Army has taken to address these problems will offer effective and lasting solutions. Injured and ill reserve component soldiers--who are entitled to extend their active duty service to receive medical treatment--have been inappropriately removed from active duty status in the automated systems that control pay and access to medical care. The Army acknowledges the problem but does not know how many injured soldiers have been affected by it. GAO identified 38 reserve component soldiers who said they had experienced problems with the active duty medical extension order process and subsequently fell off their active duty orders. Of those, 24 experienced gaps in their pay and benefits due to delays in processing extended active duty orders. Many of the case study soldiers incurred severe, permanent injuries fighting for their country, including loss of limb, hearing loss, and back injuries. Nonetheless, these soldiers had to navigate the convoluted and poorly defined process for extending active duty service. The Army's process for extending active duty orders for injured soldiers lacks an adequate control environment and management controls--including (1) clear and comprehensive guidance, (2) a system to provide visibility over injured soldiers, and (3) adequate training and education programs. The Army has also not established user-friendly processes--including clear approval criteria and adequate infrastructure and support services. Many Army locations have used ad hoc procedures to keep soldiers in pay status; however, these procedures often circumvent key internal controls and put the Army at risk of making improper and potentially fraudulent payments. Finally, the Army's nonintegrated systems, which require extensive error-prone manual data entry, further delay access to pay and benefits.
The Army recently implemented the Medical Retention Processing (MRP) program, which takes the place of the previously existing process in most cases. MRP, which authorizes an automatic 179 days of pay and benefits, may resolve the timeliness of the front-end approval process. However, MRP has some of the same issues and may also result in overpayments to soldiers who are released early from their MRP orders. Out of 132 soldiers the Army identified as being released from active duty, 15 improperly received pay past their release date--totaling approximately $62,000.
The DI and SSI programs are the two largest federal programs providing cash assistance to people with disabilities. Established in 1956, DI is an insurance program that provides monthly cash benefits to workers who are unable to work because of severe long-term disability. Workers who have worked long enough and recently enough are insured for coverage under the DI program. In addition to cash assistance, DI beneficiaries receive Medicare coverage after they have received cash benefits for 24 months. In 2002, SSA paid about $60 billion to 5.5 million disabled workers, with average monthly cash benefits amounting to $834 per person. DI cash benefits are paid from the Federal Disability Insurance Trust Fund. SSI, created in 1972, is a means-tested income assistance program that provides a financial safety net for disabled, blind, or aged individuals who have low income and limited resources. Unlike the DI program, SSI has no prior work requirement and no waiting period for cash or medical benefits. Eligible SSI applicants generally begin receiving cash benefits immediately upon entitlement and, in most cases, receipt of cash benefits makes them eligible for Medicaid benefits. In 2002, about 5.5 million people with disabilities received SSI benefits. In the same year, federal SSI cash benefits paid to SSI beneficiaries with disabilities equaled $26 billion, and average monthly federal SSI cash benefits amounted to about $398 per person. SSI cash benefits are paid from general tax revenues. The DI and SSI programs use the same statutory definition of disability. To meet the definition of disability under these programs, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Individuals are considered to be engaged in SGA if they have countable earnings above a certain dollar level. 
Moreover, for a person to be determined to be disabled, the impairment must be of such severity that the person not only is unable to do his or her previous work but, considering his or her age, education, and work experience, is unable to do any other kind of substantial work that exists in the national economy. SSA contracts with state disability determination service (DDS) agencies to determine whether applicants are disabled. To help ensure that only eligible beneficiaries remain on the rolls, SSA is required by law to conduct continuing disability reviews (CDRs) for all DI beneficiaries and some SSI disability recipients to determine whether they continue to meet the disability requirements of the law. In 1980, because of concerns about the effectiveness of the CDR process and growing disability rolls, the Congress enacted a law requiring that CDRs be conducted at least once every 3 years for all DI beneficiaries whose disabilities are not considered permanent and at intervals determined appropriate by SSA for DI beneficiaries whose impairments are considered permanent. SSA issued regulations in 1986 stating its policy of conducting CDRs for SSI disability beneficiaries with the same frequency as it conducts CDRs for DI beneficiaries. In 1994, the Congress established the first statutory requirement for SSI CDRs, requiring that CDRs be conducted for a relatively small proportion of SSI beneficiaries. Welfare reform legislation enacted in August 1996 focused on CDRs for SSI children. This legislation required that SSA (1) conduct CDRs at least once every 3 years for SSI children under age 18 if their impairments are not considered permanent and for infants during their first year of life if they are receiving SSI benefits due to low birth weight and (2) review the cases of all SSI children beginning on their 18th birthdays to determine whether they are eligible for disability benefits under adult disability criteria. The redeterminations for 18-year-olds are considered part of the CDR workload.
At the time beneficiaries enter the DI or SSI programs, DDSs determine when beneficiaries will be due for CDRs on the basis of their potential for medical improvement. Based on SSA regulations, DDSs classify individuals into one of three medical improvement categories, called “diary categories”: “medical improvement expected” (MIE), “medical improvement possible” (MIP), or “medical improvement not expected” (MINE). Based on the diary categories, DDSs select a “diary date” for each beneficiary, which is the date that the beneficiary is scheduled to have a CDR. The diary date is generally within 6 to 18 months if the beneficiary is classified as MIE; once every 3 years if classified as MIP; and once every 5 to 7 years if classified as MINE. Upon completion of a CDR, DDSs reassess the medical improvement potential of beneficiaries who remain eligible for benefits to determine the most appropriate medical improvement category and time frame for conducting the next CDR. Beneficiaries classified as MIE are not eligible to receive Ticket to Work services until either the completion of their first CDR, or until they have received benefits for 3 years. While SSA uses diary categories to determine the timing of CDRs, it has developed another method, called profiling, to determine the most cost- effective method of conducting a CDR. Profiling involves the application of statistical formulas that use data on beneficiary characteristics contained in SSA’s computerized records—such as age, impairment type, length of time on disability rolls, previous CDR activity, and reported earnings—to predict the likelihood of medical improvement and, therefore, of benefit cessation. For example, SSA found that the longer an individual is on the disability rolls, the less likely he or she is to have benefits terminated. In addition, once an individual undergoes a CDR, the chance that a new CDR will result in benefit termination is reduced substantially. 
Reported earnings, on the other hand, greatly increase the likelihood of termination. Through its profiling formulas, SSA assigns a “score” to beneficiaries indicating whether there is a high, medium, or low likelihood of medical improvement. In general, beneficiaries with a high score are referred for full medical reviews—an in-depth assessment of a beneficiary’s medical and vocational status—while beneficiaries with lower scores are, at least initially, sent a questionnaire, known as a “mailer.” The mailer consists of a short list of questions asking beneficiaries to report information on their medical conditions, treatments, and work activities. If beneficiaries’ responses to a mailer indicate possible improvement in medical condition or vocational status, SSA may refer these individuals for a full medical review. However, in most cases, SSA decides that a full medical review is not warranted and that benefits should be continued. In contrast to mailers, full medical reviews are labor intensive and expensive. These reviews generally involve the following steps: (1) SSA headquarters personnel determine that a CDR is due and notify the SSA processing center; (2) personnel at the processing center locate the beneficiary’s file and send it to the appropriate SSA field office; (3) field office personnel contact the beneficiary, conduct a lengthy interview, and send the file to the appropriate DDS; (4) the DDS requests medical records from the beneficiary’s physicians and other medical sources and, if these sources cannot provide sufficient evidence, schedules medical or psychological examinations with consulting physicians outside the DDS; and (5) a DDS team, consisting of a disability examiner and a physician or psychologist, determines whether the beneficiary continues to meet SSA disability criteria. As of fiscal year 1996, about 4.3 million CDRs were due or overdue.
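The score-based routing described above can be expressed as a short decision function. This is an illustrative sketch only, not SSA's actual profiling formulas: the 0.7 cutoff and the return labels are hypothetical, chosen to mirror the flow in the text (high scores go straight to a full medical review; lower scores get a mailer first, and a mailer response suggesting improvement triggers a full review).

```python
from typing import Optional

# Hypothetical cutoff for a "high" likelihood of medical improvement.
HIGH_LIKELIHOOD = 0.7

def route_cdr(profile_score: float,
              mailer_flags_improvement: Optional[bool] = None) -> str:
    """Decide how to conduct a CDR for one beneficiary (illustrative)."""
    if profile_score >= HIGH_LIKELIHOOD:
        return "full_medical_review"
    if mailer_flags_improvement is None:
        return "send_mailer"            # lower score: try the cheap mailer first
    if mailer_flags_improvement:
        return "full_medical_review"    # mailer suggests possible improvement
    return "continue_benefits"          # the most common outcome, per the text
```

As the text notes, the point of this two-stage design is cost: mailers are cheap, so full medical reviews are reserved for cases where the score or the mailer response indicates likely improvement.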
In response, SSA and the Congress focused on providing funding to conduct overdue CDRs and new CDRs as they became due. SSA developed a plan for a 7-year initiative to conduct about 8.2 million CDRs during fiscal years 1996 through 2002. In the Contract with America Advancement Act of 1996 (Pub. L. No. 104-121), the Congress authorized a total of about $4.1 billion to fund the 7-year CDR plan. In addition, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (Pub. L. No. 104-193) required SSA to conduct CDRs on several beneficiary groups, such as low birth weight babies, and authorized an additional $250 million for CDRs in fiscal years 1997 and 1998. The actual amount appropriated during the 7-year period, about $3.68 billion, was less than the amount authorized in 1996. SSA reported to the Congress in its fiscal year 2000 CDR report that in that year, the agency became current with the backlog of CDRs for all DI beneficiaries. SSA officials indicated to us that, although the agency is in the midst of preparing the final statistics for its fiscal year 2002 CDR report, it became current with the backlog of CDRs for all SSI beneficiaries by the end of fiscal year 2002. Since first implementing the profiling and mailer processes in the early 1990s, SSA has continued its efforts to improve the cost-effectiveness of the CDR process. Most notably, SSA has refined the statistical formulas used in profiling to identify which method—mailer or full medical review—should be used to conduct the CDR. According to SSA officials and studies of the profiling process, these improvements have led to some beneficiaries receiving a mailer who otherwise would have received a full medical review, thereby allowing SSA to reduce the overall cost of the CDR process.
Conversely, by improving SSA’s ability to identify beneficiaries who are likely to medically improve, these refinements have also helped the agency better ensure that it is conducting full medical reviews—and ceasing benefits—when appropriate. In addition to improvements in its profiling process, SSA has also implemented other CDR process improvements, such as introducing an automated review of mailers. In the midst of its first year following the cessation of CDR-targeted funds, SSA appears to be developing another CDR backlog; the agency estimates it will cost several billion dollars in total over the next 5 years to keep its workload current. By the end of fiscal year 2003, on the basis of SSA’s current projections, the agency will likely face a backlog of 200,000 CDRs, though the characteristics of the backlog may mitigate its negative effects. SSA attributes the mounting backlog to the management decisions it made at the beginning of the fiscal year during budget deliberations, as well as the need to process a larger than expected workload of initial disability applications. SSA has estimated that it will need a total of about $4 billion to process its projected CDR workload over the next 5 years. However, SSA’s updated estimate, expected to be available later this year, will likely show a higher cost as the disability rolls continue to expand. Aside from funding issues, DDSs reported that challenges associated with processing initial disability applications and maintaining enough disability examiners could jeopardize their ability to stay current with the CDR workload over the next few years. If another large CDR backlog is generated, SSA risks forgoing cost savings and compromising the integrity of its disability programs by paying benefits to disability beneficiaries who are no longer eligible to receive them.
At the end of March 2003—6 months after the expiration of separate authorized CDR funding—SSA was on a pace to generate a CDR backlog by the end of the current fiscal year. However, most of the backlogged claims will consist of SSI CDRs, which may make the backlog less problematic than it otherwise would have been because, among other reasons, SSI CDRs have lower long-term savings than DI CDRs. In its fiscal year 2003 budget justification, SSA indicated that it needed to process about 1.38 million CDRs during fiscal year 2003 to stay current with its CDR workload. Yet, SSA expects to process a total of 1.18 million CDRs, if not more, by the end of the fiscal year. By the end of March 2003—the midpoint of the fiscal year—SSA had processed about 539,000 CDRs. To reach the 1.18 million end-year revised total, SSA will need to process CDRs during the second half of the fiscal year at a pace similar to that achieved during the first 6 months of the fiscal year. Nevertheless, while it appears that SSA should be able to achieve this outcome, by the end of fiscal year 2003, it will have accumulated a backlog of 200,000 CDRs. SSA officials cited the delay in obtaining a fiscal year 2003 budget as the main factor hampering their ability to conduct all of the planned CDRs for the fiscal year. Because of the uncertainty surrounding the agency’s funding level, SSA reduced the number of CDRs it sent to DDS officials for processing and froze DDS hiring and overtime pay. SSA officials told us that they took these actions because they were concerned that the fiscal year 2003 appropriations would not support CDR activity at the fiscal year 2002 level. SSA officials recognize that a hiring freeze can have a longer-term impact because it disrupts the normal replacement of disability examiners lost through attrition. SSA officials explained that disability examiners generally do not increase overall productivity when first hired.
In fact, new disability examiners could initially decrease productivity because experienced examiners may devote some of their time to training these new examiners. SSA officials noted that it generally takes 1 to 2 years before disability examiners become proficient. SSA’s management strategy of cutting back on the number of CDRs it processed during the extended fiscal year 2003 budget process reflects the agency’s higher priority for processing initial applications for disability benefits. Specifically, while SSA cut back on the number of CDRs, no similar action was reported for DI and SSI initial eligibility decision making. SSA officials indicated that the application rate for disability benefits increased during the beginning months of fiscal year 2003, further affecting the agency’s ability to stay current with CDRs. SSA officials told us that although SSA sets a goal to process all CDRs and initial applications, initial eligibility decisions are given the highest priority. Officials said that, due to political pressure, getting disability benefits to people in a timely manner is emphasized over reviewing whether current beneficiaries remain eligible for benefits. DDSs, likewise, place a greater priority on processing initial applications. Three-fourths (75 percent) of directors said processing initial disability claims was a top priority relative to CDRs, whereas far fewer directors (23 percent) said that processing initial claims and CDRs were equal priorities. SSA has recently proposed an approach to avoid this competition between CDRs and initial claims. Specifically, in SSA’s fiscal year 2004 budget request, the Commissioner requested that almost $1.5 billion be earmarked for three activities that could provide a return on investment—CDRs, SSI nondisability redeterminations, and overpayment workloads.
While we did not review the sufficiency of the level of this request, the earmarking of funds for activities such as CDRs could help SSA keep current with these activities. For example, if the number of initial applications for disability benefits continues to increase over the next several years, setting aside the necessary funds for CDRs could be a prudent measure. SSA has indicated in its annual CDR reports, as well as in its performance and accountability report, that its ability to complete all CDRs as they become due in the future is dependent upon adequate funding. In 2000, SSA estimated that a total of about $4 billion was needed to process the CDR workload during the 5-year period between fiscal years 2004 and 2008 (see table 1). SSA based these “rough estimates” on cost and workload projections available at that time. SSA expects to release updated workload and cost projections in the summer of 2003. While the estimates made in 2000 are not inconsistent with recent years’ authorized CDR funding levels, they rely upon assumptions that may change in the years ahead. For instance, the updated numbers for the fiscal year 2004 to 2008 period will likely be higher than the past estimate for this time period because of the recent growth in the disability rolls. Despite the likely reemergence of a CDR backlog, the characteristics of the backlog may mitigate its negative consequences. During fiscal year 2003, SSA has focused on DI CDRs. SSA officials cite four reasons for this: (1) cessations of beneficiaries receiving DI benefits lead to higher savings than cessations of recipients receiving SSI benefits, (2) SSA desires to protect the DI trust fund, (3) legislation sets out a clearer mandate to complete CDRs on beneficiaries receiving DI benefits than for adult beneficiaries receiving SSI benefits, and (4) external auditors cite SSA for noncompliance with the law when SSA does not complete the required CDRs for DI beneficiaries.
As a result, most of the backlog that is expected to reemerge by the end of fiscal year 2003 will likely consist of SSI CDRs and, according to SSA officials, this makes the backlog less problematic than if it consisted of mostly DI cases. SSA maintains that not only do SSI adult CDRs result in lower long-term savings, but also the legislative mandate for conducting SSI CDRs is less prescriptive. Therefore, the negative effects of falling behind on SSI CDRs are less severe. Several of the issues that have contributed to the pending fiscal year 2003 CDR backlog will also appear, in the views of DDS directors, in the future. First, nearly all directors expect to process a higher number of initial disability claims than in the past. Most DDS directors have a strategy in place to deal with this rising initial claims workload, but still expect increased initial claims to negatively affect their ability to process their CDR workload. Second, most directors expect to experience difficulties in maintaining an adequate level of staffing, caused by many examiners leaving and difficulties finding replacements. Most DDSs that anticipate facing these staffing challenges reported that they have strategies in place to manage them. Nevertheless, nearly all believe that these staffing issues will negatively impact their ability to stay current with their expected CDR workloads. Tables 2 and 3 provide more specific results. To the extent that funding, staffing, and other issues limit SSA’s ability to process its CDR workload, the full realization of CDR cost savings could be in jeopardy. SSA maintains that the return on investment from CDR activities is high. In fact, SSA’s most recent annual CDR report to the Congress summarizes its average CDR cost-effectiveness during fiscal years 1996 to 2000 at about $11 returned for every $1 spent on CDRs.
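The cost-effectiveness figure cited above is a simple ratio of long-term program savings to CDR spending. The sketch below illustrates the arithmetic only; the dollar amounts are hypothetical inputs chosen to reproduce the reported ~11:1 return, not figures from SSA's report.

```python
def cdr_return_ratio(long_term_savings: float, cdr_cost: float) -> float:
    """Dollars of long-term savings per dollar spent conducting CDRs."""
    return long_term_savings / cdr_cost

# Hypothetical example: $5.5 billion in long-term savings from
# $500 million of CDR spending yields the reported ~11:1 ratio.
ratio = cdr_return_ratio(5_500_000_000, 500_000_000)  # -> 11.0
```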
SSA has noted, however, that such rates of return are unlikely to be maintained because as SSA works down the backlog and beneficiaries come up for their second and third CDRs, the agency does not expect as many cessations and, therefore, the cost-benefit ratio could decline. Since the Congress’ provision of dedicated CDR funding starting in fiscal year 1996, SSA has reported completing millions of CDRs that resulted in substantial long-term savings. Table 4 shows the number of CDRs processed annually between fiscal year 1996 and 2001, which ranged from about 500,000 to over 1.8 million. SSA has reported that these annual CDRs will lead to long-term savings ranging from about $2 billion to $5.2 billion. In addition to a favorable return on investment, SSA’s CDR activities help protect DI and SSI program integrity. Keeping current with the CDR workload can help build and retain public confidence that only qualified individuals are receiving disability benefits. In addition, it helps protect the programs’ fiscal integrity and allows SSA to meet its financial stewardship responsibilities. To the extent the agency falls behind in conducting CDRs, a CDR backlog undermines these positive outcomes. While SSA has taken a number of actions over the past decade to significantly improve the cost-effectiveness of the CDR process, opportunities remain for SSA to better use program information in CDR decision making. While DDS personnel study available information on beneficiaries to decide when they should undergo a CDR, they do not conduct a systematic analysis of this information. As a result, CDRs may not be conducted at the optimal time. Also, SSA’s process for determining what method to use for a CDR—mailer or full medical review—is not always based on the best information available. 
In addition, SSA has not fully studied and pursued the use of medical treatment data on beneficiaries available from the Medicare and Medicaid programs despite the potential of these data to improve SSA’s selection of the most appropriate CDR method. Finally, SSA continues to be hampered in its CDR decisions by missing or incomplete information on beneficiaries’ case history, which may prevent SSA from ceasing benefits for some individuals who no longer meet eligibility standards. While DDS personnel review available information on beneficiaries to establish a diary date indicating when beneficiaries should undergo a CDR, they do not conduct a systematic analysis of this information. Diary decisions are inherently complex because DDS personnel must assess a beneficiary’s likelihood of medical improvement and how such medical improvement will affect that person’s ability to work. Based on these judgments, beneficiaries are placed in a diary category indicating that medical improvement is “expected,” “possible,” or “not expected.” DDS personnel then assign a diary date that corresponds with the diary category; the more likely a beneficiary is to medically improve, the earlier the diary date. Although SSA has established guidance for DDS personnel on diary date decisions, SSA officials told us that, ultimately, such decisions are difficult to make and are based on the judgment of the DDS staff. An SSA-contracted study of the diary process found that this process is often subjective and that the setting of diary categories and dates is “almost an afterthought” once the case file is developed and a disability determination has been made. SSA’s study identified shortcomings in the diary date process. For example, most beneficiaries assigned to the diary category indicating they are expected to medically improve are not found to have improved when a CDR is conducted.
Our analysis of SSA data indicates that between 1998 and 2002, only about 5 percent of beneficiaries in the “medical improvement expected” (MIE) category were found to have medically improved to the point of being able to work again. SSA's diary process study indicated that diary predictions of medical improvement could be substantially improved through the use of statistical modeling techniques similar to those used in the CDR profiling process that SSA uses to determine whether a mailer or a full medical review is needed. The study noted that this systematic, quantitative approach to assigning diary categories and dates would likely enhance disability program efficiency by reducing the number of CDRs that do not result in benefit cessation. Another benefit derived from a more systematic approach to diary categorization, according to SSA's study, is improved integrity of the diary process. Such integrity improvements would result from more timely CDRs and from actual medical improvement rates that more closely correlate with the diary categories that SSA assigns to beneficiaries. For example, SSA's study indicates that the actual medical improvement rate for beneficiaries assigned to the MIE diary category would increase to about 29 percent under this improved process.

SSA officials told us that, in response to the diary study recommendations, the agency has begun to revise its diary process to introduce a more systematic approach to selecting a CDR date. In particular, SSA is developing a process that will use beneficiary data collected at the time of benefit application, such as impairment type and age, in a statistical formula to help determine when a CDR should be conducted. While this change is likely to result in some improvements in the timing of CDRs, the fundamental diary categorization process used by DDSs will remain the same.
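A statistical formula of the kind SSA is developing can be illustrated with a minimal sketch. The impairment weights, the logistic form, the category thresholds, and the category labels below are hypothetical, chosen only to show how application data such as impairment type and age could be scored and mapped to diary categories; the report does not describe SSA's actual model.

```python
import math

# Hypothetical impairment weights -- illustrative only, not SSA's model.
IMPAIRMENT_WEIGHTS = {"fracture": 1.2, "back_disorder": 0.1, "mental": -0.4}

def improvement_probability(impairment: str, age: int) -> float:
    """Score the likelihood of medical improvement with a simple
    logistic formula over application data (illustrative)."""
    score = IMPAIRMENT_WEIGHTS.get(impairment, 0.0) - 0.03 * (age - 40)
    return 1.0 / (1.0 + math.exp(-score))

def diary_category(p: float) -> str:
    """Map an improvement probability to a diary category; higher
    probabilities correspond to earlier diary (CDR) dates."""
    if p >= 0.5:
        return "MIE"   # medical improvement expected
    if p >= 0.2:
        return "MIP"   # medical improvement possible (label assumed)
    return "MINE"      # medical improvement not expected
```

The point of such a model, per SSA's study, is that category assignment would follow directly from a fitted score rather than from case-by-case DDS judgment, so that actual improvement rates track the assigned categories more closely.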
Despite the study's findings and recommendations, SSA officials told us that they will not replace SSA's current process for assigning diary categories with a statistical process because of what they believe would be significant costs involved in changing this system across DDSs. However, SSA's study acknowledged the potential cost of implementing a new process in DDSs, and instead recommended that a revised diary process be centrally administered in order to avoid such high costs. The officials also said that such fundamental changes in the diary process would require a change in regulations.

SSA's process for determining what method to use for a CDR is not always based on the best information available. In the 1990s, SSA introduced a system that develops a “profile score” for each beneficiary. The profile score indicates the beneficiary's likelihood for medical improvement based on a statistical analysis of beneficiary data. The purpose of the profile score is to allow SSA to determine whether it is more cost-effective to send a mailer or to conduct a full medical review. SSA's own contracted studies indicate that profiling results provide the best available indication of whether a beneficiary is likely to medically improve. Nevertheless, for some beneficiaries, SSA continues to use the diary category that was judgmentally assigned by DDS personnel as the basis for its decision about whether to send a mailer or conduct a full medical review. SSA requires a full medical review for all beneficiaries whose diary category indicates that medical improvement is expected (MIE) and who have not yet undergone a CDR. This is the case even when the profile score indicates that improvement is unlikely. In fiscal year 2002, about 14 percent of beneficiaries in the MIE diary category were assigned to the “low” profile category, which indicates that medical improvement is not likely.
SSA officials acknowledged that their policy requiring full medical reviews for all beneficiaries in this diary category departs from their usual practice of using mailers for beneficiaries in the low profile category, but they believe that this policy is reasonable given that these beneficiaries are more likely to medically improve than those assigned to other diary categories. However, SSA's data from 1998 to 2002 show that most beneficiaries in this category—about 94 percent—do not medically improve to the point of being able to work.

For other CDR cases, SSA may require that a mailer be sent even when the profile score indicates that conducting a full medical review would be most cost-effective. Specifically, SSA's policy is to send a mailer to all beneficiaries who were assigned a diary category that indicates medical improvement is not expected (MINE), even if the profile score indicates a relatively high likelihood of medical improvement. Whether or not these beneficiaries subsequently receive a full medical review will be based on the results of their mailer. SSA officials said that MINE beneficiaries with a high profile score are more likely to receive a full medical review based on their mailer responses because SSA conducts a more stringent review of their mailer responses. However, it is not clear that sending mailers to beneficiaries in the high profile category is the most cost-effective approach. SSA studies of the mailer process have indicated that, while this process is effective, it does not provide the same assurance as full medical reviews that medical improvement will be identified. As a result, the use of mailers for beneficiaries whose profile scores indicate a high likelihood of improvement could result in SSA identifying fewer benefit cessations.
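The selection rules described above can be summarized in a short decision sketch. The string encodings for the diary and profile categories are assumed for illustration; the policy logic follows the report's description, not SSA's actual system.

```python
def select_cdr_method(diary: str, profile: str, had_prior_cdr: bool) -> str:
    """Choose a CDR method under SSA's policy as described in the report
    (illustrative encoding; not SSA's actual system)."""
    # Override 1: beneficiaries expected to improve (MIE) who have not yet
    # had a CDR always receive a full medical review, even when their
    # "low" profile score indicates improvement is unlikely.
    if diary == "MIE" and not had_prior_cdr:
        return "full medical review"
    # Override 2: beneficiaries not expected to improve (MINE) always
    # receive a mailer first, even with a "high" profile score.
    if diary == "MINE":
        return "mailer"
    # Otherwise the profile score governs: a high likelihood of medical
    # improvement warrants a full medical review; a low likelihood makes
    # the less expensive mailer more cost-effective.
    return "full medical review" if profile == "high" else "mailer"
```

As the report notes, both overrides can depart from the most cost-effective choice: the first forces full medical reviews on MIE beneficiaries in the low profile category (about 14 percent of MIE cases in fiscal year 2002), and the second sends mailers to MINE beneficiaries whose profile scores suggest a full medical review would identify more cessations.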
SSA has not fully studied and pursued the use of medical treatment data on beneficiaries available from the Medicare and Medicaid programs despite the potential of these data to improve SSA's decisions regarding whether to use a mailer or full medical review to complete a CDR. In 2000, an SSA-contracted study found that the use of Medicare data from the Centers for Medicare and Medicaid Services (CMS)—such as data on hospital admissions and medical treatments—resulted in a significant improvement in SSA's ability to assess potential medical improvement through CDR profiling. Based on these results, SSA, in fiscal year 2003, implemented a process that uses CMS Medicare data in CDR profiling to determine if DI beneficiaries who are initially identified as candidates to receive a full medical review should instead receive mailers. SSA expects that this will result in administrative savings due to the reduced number of full medical reviews the agency must conduct. SSA has also initiated a study to assess whether CMS Medicaid data can be used in the same way to decide if SSI beneficiaries, scheduled to receive full medical reviews, could instead be sent mailers.

But SSA's efforts to obtain and use CMS Medicare or Medicaid data are incomplete because the data will only be used to reclassify full medical reviews to mailers but not to reclassify mailers to full medical reviews. SSA officials told us that they have no plans to pursue this additional use of the data because they believe their current profiling system is sufficient for identifying beneficiaries who have a low likelihood of medical improvement. While they agreed that the CMS data could potentially be useful for reclassifying mailers to full medical reviews, they noted that they would need to first study this particular use of the data and would need to develop another interagency agreement with CMS to authorize and obtain data for this purpose.
Also, they said that any action to reclassify mailers to full medical reviews would require SSA to publish a Federal Register notice describing this action. SSA could potentially achieve substantial program savings from conducting additional full medical reviews in cases where CMS data indicate that beneficiaries originally identified as mailer candidates have a relatively high likelihood of medical improvement. Using CMS Medicare data for this purpose would be consistent with the results of an SSA study that recommended that these data be used whenever they improve the agency's ability to accurately predict medical improvement. For example, the study noted that the CMS data would be useful for enhancing SSA's profiling of beneficiaries with mental impairments, including those with a low likelihood of medical improvement for whom SSA would usually send a mailer. To the extent that CMS data improve SSA's ability to identify beneficiaries for full medical review, the program savings from reduced lifetime benefit payments to those beneficiaries whose benefits are ceased could easily exceed any increased administrative costs resulting from additional full medical reviews.

SSA continues to be hampered in its CDR decisions by missing or incomplete information on beneficiaries' case history, which may prevent SSA from ceasing benefits for some individuals who no longer qualify for benefits. To cease benefits based on a CDR, SSA must determine if the beneficiary has improved by comparing information about the beneficiary's current condition to information from the agency's previous decision regarding the beneficiary's medical condition. This previous decision and the evidence supporting it are recorded by SSA and maintained in case folders that are usually stored in SSA records storage facilities. However, in conducting CDRs, DDSs sometimes have difficulty retrieving the case folders or the key medical evidence that is maintained in these folders.
Without the information contained in case folders, DDSs cannot establish a comparison and, therefore, cannot determine if medical improvement has occurred. As a result, SSA is legally required to keep the beneficiary on the disability rolls even though the beneficiary may have been judged to no longer qualify for benefits had the DDS been able to establish a comparison. SSA's inability to cease benefits in cases where folders are missing or incomplete could result in a substantial cost to the federal government arising from continued payments of benefits—cash and medical—to people who no longer meet eligibility standards.

Our discussions with SSA officials, survey of DDSs, and review of SSA studies indicate that missing or incomplete folders present an obstacle to effective processing of CDRs. However, evidence on the extent of this problem is mixed. In responding to our survey on CDRs, about 72 percent of DDSs informed us that missing or incomplete information from case folders negatively affected the quality or timing of CDR decisions to a moderate or great extent. An August 2002 study of missing or incomplete folders conducted by SSA's Office of the Inspector General reported that DDSs, as well as other SSA components such as field offices, complained that a large proportion of cases were missing information. This study found that case folder retrieval is a significant problem for SSA. Among the problems identified were untimely receipt of case folders, nonreceipt of requested folders, and folders provided without necessary medical evidence. The report questioned SSA's oversight of folder inventory and retrieval processes and recommended that SSA take various actions, such as independent quality assurance reviews, to improve management of case folders. A study contracted by SSA also identified problems with disability case folder management, such as misrouted or missing folders.
The study noted that “inefficient folder management increases administrative and program costs and risks data integrity” and recommended that SSA “analyze the reasons for missing folders and provide recommendations for process and systems improvements.”

SSA headquarters officials we spoke with said that SSA has examined the incidence of missing or incomplete case folders and found that the problem is not as significant as claimed by DDSs. For example, in fiscal year 2000, SSA investigated allegations of substantial numbers of missing case folders in two DDSs. SSA officials told us that they were able to locate many of the folders that had been reported as missing. The officials attribute the discrepancy between their findings and the allegations of DDSs, in part, to staff shortages and workload pressures at field offices, which result in a failure of these offices to take further steps to look for folders. However, our survey of DDSs indicates that regardless of SSA's ability to locate many case folders upon further investigation, DDSs are still having difficulty obtaining the information they need to make CDR decisions.

In a 2002 memorandum to SSA's Inspector General, the SSA Commissioner acknowledged that missing or incomplete case folders are a problem in the CDR process, but noted that the problem had been overstated. The memorandum cited data indicating a lost folder rate of about 0.5 percent for DI CDRs and about 3 percent for SSI CDRs. The Commissioner also said that SSA had taken a number of actions in recent years to reduce the incidence of lost folders, such as issuance of additional guidance and training on this issue.
In addition, the Commissioner noted that the agency was committed to building a system of electronic folders that will “virtually eliminate the incidences of lost folders.” While electronic folders may be a key initiative in resolving SSA's problems with missing or incomplete case folders, SSA does not plan to fully implement this system until mid-2005. In addition, these electronic folders will be established only for new disability cases; cases established prior to implementation of electronic folders will remain in a paper format. Therefore, problems in handling these older case folders will likely continue.

SSA's rationale for postponing issuance of a ticket to beneficiaries expected to medically improve—those who are assigned an MIE diary category—is not well supported by program experience. In issuing regulations implementing the Ticket to Work Act, SSA decided to postpone issuance of tickets to MIE beneficiaries who have not yet had a CDR based on the premise that these beneficiaries could be expected to regain their capacity to work without SSA assistance. However, our analysis of SSA data indicates that the vast majority of MIE beneficiaries in the DI and SSI programs—about 94 percent—are not found to have medically improved upon completion of a CDR. As a result, some beneficiaries who might otherwise benefit from potentially valuable return-to-work assistance must wait up to 3 years to access services through the ticket program.

Some disability advocacy groups and SSA's own Ticket to Work and Work Incentives Advisory Panel have questioned SSA's policy of delaying the issuance of tickets to MIE beneficiaries. In particular, they have commented that delaying tickets to all MIE beneficiaries when only a small proportion of these beneficiaries return to work underscores the inherent weakness of relying upon the MIE category as a basis for granting access to ticket services.
Furthermore, the ticket panel cited research indicating that the sooner a person with recent work history receives employment services, the more likely the person will be to return to work. In our prior work examining DI and SSI return-to-work policies, we also noted that delays in the provision of vocational rehabilitation services can diminish the effectiveness of such return-to-work efforts. Delaying services to some disability beneficiaries, therefore, undermines SSA's recent efforts to increase its emphasis on helping these beneficiaries return to work.

In publishing its final regulations implementing the ticket program, SSA wrote that many commenters on the draft regulations had indicated that the agency should provide tickets to all beneficiaries, regardless of their diary category. The commenters also referred to the MIE diary category as an “administrative convenience” that is “not a sufficiently precise tool to deny beneficiaries immediate access to a ticket.” In responding to these comments, SSA wrote that use of the MIE category to identify which beneficiaries should receive tickets “is the most administratively feasible approach currently available to us.” SSA acknowledged that it might be possible to improve the system for identifying such beneficiaries and wrote that it planned to conduct an evaluation to identify possible improvements.

SSA officials told us that they are examining the current policy of issuing tickets to MIE beneficiaries to identify possible alternatives but they are not sure when this assessment will be completed. However, they noted that their policy of limiting ticket issuance reflects congressional interests in striking an appropriate balance between program stewardship and encouraging return to work. Moreover, they explained that reversing the current policy would be costly.
SSA’s actuaries have estimated that issuing tickets to all MIE beneficiaries would cost an additional $822 million over 10 years because the ticket law prohibits SSA from conducting CDRs on beneficiaries who are using a ticket. Therefore, SSA would continue to pay DI and SSI benefits to some beneficiaries who might have otherwise had their benefits terminated. The drawbacks of SSA’s current policy of postponing issuance of tickets to MIE beneficiaries and the potential costs associated with an alternative policy that would allow immediate issuance of tickets to these beneficiaries highlights the need for SSA, as part of its policy reexamination, to consider other policy alternatives that might better balance the agency’s program stewardship and return-to-work objectives. While we did not conduct an in-depth assessment of potential alternatives to SSA’s current policy, our review of the CDR program and ticket provisions indicate that other options may exist that would achieve a better balance among SSA’s program objectives. For example, SSA could develop a better means of identifying beneficiaries who are expected to medically improve. Earlier in this report, we noted that an SSA-contracted study of the diary process recommended implementation of an improved system that, among other things, would better identify MIE beneficiaries through statistical modeling of diary decisions. One effect of such improved identification, according to the study, would be to substantially reduce the proportion of beneficiaries with an MIE diary category. For instance, the study found that although SSA, over the past decade, has assigned the MIE diary category to about 9 percent of DI beneficiaries, a statistically-based diary process would result in about 3 percent of DI beneficiaries being assigned to the MIE category. 
This would potentially minimize the number of beneficiaries initially denied tickets and may also provide more assurance, within and outside SSA, that such beneficiaries can truly be expected to improve. SSA might also consider an option that provides for the issuance of tickets to all MIE beneficiaries while allowing CDRs to be conducted as scheduled for these beneficiaries. This policy would require a legislative change because, as we noted earlier, the Ticket to Work Act currently prohibits SSA from conducting a CDR while a person is using a ticket. While the ticket program's prohibition on CDRs for ticket users was intended to remove a potential disincentive for beneficiaries to return to work, MIE beneficiaries currently get neither a ticket nor protection from a CDR. A policy allowing CDRs to be conducted on these beneficiaries while they use a ticket would at least give these beneficiaries immediate access to return-to-work services offered under the ticket program. In addition, SSA would still be able to achieve the cost savings that are derived from CDRs for beneficiaries whom it considers most likely to medically improve.

Failure to process CDRs cost-effectively as they become due could negatively affect DI and SSI program integrity. SSA and DDSs are to be commended for bringing the CDR workload current as of the end of 2002. SSA is also to be commended for the improvements it has made in the CDR process. However, a confluence of events, such as the expiration of targeted CDR funding and an increase in initial applications, is increasing the chances of a CDR backlog recurring, which could result in SSA paying out billions of dollars in the long term to beneficiaries who no longer qualify for benefits. In its fiscal year 2004 budget request, SSA has asked the Congress for targeted funding for several program activities, including CDRs, that provide a return on investment.
If approved, the targeted funding could increase SSA's chances of staying current with its CDR workload because this workload would not have to compete internally for funding with the initial determination workload.

While SSA has taken a number of steps to improve the CDR process, it has not taken advantage of other opportunities that could further improve the cost-effectiveness of this process and its ability to stay current. In particular, although a more systematic and quantitative process for assigning diary categories and dates would likely improve the timing of CDRs, SSA does not intend to make comprehensive revisions to the diary process based on this more rigorous approach. In addition, despite SSA's reliance on profiling formulas to improve the agency's ability to predict medical improvement and benefit cessation, SSA is ignoring or not giving full consideration to information from these formulas in its decisions to send mailers or conduct full medical reviews for some beneficiaries. Also, although SSA acknowledges that medical treatment data from Medicare and, possibly, Medicaid improve the agency's ability to determine when a mailer should be used, it does not see a need to consider the use of these data to help determine when a full medical review might be preferable. Furthermore, despite long-standing concerns, SSA has not fully addressed the problem of missing or incomplete case folders, which limits SSA's ability to achieve cost savings through the CDR process. Finally, SSA's initial assessments of which beneficiaries are most likely to improve are not very accurate and, therefore, may not be the most appropriate criteria to use for delaying beneficiary access to a ticket for return-to-work services. The ticket program is relatively new, so little program information is available for SSA to draw upon in reexamining its current policy on ticket access for beneficiaries most likely to improve.
SSA has the challenge of developing a policy that will make return-to-work assistance available to beneficiaries at the appropriate time while providing adequate mechanisms for ensuring program integrity.

To further improve the cost-effectiveness of the CDR process, we recommend that the Commissioner of SSA take the following actions:

Pursue more comprehensive enhancements of the CDR diary process—beyond those already being considered—to ensure that the full benefits of a more systematic, quantitative approach to diary setting are attained. Among such key enhancements would be the use of a statistical approach to determine diary categories. Given the significant implications of such changes for the DI and SSI programs, SSA could consider pilot testing the revised diary process before fully implementing it.

Given the cost-effectiveness of conducting mailers in cases where there is a low likelihood for benefit cessation, revise SSA's policy to allow mailers to be sent whenever appropriate—as indicated by the profiling scores—to beneficiaries with a diary category indicating that they are expected to medically improve.

For beneficiaries assigned to a diary category indicating that they are not expected to medically improve, SSA should conduct a thorough analysis of its current policy, which allows mailers to be used for all of these beneficiaries regardless of their profile scores. SSA's analysis should evaluate the overall cost-effectiveness of this policy, taking into account both the potential reduction in administrative costs from conducting fewer full medical reviews and the potential increase in benefit payments from reduced cessations.
If this analysis indicates that the current policy results in higher overall costs for SSA's disability programs, SSA should revise the policy to make it consistent with the agency's general profiling approach—which prescribes the use of full medical reviews in cases where profiling indicates that a beneficiary has a relatively high likelihood of medical improvement.

Study the use of Medicare and Medicaid data for the purpose of deciding whether to use a full medical review in conducting a CDR for beneficiaries who would otherwise receive a mailer. If found to be cost-effective, SSA should incorporate Medicare and Medicaid data into its CDR process for this purpose.

In commenting on a draft of this report, SSA agreed with our recommendations. SSA noted that our review represents a comprehensive and accurate assessment of SSA's accomplishments in improving the CDR process as well as opportunities to improve the process. While agreeing with each of our recommendations, SSA supplied additional information describing its current or planned actions and the basis for such actions. With regard to our recommendation that SSA pursue more comprehensive enhancements of its diary process, SSA said that it is currently studying recommendations made by its contractor regarding the establishment of a statistically based diary process and that SSA staff will be meeting in the near future to explore implementation options. However, SSA noted it has not yet made a decision regarding implementation.

Regarding our recommendation that SSA revise its policies for determining what method to use for a CDR—mailer or full medical review—SSA said that while it generally agreed with our recommendation, it believes we were overly harsh in stating that it is not making the best use of available information.
SSA noted that its policy for allowing mailers to be used for all MINE beneficiaries supplements information produced through profiling, thereby improving the process for selecting a CDR method. SSA said that this policy is based on evaluation and analysis of several thousand similar cases and noted that it will verify the cost-effectiveness of this policy through its ongoing integrity reviews. We continue to believe that any departure from SSA's analytically based process for using profiling scores to select a CDR method should be based on sound analysis indicating that an alternative process would result in improved cost-effectiveness. We, therefore, are encouraged by SSA's plans to evaluate the cost-effectiveness of its current policy. However, it is not clear that SSA's integrity reviews will be adequate for assessing the cost-effectiveness of the agency's mailer policy for MINE beneficiaries due to the potential limitations of these reviews. For example, an SSA-contracted study identified several problems with the integrity reviews that SSA conducts for beneficiaries in the low profile category, such as the drawing of integrity samples that are not consistently representative of the mailer population. To the extent that such problems remain unresolved, SSA may need to develop an alternative means of evaluating its mailer policy for MINE beneficiaries.

With regard to our recommendation on the use of Medicare and Medicaid data for deciding whether to use a full medical review for beneficiaries who would have otherwise received a mailer, SSA said that it intends to contract for such a study in fiscal year 2004 if funding is available. SSA noted that if the concept is found to be feasible, it will develop a pilot for this approach. SSA also provided additional comments intended to update or clarify some information we provide in this report.
In particular, SSA noted that, due to its efforts to keep as current as possible, it believes its CDR backlog by the end of fiscal year 2003 will be significantly less than the potential backlog of 200,000 CDRs that we cited. While there is always a certain degree of imprecision associated with any projection, our backlog figure is based on the best information that was available during our review. We developed our potential backlog figure based on extensive discussions with SSA officials and reviews of SSA's CDR workload and budget projections. SSA did not provide us with any revised official estimates or analyses that would have led us to revise the CDR backlog figure we report.

In addition, SSA said that our report implies that it does not take seriously the shortfall in completing CDRs for the SSI program. SSA is apparently referring to our discussion of the CDR backlog where we note that most of the backlog that is expected to develop by the end of fiscal year 2003 will consist of SSI CDRs, which may make the backlog less problematic than it otherwise would have been because, among other reasons, SSI CDR cessations have lower long-term savings than DI CDR cessations. We did not intend to imply that SSA does not take the SSI backlog seriously. Rather, we included this information to more accurately characterize the nature of the potential backlog because that could provide important insights as to how to deal with it.

Finally, SSA said that although our survey of DDS directors indicates that attrition among disability examiners is an issue for DDSs, SSA and DDSs are accustomed to dealing with such issues and DDSs are still able to complete their workloads. Although we are aware that DDSs regularly confront multiple challenges to completing their disability program workloads, we cannot ignore the clear implications of DDS directors' answers to our survey questions.
Given that a clear majority of DDS directors indicated that disability examiner attrition is somewhat or very likely to jeopardize their ability to complete their CDR workload, we believe that it is important for us to identify this issue as a potentially significant factor in the possible development of a CDR backlog in the years ahead. SSA's comments appear in appendix II. SSA also provided additional technical comments that we have incorporated in the report, as appropriate.

Copies of this report are being sent to the Commissioner of SSA, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III.

To evaluate the impact of the expiration of separate funding for continuing disability review (CDR) processing and the level of funding needed to remain current with the CDR caseload, we interviewed Social Security Administration (SSA) officials from the Office of Budget, the Office of the Chief Actuary, and the Office of Disability and Income Security Programs at SSA headquarters in Baltimore. We also reviewed SSA documents, including the agency's budget request and estimates of the cost and savings from conducting CDRs. In addition, we surveyed Disability Determination Services (DDS) directors to assess the potential effect of the expiration of special CDR funding on DDS operations. To develop the survey, we identified information that would help us address the research questions. We generated specific survey items by reviewing SSA CDR reports submitted annually to the Congress and drawing upon interviews we conducted early in the assignment with SSA officials and the National Association of Disability Examiners.
We validated our survey instrument by obtaining feedback from SSA officials and pretesting it with several current DDS directors. In consultation with SSA, we excluded 2 of the 54 DDSs from our survey as well as the federal DDS. The two DDSs excluded were Guam and South Carolina's DDS serving persons who are blind, both relatively small DDSs that are run by one person. We excluded the federal DDS because responses from this site might skew results as the site (1) is a federal entity and as such is "different" from other DDSs, (2) is used to process the overflow of CDRs, and (3) serves as SSA's test unit. The remaining 52 DDSs (essentially 1 for each of the 50 states, plus the District of Columbia and Puerto Rico) made up the study universe. All 52 of these DDSs responded to our survey.

To assess the opportunities for SSA to improve CDR cost-effectiveness and to examine SSA's rationale for delaying return-to-work services to some beneficiaries under the ticket to work program, we interviewed SSA officials from the Office of Disability and Income Security Programs and the Office of the Chief Actuary at SSA headquarters in Baltimore. We also reviewed legislation, regulations, and SSA policy guidance related to the CDR and the ticket programs. In addition, we examined various studies and reports on the CDR and ticket to work programs, including reports from SSA's Office of the Inspector General and a wide range of contractor-produced reports analyzing the cost-effectiveness of the CDR process. Finally, we analyzed data from SSA on the number of adult DI and SSI beneficiaries, aged 19-64, assigned to various CDR diary and profiling categories and the CDR outcomes—cessation or continuance—for these beneficiaries. These data were derived from SSA administrative data sets used by the agency to select cases for review and to track the results of CDRs.
We did not independently verify these data, but based on comparison to SSA’s previously published data, and discussion of minor discrepancies with SSA officials, we determined that they were sufficiently reliable for our purposes. We performed our work in accordance with generally accepted government auditing standards between August 2002 and May 2003. The following individuals also made important contributions to this report: Mark Trapani, Melinda L. Cordero, and Corinna A. Nicolaou. | The Social Security Administration (SSA) has had difficulty in conducting timely reviews of beneficiaries' cases to ensure they are still eligible for disability benefits. SSA has been taking steps to improve the cost-effectiveness of its review process. SSA has linked the review process to eligibility for a new benefit that provides return-to-work services. This report looks at SSA's ability to stay current with future reviews, identifies potential improvements to the review process, and assesses the review process--return-to-work link. SSA will likely face a backlog of about 200,000 continuing disability review (CDR) cases by the end of fiscal year 2003. SSA officials attribute the pending backlog to its decision to reduce the number of cases reviewed as a result of the delay in obtaining fiscal year 2003 funding. In addition, the pending backlog resulted from putting more emphasis on initial applications over CDRs. To ensure CDRs receive adequate attention, SSA has requested some fiscal year 2004 funds be "earmarked" for these reviews. Given SSA's ability to eliminate its previous CDR backlog using targeted funds, this maneuver could help SSA. Over the next 5 years, SSA has estimated that 8.5 million CDRs, costing about $4 billion, are needed to stay current. If SSA generates another backlog, cost savings and program integrity may be compromised by paying benefits to disability beneficiaries who are no longer eligible to receive them. 
SSA is not making the best use of available information when conducting its CDRs, leaving opportunities for improvement. First, SSA's decisions on the timing of CDRs are not based on systematic analysis of available information. Second, SSA's process for determining which CDR method to use is not always based on the best available information. For example, SSA requires an in-depth review for all beneficiaries who, upon entering the program, are expected to medically improve even if current information on certain of those beneficiaries indicates that improvement is unlikely and that the review would be better handled through a shorter, less expensive method. Third, SSA has not fully pursued medical treatment data available from the Medicare and Medicaid programs despite their potential to improve SSA's decisions regarding which review method to use. Fourth, SSA's CDRs continue to be hampered by missing or incomplete information on beneficiaries' case history. SSA delays the provision of new return-to-work benefits to beneficiaries expected to medically improve based on the assumption that such beneficiaries are least likely to need them. However, according to SSA data, about 94 percent of such beneficiaries are not found to have medically improved upon completion of a disability review. As a result, some individuals who might benefit from return-to-work services are initially denied access to them. SSA is reviewing this policy and, while doing so, will need to consider how to best balance its financial stewardship and return-to-work goals. |
Congress appropriates operations and maintenance funds for DOD, in part, for the purchase of spare and repair parts. DOD distributes operations and maintenance funding to major commands and military units. The latter use operations and maintenance funding to buy spare parts from the Department’s central supply system. By the end of fiscal year 2001, DOD reported in its supply system inventory report that it had an inventory of spare parts valued at about $63.3 billion. Prior GAO reports have identified major risks associated with DOD’s ability to manage spare parts inventories and prompted the need for reporting on spare parts spending and the impact of spare parts shortages on military weapon systems’ readiness. In recent years, Congress has provided increased funding for DOD’s spare parts budget to enable military units to purchase spare parts from the supply system as needed. In addition, beginning with fiscal year 1999, Congress provided supplemental funding totaling $1.5 billion, in part, to address spare parts shortages that were adversely affecting readiness. However, in making supplemental appropriations for fiscal year 2001, the Senate Committee on Appropriations voiced concerns about the Department’s inability to articulate funding levels for spare parts needed to support the training and deployment requirements of the armed services or to provide any meaningful history of funds spent for spare parts. In June 2001, we reported that DOD lacked the detailed information needed to document how much the military units were spending to purchase new and repaired spare parts from the central supply system. To increase accountability and visibility over spare parts funding, we recommended that DOD provide Congress with detailed reports on its past and planned spending for spare parts.
In making the recommendation, we anticipated that such information, when developed through reliable and consistent data collection methods, would help Congress oversee DOD’s progress in addressing spare parts shortages. In response to our recommendation, in June 2001 and February 2002, DOD provided Congress with Exhibit OP-31 reports as an integral part of the fiscal year 2002 and 2003 budget requests for operations and maintenance funding. These reports, which the services had previously submitted to DOD for internal use only, were to summarize the amounts each military service and reserve component planned to spend on spare parts in the future and the actual amount spent the previous fiscal year. Figure 1 shows the Exhibit OP-31 template as it appears in DOD’s Financial Management Regulation. The regulation requires the military services to report the quantity and dollar values of actual and programmed spending for spare parts in total and by specific commodity groups, such as ships, aircraft engines, and combat vehicles, and to explain any changes from year to year as well as between actual and programmed amounts. (See apps. I through VI for each service’s June 2001 and February 2002 exhibits.) DOD’s June 2001 and February 2002 reports did not provide Congress with an actual and complete picture of spare parts spending. The actual amounts reported as spent by the Army in total on spare parts and by all services for most of the commodities were estimates. The services’ budget offices had computed these estimates using various methods because they did not have a reliable system to account for and track such information. In addition, the services did not include the supplemental operations and maintenance funding they received in their totals, report the quantities of parts purchased, or explain deviations between planned and actual spending as required on the template.
These deficiencies limit the potential value of DOD’s reports to Congress and other decision makers. Some of DOD’s purported actual spending data were estimates. All of the Army’s spending amounts and most of the other services’ commodity amounts for prior years were estimates derived from various service methods, not actual obligations to purchase spare parts. The services’ headquarters budget offices provided these estimates because they did not have a process for tracking and accumulating information on actual spending by commodity in their accounting and logistics data systems. The services’ budget offices were to develop the Exhibit OP-31 data using the guidance shown on the template as published in DOD’s Financial Management Regulation. The Department did not provide the services with any other guidance on how to develop information required for Exhibit OP-31 reports. The guidance directed the services to prepare reports showing planned and actual funding and quantities of repairable and consumable spare parts purchases by commodity for multiple fiscal years. Each service employed its own methodology to estimate the amount of money spent on spare parts, as described below:

- The Army used estimates to report its total spending for spare parts and the breakout of spare parts spending for all commodity groups. The Army based its estimates on computer-generated forecasts of the spare parts needed to support current and planned operations. Information from cost data files, logistics files, and the Operating and Support Management Information System was used to develop a consumption rate for spare parts on the basis of anticipated usage, considering such factors as miles driven and hours flown. The consumption factor was entered into the Army’s Training Resources Model, which contains force structure, planned training events, and the projected operating tempo. The model used the consumption factor to estimate the total cost and quantities of spare parts that would be consumed. The model also provided the estimated spending for each of the commodities cited in the exhibit.

- The Navy Department used unaudited actual obligation data from the major commands as its basis for reporting total spending for spare parts and for some commodity groups. However, the breakouts of actual spending data for the aircraft engine and airframe commodities were estimates. The Navy Department’s headquarters budget office developed its reports on the basis of information contained in price and program change reports submitted by the major commands. The Navy Department’s accounting system tracked obligations and developed pricing information for spare parts purchased under numerous subactivity groupings, some of which were tied to the categories listed on the OP-31 Exhibit. For example, codes had been established to track obligations for consumable and repairable spare parts purchased to support ship operations. The budget office prepared summary schedules accumulating these obligations from each command and transferred this information to the appropriate line of the OP-31 Exhibit. While the system provided accounting codes to summarize spare parts spending to support air operations and air training exercises, separate codes had not been established to distinguish spare parts purchased for aircraft engines and airframes, two separate and distinct commodity groupings on the exhibit. Lacking a separate breakout for aircraft engines and airframes, the budget office estimated the amounts for each commodity from historical trends.

- The Air Force used unaudited actual obligation data from its accounting system to identify and report its total spending, but its breakout of spending for the commodity groupings used estimates. The Air Force calculated estimates for each commodity by applying budget factors to the total actual obligation data shown in its accounting system. The accounting system provided these data by expense code, which designated depot-level repairables and consumables by “fly” and “non-fly” obligations. The Air Force allocated all “fly” obligations to airframes and left the engine commodity blank, even though some of the obligations were for engines. The Air Force selected this approach because spare parts for airframes and engines are budgeted together. To estimate the amount spent on the missiles, communications equipment, and other miscellaneous commodities, the Air Force allocated the total “non-fly” obligations on the basis of ratios derived from the amounts previously budgeted for these categories.

While DOD had no reliable system to account for and track all of the needed information on actual spending, some of the services’ major commands have data that can be compiled for this purpose. Our visits to selected major operating commands for each military service revealed that they maintain automated accounting and logistics support data systems that could be used to provide unaudited data on spare parts funding allocations and actual obligations to purchase repairable and consumable spare parts in significant detail. For example, at the Army’s Training and Doctrine Command, we found that the Integrated Logistics Analysis Program provided information to monitor and track obligation authority by individual stock number and federal supply class. Personnel at that location used these data to develop a sample report documenting spending in the format requested by Exhibit OP-31. The Air Force’s Air Combat Command and the Navy’s Commander in Chief, Atlantic Fleet each had systems that also could be used to provide information on spending.
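The consumption-rate approach described for the Army above can be sketched as a simple calculation: a cost-per-usage rate is multiplied by planned operating tempo to yield estimated spending by commodity and in total. All rates, usage figures, and commodity names below are illustrative assumptions, not DOD data.

```python
# Hypothetical sketch of a consumption-rate spending estimate, loosely
# patterned on the Army approach described above. All figures are
# illustrative assumptions rather than actual DOD data.

# Consumption rates: assumed spare parts cost per unit of usage.
consumption_rates = {
    "combat_vehicles": 12.50,   # assumed dollars of spare parts per mile driven
    "aircraft": 1800.00,        # assumed dollars of spare parts per flying hour
}

# Planned operating tempo for the budget year (assumed).
planned_usage = {
    "combat_vehicles": 4_000_000,  # planned miles driven
    "aircraft": 250_000,           # planned flying hours
}

def estimate_spending(rates, usage):
    """Estimate spare parts spending per commodity and in total."""
    by_commodity = {c: rates[c] * usage[c] for c in rates}
    return by_commodity, sum(by_commodity.values())

by_commodity, total = estimate_spending(consumption_rates, planned_usage)
print(by_commodity, total)
```

Such a model yields programmed estimates, not actual obligations, which is why the report distinguishes these figures from the obligation data available in the commands' accounting systems.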
We discussed these reporting deficiencies with Office of the Secretary of Defense comptroller officials, who concurred that some figures on the services’ Exhibit OP-31 reports were estimates and that DOD did not have a comprehensive financial management system that would routinely provide actual spending information. They said that estimates are all they have access to, given the absence of a comprehensive financial management system that reports accurate cost-accounting information. Furthermore, they stated that even though detailed information on such spending is available at the major commands, developing better estimates would entail an expensive and potentially difficult reporting requirement that should be considered in deciding whether the current information is acceptable. DOD’s exhibits were also not complete in that they did not show all of the key information required by the template. DOD’s guidance directed the services to report total operations and maintenance spare parts funding, the spare parts quantities bought, and the reasons for deviations between actual and programmed funding. However, two of the services did not provide information on the quantities of spare parts they had purchased, and none of the services explained variances between actual and initially programmed funding. Service officials commented that these reporting omissions were generally due to DOD’s vague data collection guidance on the template and uncertainties about how to comply. The Army was the only service that reported spare parts quantity purchases each fiscal year. However, the Army’s quantities were estimates that were based on applying historical usage rates to such factors as miles driven and hours flown, even when actual quantities were required. The Navy and Air Force did not report quantities because, according to service officials, such information was not readily available to them.
Furthermore, they said that DOD’s data collection guidance did not adequately explain how this information was to be developed. None of the services explained changes between actual and programmed spending in the exhibits as required. In comparing the June 2001 and February 2002 exhibits, we noted that each service’s fiscal year 2001 actual spending deviated from the amount programmed and that some differences were significant. For example, in the February 2002 exhibits, the Navy showed an increase for fiscal year 2001 of approximately $400 million, and the Air Force showed a decrease of approximately $93 million in the actual amounts spent for spare parts versus the amounts programmed in the June 2001 exhibits. Neither service provided a reason for the change. While DOD guidance requires the services to report total programmed and actual spending amounts, the services do not identify and report pending supplemental funding requests in their programmed spending totals until after the supplemental funds are received. For example, the Navy’s June 2001 exhibit did not include supplemental funding of about $299 million in its reported fiscal year 2001 programmed funding estimate, which totaled approximately $3.5 billion. However, the Navy’s February 2002 exhibit included the additional funding in the fiscal year 2001 actual spending totals. Similarly, the Army’s June 2001 exhibit, which reported programmed funding of approximately $2.1 billion for fiscal year 2002, did not include $250 million in supplemental funding for the purchase of additional spare parts to improve readiness. The supplemental funding was later included in the spending estimates reflected on the February 2002 exhibit. Service officials commented that these reporting omissions were generally due to uncertainties about requirements for reporting changes to spare parts spending estimates that result from supplemental funding.
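The kind of check the template calls for, flagging spending lines whose actual amounts deviate from programmed amounts without an accompanying explanation, can be sketched briefly. The Navy's roughly $400 million increase and the Air Force's roughly $93 million decrease come from the exhibits discussed above; the Air Force's programmed total and the record layout are assumptions for illustration.

```python
# Illustrative deviation check. Amounts are in millions of dollars.
# The Navy programmed/actual pair reflects the ~$400M increase described
# in the report; the Air Force programmed total (2,000) is an assumed
# figure chosen only to reproduce the reported ~$93M decrease.

records = [
    # (service, programmed, actual, explanation)
    ("Navy", 3500, 3900, None),
    ("Air Force", 2000, 1907, None),
]

def unexplained_deviations(records, threshold=0):
    """Flag lines where actual differs from programmed with no explanation."""
    flagged = []
    for service, programmed, actual, note in records:
        if abs(actual - programmed) > threshold and not note:
            flagged.append((service, actual - programmed))
    return flagged

print(unexplained_deviations(records))
```

Both lines are flagged here because no explanation accompanies the change, which mirrors the deficiency the report describes.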
Weaknesses in DOD’s accounting and reporting practices hinder the usefulness of the data to decision makers. Providing actual data on spare parts spending is important to Congress and decision makers because, when linked to factors such as spare parts shortages and readiness, it can help serve as a baseline for evaluating the impact of funding decisions. Because the reports have not cited actual spending and have not been complete, they do not provide Congress with reasonable assurance about the amount of funds being spent on spares. As a result, they have less value to Congress and other decision makers in the Department during their annual deliberations about (1) how best to allocate future operations and maintenance resources to reduce spare parts shortages and improve military readiness and (2) when to make future resource allocation decisions about modernizing the force. Given the importance of spare parts to maintaining force readiness, and as justification for future budget requests, actual and complete information would be important to DOD as well as Congress. Therefore, we recommend that the Secretary of Defense (1) issue additional guidance on how the services are to identify, compile, and report actual and complete spare parts spending information, including supplemental funding, in total and by commodity, as specified by Exhibit OP-31, and (2) direct the Secretaries of the military departments to comply with Exhibit OP-31 reporting guidance to ensure that complete information is provided to Congress on the quantities of spare parts purchased and explanations of deviations between programmed and actual spending. In written comments on a draft of this report, DOD partially concurred with both recommendations. DOD’s written comments are reprinted in their entirety in appendix VII.
DOD expressed concern that the first recommendation focused only on improving the reporting of operations and maintenance appropriations spending for spare parts but did not address other appropriations used for these purposes or working capital fund purchases. DOD stated that in order to have a comprehensive picture of spare parts spending, information on spare parts purchased with working capital funds and other investment accounts needs to be reported. The Department offered to work with Congress to facilitate this kind of analysis. As our report makes clear, we focused our analysis on the information the Department reported—operations and maintenance funding—and our recommendation was directed at improving the accuracy of the information. We continue to believe it is important that the Congress receive accurate actual spending data for these appropriations. Furthermore, as we point out in the report, operations and maintenance funding is the principal source of funds used by the military services to purchase new or repaired spare parts from the working capital funds, and as such, is a key indicator of the priority being placed on spares needs. Lastly, our report recognizes that there are other sources of funds for spare parts purchases, and we support DOD’s statement that it will work with Congress to provide more comprehensive reporting on actual and programmed spending from all sources. In partially concurring with the second recommendation, the Department agreed that the services need to explain deviations between programmed and actual spending but believed that reporting spare parts quantities purchased as required by the financial management regulation does not add significant value to the information being provided to Congress because of the wide range in the unit costs for parts. 
While we recognize that the costs of parts vary significantly, continuing to include such information by commodity provides some basis for identifying parts procurement trends over time and provides valuable information about why shortages may exist for certain parts. Therefore, we continue to believe that our recommendation is appropriate. To determine the accuracy, completeness, and consistency of the oversight reports to Congress on spare parts spending for the active forces under the operations and maintenance appropriation, we obtained copies of and analyzed data reflected on OP-31 exhibits submitted by the Departments of the Army, Navy, and Air Force for the June 2001 and February 2002 budget submissions. We compared data and narrative explanations on the reports with reporting guidelines and templates contained in the DOD Financial Management Regulation. We analyzed and documented the data collection and reporting processes followed by each of the military departments through interviews with officials and reviews of available documentation at DOD’s Office of the Comptroller and budget offices within the Departments of the Army, Navy, and the Air Force. To determine the availability of alternative systems for tracking and documenting information on actual obligations for spare parts purchases, we visited selected major commands in each of the military departments. These major commands included the Army’s Training and Doctrine Command; the Navy’s Commander in Chief, Atlantic Fleet; and the Air Force’s Air Combat Command. However, we did not attempt to validate the commands’ detailed funding data. We also reviewed our prior reports outlining expectations for enhanced oversight reporting on the use of spare parts funds and high-risk operations within the Department of Defense. We performed our review from February through August 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to John P.
Murtha, the Ranking Minority Member of the Subcommittee on Defense, House Committee on Appropriations; other interested congressional committees; the Secretary of Defense; Secretaries of the Army, Air Force, and Navy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Staff acknowledgments are listed in appendix VIII. Key contributors to this report were Richard Payne, Glenn Knoepfle, Alfonso Garcia, George Morse, Gina Ruidera, Connie Sawyer, George Surosky, Kenneth Patton, and Nancy Benco. | GAO was asked by the Department of Defense (DOD) to identify ways to improve DOD's availability of high-quality spare parts for aircraft, ships, vehicles, and weapons systems. DOD's recent reports do not provide an accurate and complete picture of spare parts funding as required by financial management regulation. As a result, the reports do not provide Congress with reasonable assurance about the amount of funds being spent on spare parts. Furthermore, the reports are of limited use to Congress as it makes decisions on how best to spend resources to reduce spare parts shortages and improve military readiness. |
Unlike states that opt to cover CHIP-eligible children in their Medicaid programs and therefore must extend Medicaid covered services to CHIP-eligible individuals, states with separate CHIP programs have flexibility in program design and are at liberty to modify certain aspects of their programs, such as coverage and cost-sharing requirements. However, federal laws and regulations require states’ separate CHIP programs to include coverage for routine check-ups, immunizations, emergency services, and dental services defined as “necessary to prevent disease and promote oral health, restore oral structures to health and function, and treat emergency conditions.” States typically cover a broad array of additional services in their separate CHIP programs and, in some states, adopt the Medicaid requirement to cover Early and Periodic Screening, Diagnostic and Treatment (EPSDT) services. Separate CHIP programs must also comply with mental health parity requirements, meaning they must apply any financial requirements or limits on mental health or substance abuse benefits in the same manner as applied to medical and surgical benefits. With respect to costs to consumers, CHIP premiums and cost-sharing, irrespective of program design, may not exceed amounts as defined by law. States may vary separate CHIP premiums and cost-sharing based on income and family size, as long as cost-sharing for higher-income children is not lower than for lower-income children. Federal laws and regulations also impose additional limits on premiums and cost-sharing for children in families with incomes at or below 150 percent of the federal poverty level (FPL). In all cases, no cost-sharing can be required for preventive services, defined as well-baby and well-child care, including age-appropriate immunizations and pregnancy-related services.
In addition, states may not impose premiums and cost-sharing that, in the aggregate, exceed 5 percent of a family’s total income for the length of the child’s eligibility period in CHIP. PPACA includes provisions that seek to standardize coverage and costs of private health plans in the individual and small group markets. QHPs offered both on and off the exchanges are required to comply with applicable private insurance market reforms, including relevant premium rating requirements, the elimination of lifetime and annual dollar limits on EHBs, prohibition of cost-sharing for preventive services, mental health parity requirements, and the offering of comprehensive coverage. PPACA allows exchanges in each state to offer coverage of pediatric dental services as an integrated benefit in a QHP or through an SADP, which consumers can purchase separately. In exchanges with at least one participating SADP, QHPs are not required to include the pediatric dental benefit. Some states require children obtaining coverage in their state-based exchanges to enroll in an SADP if their QHP does not include the pediatric dental benefit; consumers purchasing coverage in the federally facilitated exchange are not required to do so. With respect to costs to consumers, QHPs must offer coverage that meets one of four metal tier levels, which correspond to actuarial value (AV) percentages that range from 60 to 90 percent: bronze (AV of 60 percent), silver (AV of 70 percent), gold (AV of 80 percent), or platinum (AV of 90 percent). AV represents the share of covered health care costs that a health plan will pay, on average; the higher the AV, the lower the cost-sharing expected to be paid by consumers. Cost-sharing subsidies are available to individuals with incomes between 100 and 250 percent of the FPL to offset the costs they incur through copayments, coinsurance, and deductibles in a silver-level QHP.
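The relationship between metal tier, AV, and expected consumer cost-sharing described above can be sketched simply. Treating the consumer's expected share as the complement of the AV is an approximation for illustration, since AV is an average across an enrolled population rather than a per-person guarantee.

```python
# Metal tier actuarial values (AV) as stated above, in percent.
METAL_TIERS = {"bronze": 60, "silver": 70, "gold": 80, "platinum": 90}

def expected_consumer_share(tier):
    """Approximate share of covered costs borne by consumers, in percent.

    This is a simple complement of the AV, used here only to illustrate
    that a higher AV implies lower expected cost-sharing.
    """
    return 100 - METAL_TIERS[tier]

for tier in METAL_TIERS:
    print(tier, expected_consumer_share(tier))
```

So a bronze enrollee can expect to bear roughly 40 percent of covered costs on average, and a platinum enrollee roughly 10 percent, consistent with the pattern the report describes.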
The cost-sharing subsidies are not provided directly to consumers; instead, QHP issuers are required to offer three variations of each silver plan they market through an exchange in the individual market. These plans are to reflect the cost-sharing subsidies through lower out-of-pocket maximum costs and, if necessary, through lower deductibles, copayments, or coinsurance. Once the adjustments from the subsidies are made, the AV of the silver plan available to eligible consumers will effectively increase from 70 percent to 73, 87, or 94 percent, depending on income. SADPs have different AV requirements than QHPs. SADPs are categorized as “high” and “low” level plans, with 85 and 70 percent AV, respectively. Cost-sharing subsidies are not available for pediatric dental costs incurred by a consumer enrolled in an SADP. Deductibles, copayments, coinsurance amounts, and out-of-pocket maximum costs can vary within these plans, as long as the overall cost-sharing structure meets the required AV levels. Plans are allowed a de minimis variation of +/- 2 percent. Premium costs are not included in the AV computation. Premium tax credits limit eligible consumers’ expected premium contributions to a percentage of their income; in 2014, these premium contributions ranged from $471 to $8,949 for a family of four. The premium tax credit is available to eligible consumers regardless of which metal tier they choose; however, the credit is calculated based on the second-lowest cost silver plan in the rating area in which the consumer resides. Unlike cost-sharing subsidies, which generally do not apply to costs incurred for services by a consumer enrolled in an SADP, the maximum contribution amount on premiums includes premiums for both QHPs and SADPs, if relevant. Finally, PPACA established out-of-pocket maximum costs that apply to EHBs included in QHPs and SADPs. In 2014, these maximum costs for QHPs ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families, for households with incomes between 100 and 400 percent of the FPL.
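The silver plan cost-sharing reduction variants described above can be sketched as a lookup from income to effective AV. The effective AVs of 73, 87, and 94 percent come from the discussion above; the specific income brackets used below are assumptions for illustration only.

```python
# Sketch mapping income (as a percentage of the FPL) to the effective AV
# of the silver plan variant available to the consumer. The 73/87/94
# percent values are stated in the text; the bracket boundaries below
# are assumed for illustration.

def silver_variant_av(income_pct_fpl):
    """Effective actuarial value of the silver plan, by income (% of FPL)."""
    if 100 <= income_pct_fpl <= 150:
        return 94   # richest variant, lowest-income eligible consumers (assumed bracket)
    if 150 < income_pct_fpl <= 200:
        return 87   # assumed bracket
    if 200 < income_pct_fpl <= 250:
        return 73   # assumed bracket
    return 70       # no cost-sharing subsidy; standard silver AV

print(silver_variant_av(140))  # 94
print(silver_variant_av(300))  # 70
```

The lookup makes concrete how the subsidy narrows, but does not eliminate, the cost difference between CHIP and QHP coverage that the report goes on to describe.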
Out-of-pocket maximum costs for SADPs are in addition to the out-of-pocket maximum costs for QHPs and were established by each exchange in 2014. CHIP-eligible children may enroll in QHPs instead of enrolling in CHIP, either through a child-only plan or through a plan with other family members, but they are ineligible for premium tax credits and cost-sharing subsidies because of their eligibility for CHIP. However, if a state experiences a CHIP funding shortfall in the future and is therefore unable to enroll all CHIP-eligible children into a CHIP plan, such children may qualify for premium tax credits and cost-sharing subsidies to offset the cost of QHP coverage. In states not experiencing a funding shortfall, enrolling CHIP-eligible children in QHPs would generally increase costs for families. Under CMS regulations, if an individual who is ineligible for cost-sharing subsidies enrolls in the same policy as another family member who is eligible for cost-sharing subsidies, nobody covered under the policy will qualify for cost-sharing subsidies. As a result, enrolling CHIP-eligible children in QHPs could result in a loss of cost-sharing subsidies for family members that are eligible for these subsidies. To maintain cost-sharing subsidies for eligible family members, the CHIP-eligible child would need to be enrolled in a child-only health plan, for which premium tax credits would be unavailable because of the child’s eligibility for CHIP. We determined that coverage in the selected CHIP plans and QHPs in our five states was generally comparable in that it included some level of coverage for nearly all the services we reviewed. Notable exceptions were certain enabling services and pediatric dental services, which were more frequently covered by the selected CHIP plans. (See app. I for a detailed list of selected services covered by the plans we reviewed.)
With respect to certain enabling services, which may be particularly important for low-income children, care coordination or case management was offered by all selected CHIP plans, but by only one selected QHP. Similarly, routine transportation to and from medical appointments was covered by two CHIP plans but by none of the selected QHPs. With respect to pediatric dental services, the QHP in New York was the only selected QHP that covered them; the selected QHPs in the other four states did not integrate pediatric dental services within the medical coverage they offered. To obtain coverage for pediatric dental services, consumers who purchased the selected QHP in these states would also need to purchase an SADP. For consumers who purchased the selected QHP in New York or the selected SADP in the other four states, we determined that the pediatric dental coverage available was generally comparable to what was available in their state’s selected CHIP plan, with the exception of Utah, where the selected CHIP plan was more generous than the selected SADP. However, the extent to which consumers obtained coverage that included pediatric dental services is not clear. Available federal data with information on QHP enrollment suggest that many children in the United States with exchange coverage in 2014 may have been without comprehensive dental coverage. According to our analysis of enrollment data for 2014 provided by ASPE, 16 percent of children younger than 18 years of age in the 36 states with federally facilitated exchanges were enrolled in a QHP that included comprehensive dental services that covered check-ups, basic, and major dental services. The remaining children were enrolled in QHPs that had either less than comprehensive dental coverage or no dental coverage. Some of these families are likely to have purchased an SADP for their children, however.
According to an ASPE report issued in May 2014, 18 percent of children younger than 18 years of age in the 36 states with federally facilitated exchanges who enrolled in a QHP also enrolled in an SADP, and these were likely among the families that had no comprehensive dental coverage included in their QHP. According to our analysis of enrollment data for 2014 provided by ASPE, virtually no children younger than 18 years of age in the 36 states with federally facilitated exchanges were enrolled in both a QHP that included comprehensive dental services and an SADP. According to CMS, a QHP must offer check-ups, basic, and major dental services to be considered a QHP with embedded dental coverage. According to our analysis of enrollment data for 2014 provided by ASPE, less than half of the QHPs in a given state offered any type of dental coverage—check-ups, basic, or major dental services—in two-thirds of states with federally facilitated exchanges. Both the selected CHIP plans and QHPs did impose limits on outpatient therapies, pediatric vision, and pediatric hearing. One notable difference between these selected CHIP plans and QHPs was the frequency with which they limited home- and community-based health care. While the selected QHP in four states imposed day or visit limits on these services, only one state's selected CHIP plan did so. In contrast, no QHPs imposed limits on durable medical equipment, while one CHIP plan imposed a $2,000 annual limit. For services where coverage limits were sometimes imposed on QHPs and CHIP plans, our review found that the limits on CHIP plans were at times less restrictive. For example, the selected QHP in Utah limited home- and community-based health care services to 60 visits per year while the selected CHIP plan in the state did not impose any limits on these services. Comparability between service limits in states' selected CHIP plans and QHPs was less clear for outpatient therapy services.
For example, the selected CHIP plan in New York limited outpatient physical and occupational therapies to 6 weeks per year, with no limits on outpatient speech therapy, while the selected QHP in the state limited outpatient therapies to a combined 60 visits per condition per lifetime. (See app. II for a detailed list of coverage limits for services we reviewed in the selected plans.) In addition, for pediatric dental services, coverage limits in the selected QHP and SADPs were generally similar to those in the selected CHIP plan; however, when there were differences, CHIP was generally more generous. For example, the selected CHIP plan in Kansas allowed one sealant per tooth per year; in contrast, the selected high and low SADPs in the state allowed one sealant per tooth every three years. Similarly, the selected CHIP plan in Utah did not have any coverage limits on x-rays while the selected high and low SADPs in the state did. (See app. III for a detailed list of selected dental limits we reviewed in selected plans.) We determined that costs to consumers were almost always less in the selected CHIP plans than in the selected QHPs. Even considering PPACA provisions aimed at reducing cost-sharing amounts for certain low-income consumers who purchased QHPs, the differences remained, though they were smaller. For example, the selected CHIP plans in four of the five states did not include any deductible, which means that enrollees in those states did not need to pay a specified amount before the plan began paying for services. In contrast, QHPs we reviewed typically imposed annual deductibles, which were as high as $500 for an individual and $1,500 for a family in the plan variation that offered the lowest available deductibles for QHP enrollees. In addition, consumers who purchase selected SADPs may face separate deductible costs.
For example, whereas dental services were subject to the plan deductible in the New York QHP, SADPs in Colorado, Illinois, and Kansas had separate dental deductibles that ranged from $25 to $50 for individuals enrolled in selected high plans and from $45 to $50 for individuals enrolled in selected low plans. (See app. III for a detailed list of selected dental cost-sharing we reviewed in the selected plans.) For services we reviewed where the plans imposed copayments or coinsurance, the amount was typically less in a state's selected CHIP plan compared to its selected QHP, even considering PPACA provisions aimed at reducing cost-sharing amounts for certain low-income consumers who purchased QHPs. For example, the selected CHIP plan in two of our five states—Kansas and New York—did not impose copayments or coinsurance on any of the services we reviewed. In two of the remaining three states, the selected CHIP plan imposed copayments or coinsurance on less than half of the services we reviewed, and the amounts were usually minimal and set on a sliding income scale. For example, for each brand-name prescription drug, the Illinois CHIP plan imposed a $3.90 copayment on enrollees with incomes greater than 142 and up to 157 percent of the FPL, which increased to $7 for enrollees with incomes greater than 209 and up to 313 percent of the FPL. In contrast, selected QHPs in all five states imposed copayments or coinsurance on most covered services we reviewed, and the amounts were consistently higher than in the CHIP plan in the same state. For example, depending on income, the copayment for primary care and specialist physician visits in Colorado ranged from $2 to $10 per visit for enrollees in the selected CHIP plan, but was $25 and $35 per visit, respectively, for all enrollees in the selected QHP. Cost-sharing for dental services was also higher in a state's selected SADP than in its selected CHIP plan a majority of the time.
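The sliding income scale in the Illinois example above can be sketched as a band lookup. Only the two copayment bands quoted in this report are included; other bands existed in the Illinois CHIP schedule but are not listed here, so the table is intentionally incomplete and illustrative.

```python
# Illustrative sliding-scale copayment lookup using only the two Illinois
# CHIP brand-name drug bands quoted above; intermediate income bands are
# deliberately omitted because they are not given in the report.

BRAND_DRUG_COPAY_BANDS = [
    # (lower FPL % exclusive, upper FPL % inclusive, copayment in dollars)
    (142, 157, 3.90),
    (209, 313, 7.00),
]

def brand_drug_copay(income_pct_fpl):
    for lower, upper, copay in BRAND_DRUG_COPAY_BANDS:
        if lower < income_pct_fpl <= upper:
            return copay
    return None  # income band not covered by the excerpt above

print(brand_drug_copay(150))  # 3.9
print(brand_drug_copay(250))  # 7.0
```

The half-open band boundaries mirror the report's "greater than 142 and up to 157 percent" phrasing.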
In addition, in states where the selected QHP charged coinsurance and the selected CHIP plan required a copayment, a direct comparison of cost differences could not be made, although data suggest CHIP costs would generally be lower. For example, for an inpatient hospital admission, higher-income enrollees in the selected CHIP plan in Colorado paid $50, while all enrollees in the selected QHP in the state were responsible for 20 percent coinsurance after the deductible was met, an amount that was likely to be higher given that 20 percent of the average price for an inpatient facility stay in 2011 was over $3,000. (See app. IV for a detailed list of cost-sharing for services we reviewed in selected plans.) Our review of premiums for selected CHIP plans and QHPs also suggests that premiums were always less in the CHIP plans than in the QHPs we reviewed, even with the application of the premium tax credit to defray the cost of QHP premiums. For example, according to CHIP officials, annual CHIP premiums in 2014 for an individual varied by income level and ranged from $0 for the lowest-income CHIP enrollees in Colorado, Illinois, Kansas, and New York, to $720 for enrollees between 351 and 400 percent of the FPL in New York, with most enrollees across the five selected states paying less than $200 per year. In contrast, annual premiums for a single child enrolled in selected QHPs ranged from $1,111 to $1,776 in our five states before the application of the premium tax credit. With the premium tax credit, the annual premium amount for selected QHPs was often significantly lower, but was still higher than the selected CHIP plan in all five states. For example, in Illinois, the premium for the selected CHIP plan for an individual with an income at 150 percent of the FPL was $0, compared with $1,254 for the selected QHP, which was reduced to $944 after the premium tax credit was applied.
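The Illinois premium figures above imply the size of the premium tax credit, which the report does not state directly. A minimal arithmetic sketch, using only the dollar amounts quoted above:

```python
# Worked arithmetic for the Illinois example above: the premium tax credit
# implied by the quoted figures is the gap between the gross and the
# post-credit QHP premium at 150 percent of the FPL.

chip_annual_premium = 0        # selected Illinois CHIP plan, 150% FPL
qhp_gross_annual_premium = 1254
qhp_net_annual_premium = 944   # after premium tax credit

implied_premium_tax_credit = qhp_gross_annual_premium - qhp_net_annual_premium
extra_annual_cost_of_qhp = qhp_net_annual_premium - chip_annual_premium

print(implied_premium_tax_credit)   # 310
print(extra_annual_cost_of_qhp)     # 944
```

Even after the implied $310 credit, the family would pay $944 more per year for the QHP than for the CHIP plan.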
However, the additional premium cost to families enrolling previously eligible CHIP children into their QHPs—a possibility if CHIP funding is not reauthorized—may be minimal or nothing. Because PPACA limits the amount lower-income families pay in premiums, families with incomes at 250 percent or less of the FPL—at least 75 percent of the separate CHIP enrollees in the states we reviewed—would generally pay no additional premium to add a child to their QHP. For example, in Kansas, the 2014 annual premium for the lowest-cost silver-level QHP was $4,875 for a couple age 40 and an additional $1,211 to add a child. However, if the couple's income was 200 percent of the FPL, their maximum annual premium would be $2,494, and they would incur no additional costs by adding a child to their plan. Finally, all selected CHIP plans and QHPs limited the total potential costs to consumers by imposing out-of-pocket maximum costs, and these maximum costs were typically less in the CHIP plans we reviewed. For example, all five states applied the limit a family could pay in CHIP plans as established under federal law—including deductibles, copayments, coinsurance, and premiums—at 5 percent of a family's income during the child's (or children's) eligibility for CHIP. This 5 percent cap resulted in limits that varied based on a family's income level. This amount ranged from $584 to $2,334 for individuals, and $1,193 to $4,770 for a family of four, between 100 and 400 percent of the FPL in 2014. PPACA also established out-of-pocket maximum costs that apply to QHPs and may vary by income. These maximum costs do not include premiums, which may be separately reduced through the application of premium tax credits. QHPs may set out-of-pocket maximum costs that are lower than those established by PPACA, which was the case for three of the five selected QHPs. For example, the selected QHP in Colorado had individual out-of-pocket maximum costs ranging from $750 to $6,300 for individuals between 100 and 400 percent FPL.
This amount was less than the out-of-pocket maximum costs established under federal law, which ranged from $2,250 to $6,350 for individuals between 100 and 400 percent FPL in 2014. PPACA out-of-pocket maximum costs on EHB for households with incomes between 100 and 400 percent of the FPL in 2014 ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families. In 2015, out-of-pocket maximum costs on EHB for households with incomes between 100 and 400 percent of the FPL ranged from $2,250 to $6,600 for individuals and from $4,500 to $13,200 for families. Out-of-pocket maximum costs for SADPs are in addition to the out-of-pocket maximum costs for QHPs and may increase potential costs for families who purchase them. In 2014, each exchange established maximum out-of-pocket costs for SADPs, which do not include premiums. Annual out-of-pocket maximum costs for three of the four selected SADPs were $700 for one child and $1,400 for two or more children. We provided a draft of this report for comment to HHS. HHS officials provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Katherine Iritani at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
The Patient Protection and Affordable Care Act (PPACA) allows exchanges in each state to make available coverage of pediatric dental services as an embedded benefit in a QHP or through a stand-alone dental plan (SADP), which consumers may purchase separately. In exchanges with at least one participating SADP, QHPs were not required to include the pediatric dental benefit. Consumers in these five states were not required to purchase SADPs in 2014, even if their QHP did not include the pediatric dental benefit. Rehabilitation is provided to help a person regain, maintain, or prevent deterioration of a skill that has been acquired but then lost or impaired due to illness, injury, or disabling condition. While PPACA and its implementing regulations do not define habilitative services, habilitation has been defined by several advocacy groups as a service that is provided in order for a person to attain, maintain, or prevent deterioration of a skill or function never learned or acquired due to a disabling condition. The plan covers bone anchored hearing aids and cochlear implants only. Bone anchored hearing aids are used when traditional hearing aids are not efficient because of complications such as chronic infections or blockage. Cochlear implants are for patients with severe hearing loss where traditional amplification is no longer beneficial. The plan covers care coordination and case management for children with special health care needs only. Utah CHIP defines children with special health care needs as enrollees who have or are at increased risk for chronic physical, developmental, behavioral, or emotional conditions and who also require health and related services of a type or amount beyond that required by adults and children generally. Routine transportation includes transportation to and from medical appointments.
Routine transportation is covered for CHIP children greater than 142 and up to 209 percent of the federal poverty level only. Tables 1 and 2 provide information on coverage limits for selected services in State Children’s Health Insurance Program (CHIP) plans and qualified health plans (QHP) in each of the five states we reviewed: Colorado, Illinois, Kansas, New York, and Utah. For coverage limits on pediatric dental services, see app. III. Tables 3 through 12 provide information on coverage, coverage limits, and cost-sharing—deductibles, copayments, and coinsurance—for selected dental services in State Children’s Health Insurance Program (CHIP) plans we reviewed in five states: Colorado, Illinois, Kansas, New York, and Utah; a qualified health plan (QHP) in New York; and stand-alone dental plans (SADP) in Colorado, Illinois, Kansas, and Utah. For selected CHIP plans and the QHP in New York, we note differences in cost-sharing amounts by income level. For selected SADPs, we note the cost-sharing amounts for the “high” and “low” level options, which have actuarial values of 85 and 70 percent, respectively. For all five states, cost-sharing amounts were subject to out-of-pocket maximum costs. For CHIP enrollees in each state, cost-sharing and premium amounts were subject to a federally established out-of-pocket maximum cost equal to 5 percent of a family’s income. For QHP enrollees, issuers established an out-of-pocket maximum cost for each plan that was equal to or less than the out-of-pocket maximum cost established under the Patient Protection and Affordable Care Act (PPACA). PPACA out-of-pocket maximum costs for households with incomes between 100 and 400 percent of the federal poverty level (FPL) in 2014 ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families. In 2014, each exchange established out-of-pocket maximum costs for SADPs.
Annual out-of-pocket maximum costs for the selected SADPs in Colorado, Illinois, and Kansas were $700 for one child and $1,400 for two or more children. The selected SADP in Utah imposed an out-of-pocket maximum cost of $40 for the low plan and $20 for the high plan. In contrast to CHIP, the out-of-pocket maximum costs for QHPs and SADPs do not include premiums. Tables 13 through 17 provide information on cost-sharing—deductibles, copayments, and coinsurance—for selected services in State Children’s Health Insurance Program (CHIP) plans and qualified health plans (QHP) we reviewed in five states: Colorado, Illinois, Kansas, New York, and Utah. For selected CHIP plans and QHPs, we note differences in cost-sharing amounts by income level. For selected QHPs, these variations reflect the cost-sharing subsidies that are available to certain enrollees. For all five states, cost-sharing amounts were subject to out-of-pocket maximum costs. For CHIP enrollees in each state, cost-sharing and premium amounts were subject to a federally established out-of-pocket maximum cost equal to 5 percent of a family’s income. For QHP enrollees, issuers established an out-of-pocket maximum cost for each plan that was equal to or less than out-of-pocket maximum costs established under the Patient Protection and Affordable Care Act (PPACA). PPACA out-of-pocket maximum costs for households with incomes between 100 and 400 percent of the federal poverty level (FPL) in 2014 ranged from $2,250 to $6,350 for individuals and $4,500 to $12,700 for families. These out-of-pocket maximum costs do not include costs associated with services provided through an SADP and, in contrast to CHIP, these out-of-pocket maximum costs do not include premiums. In addition to the contact named above, Susan T. Anthony, Assistant Director; Sandra George; John Lalomio; Laurie Pachter; and Teresa Tam made key contributions to this report.
Children’s Health Insurance: Cost, Coverage, and Access Considerations for Extending Federal Funding. GAO-15-268T. Washington, D.C.: December 3, 2014. Children’s Health Insurance: Information on Coverage of Services, Costs to Consumers, and Access to Care in CHIP and Other Sources of Insurance. GAO-14-40. Washington, D.C.: November 21, 2013. Children’s Health Insurance: Opportunities Exist for Improved Access to Affordable Insurance. GAO-12-648. Washington, D.C.: June 22, 2012. Medicaid and CHIP: Most Physicians Serve Covered Children but Have Difficulty Referring Them for Specialty Care. GAO-11-624. Washington, D.C.: June 30, 2011. Medicaid and CHIP: Given the Association between Parent and Child Insurance Status, New Expansions May Benefit Families. GAO-11-264. Washington, D.C.: February 4, 2011. Oral Health: Efforts Under Way to Improve Children’s Access to Dental Services, but Sustained Attention Needed to Address Ongoing Concerns. GAO-11-96. Washington, D.C.: November 30, 2010. Medicaid: State and Federal Actions Have Been Taken to Improve Children’s Access to Dental Services, but Gaps Remain. GAO-09-723. Washington, D.C.: September 30, 2009.

Federal funds appropriated to states for CHIP—the jointly financed health insurance program for certain low-income children—are expected to be exhausted soon after the end of fiscal year 2015 unless Congress acts to appropriate new funds. Beginning in October 2015, any state with insufficient CHIP funding must establish procedures to ensure that children who are not covered by CHIP are screened for Medicaid eligibility. If ineligible, children may be enrolled into a private qualified health plan—or QHP—that has been certified by the Secretary of Health and Human Services (HHS) as comparable to CHIP, if such a QHP is available. GAO was asked to examine coverage and costs to consumers in selected CHIP plans and private QHPs in selected states.
GAO reviewed (1) coverage and (2) costs to consumers for one CHIP plan, one QHP, and, where applicable, one SADP in each of five states—Colorado, Illinois, Kansas, New York, and Utah. State selection was based on variation in location, program size, and design; CHIP plan selection was based on high enrollment; and QHP selection was based on low plan premiums. GAO obtained CHIP and QHP premium data from state officials and federal and state websites. GAO also obtained documents from and spoke to federal officials, including from HHS's Assistant Secretary for Planning and Evaluation, state officials, including from CHIP and insurance departments, and issuers of QHPs. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. In five selected states, GAO determined that coverage of services in the selected State Children's Health Insurance Program (CHIP) plans was generally comparable to that of the selected private qualified health plans (QHP), with some differences. In particular, the plans were generally comparable in that most covered the services GAO reviewed with the notable exceptions of pediatric dental and certain enabling services such as translation and transportation services, which were covered more frequently by the CHIP plans. For example, only the selected QHP in New York covered pediatric dental services; the QHPs in the other four states did not include pediatric dental services, although some officials indicated this would change for 2015 offerings. In those four states, stand-alone dental plans (SADP) could be purchased separately. Selected CHIP plans and QHPs were also similar in terms of the services on which they imposed day, visit, or dollar limits, although the five selected CHIP plans generally imposed fewer limits than the selected QHPs. For services where coverage limits were sometimes imposed on QHPs and CHIP plans, GAO's review found that the limits on CHIP plans were at times less restrictive. 
For example, the selected QHP in Utah limited home- and community-based health care services to 60 visits per year while the selected CHIP plan did not impose any limits. In addition, for pediatric dental services, coverage limits in the selected SADPs were generally similar to those in the selected CHIP plan; however, when there were differences, CHIP was generally more generous. Consumers' costs for these services—defined as deductibles, copayments, coinsurance, and premiums—were almost always less in the five states' selected CHIP plans when compared to their respective QHPs, despite the application of subsidies authorized under the Patient Protection and Affordable Care Act (PPACA) that reduce these costs in the QHPs. Specifically, when cost-sharing applied, the amount was typically less for CHIP plans, even considering PPACA provisions aimed at reducing cost-sharing amounts for certain low-income consumers who purchased QHPs. For example, an office visit to a specialist in Colorado would cost a CHIP enrollee a $2 to $10 copayment per visit, depending on their income, compared to the lowest available copayment of $25 per visit in the selected Colorado QHP. GAO's review of premium data further suggests that selected CHIP premiums were always lower than selected QHP premiums, even when considering the application of PPACA subsidies that help to defray the cost to certain consumers. For example, the 2014 annual premium for the selected Illinois CHIP plan for an individual at 150 percent of the federal poverty level (FPL) was $0. By comparison, the 2014 annual premium for the selected Illinois QHP was $1,254, which was reduced to $944 for an individual at 150 percent of the FPL, after considering federal subsidies to offset the cost of coverage. Finally, all selected CHIP plans and QHPs GAO reviewed limited out-of-pocket maximum costs, and these maximum costs were typically less in the CHIP plans.
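The federally established CHIP cap discussed in this report—5 percent of a family's income in cost-sharing and premiums—can be checked arithmetically. The 2014 HHS poverty guideline figures below ($11,670 for an individual and $23,850 for a family of four, 48 contiguous states) and the round-half-up convention are supplied here as assumptions for illustration; they reproduce the dollar ranges quoted in the report.

```python
from decimal import Decimal, ROUND_HALF_UP

# Arithmetic sketch of the CHIP 5-percent-of-income cap. The 2014 poverty
# guideline baselines and the rounding convention are assumptions made for
# this illustration, not figures taken from the report itself.
FPL_2014 = {"individual": Decimal("11670"), "family_of_four": Decimal("23850")}

def chip_annual_cap(household, pct_fpl):
    """Whole-dollar cap: 5 percent of income at the given percent of the FPL."""
    income = FPL_2014[household] * pct_fpl / 100
    cap = income * Decimal("0.05")
    return int(cap.quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print(chip_annual_cap("individual", 100))      # 584
print(chip_annual_cap("individual", 400))      # 2334
print(chip_annual_cap("family_of_four", 100))  # 1193
print(chip_annual_cap("family_of_four", 400))  # 4770
```

These results match the report's stated ranges of $584 to $2,334 for individuals and $1,193 to $4,770 for a family of four between 100 and 400 percent of the FPL.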
Medicaid is jointly financed by the federal government and the states, with the federal government matching most state Medicaid expenditures using a statutory formula that determines a federal matching rate for each state. Medicaid is a significant component of federal and state budgets, with estimated total outlays of $576 billion in fiscal year 2016, of which $363 billion is expected to be financed by the federal government and $213 billion by the states. Medicaid served about 72 million individuals, on average, during fiscal year 2016. As a federal-state partnership, both the federal government and the states play important roles in ensuring that Medicaid is fiscally sustainable over time and effective in meeting the needs of the populations it serves. States administer their Medicaid programs within broad federal rules and according to individual state plans approved by CMS, the federal agency that oversees Medicaid. Federal matching funds are available to states for different types of payments that states make, including payments made directly to providers for services rendered under a fee-for-service model and payments made to managed care organizations: Under a fee-for-service delivery model, states make payments directly to providers; providers render services to beneficiaries and then submit claims to the state to receive payment. States review and process fee-for-service claims and pay providers based on state-established payment rates for the services provided. Under a managed care delivery model, states pay managed care organizations a set amount per beneficiary; providers render services to beneficiaries and then submit claims to the managed care organization to receive payment. Managed care plans are required to report to the states information on services utilized by Medicaid beneficiaries enrolled in their plans—information typically referred to as encounter data.
Most states use both fee-for-service and managed care delivery models, although the number of beneficiaries served through managed care has grown in recent years. Federal law requires each state, under both fee-for-service and managed care delivery models, to operate a claims processing system to record information about the services provided and report this information to CMS: Provider claims and managed care encounter data are required to include information about the service provided, including the general type of service; a procedure code that identifies the specific service provided; the location of the service; the date the service was provided; and information about the provider who rendered the service (e.g., provider identification number). Fee-for-service claims records must include the payment amount. Federal law requires states to collect managed care encounter data, but actual payment amounts to individual providers are not required. Long-term services and supports financed by Medicaid are generally provided in two settings: institutional facilities, such as nursing homes and intermediate-care facilities for individuals with intellectual disabilities; and home and community settings, such as individuals’ homes or assisted living facilities. Under Medicaid requirements governing the provision of services, states generally must provide institutional care to Medicaid beneficiaries, while HCBS coverage is generally an optional service. Medicaid spending on long-term services and supports provided in home and community settings has increased dramatically over time—to about $80 billion in federal and state expenditures in 2014—while the share of spending for care in institutions has declined, and HCBS spending now exceeds long-term care spending for individuals in institutions (see fig. 1). All 50 states and the District of Columbia provide long-term care services to some Medicaid beneficiaries in home and community settings. 
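The claim and encounter data elements listed above can be summarized as a record type. This is an illustrative sketch only: the field names are hypothetical, and actual state claims systems and CMS reporting layouts differ.

```python
# Illustrative record type for the claim/encounter fields described above.
# Field names and the example procedure code are hypothetical illustrations,
# not an actual state or CMS file layout.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ServiceRecord:
    service_type: str                # general type of service
    procedure_code: str              # identifies the specific service provided
    place_of_service: str            # location where the service was provided
    service_date: date               # date the service was provided
    provider_id: str                 # rendering provider's identification number
    payment_amount: Optional[float] = None  # required on fee-for-service claims;
                                            # not required in managed care
                                            # encounter data

ffs_claim = ServiceRecord("personal care", "T1019", "home",
                          date(2016, 3, 1), "1234567890", payment_amount=72.50)
encounter = ServiceRecord("personal care", "T1019", "home",
                          date(2016, 3, 1), "1234567890")
print(encounter.payment_amount is None)  # True
```

Making the payment amount optional mirrors the distinction above: fee-for-service claims must carry it, while encounter data need not report payments to individual providers.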
Personal care services, a key type of HCBS, are typically nonmedical services provided by personal care attendants—home-care workers who may or may not have specialized training. Demand for personal care services is expected to increase in coming years, as is the number of attendants providing these services. The number of Medicaid beneficiaries receiving personal care services at this time is not known, but it is likely in the millions. In calendar year 2012, the most recent year for which complete data were available, an estimated 1.5 million beneficiaries in the 35 states reporting at the time received personal care services at least once. Total Medicaid spending for personal care services is also not known, as spending in managed care delivery systems is not reported by service. Total Medicaid spending for personal care services in fee-for-service delivery systems was about $15 billion in fiscal year 2015. With approval from CMS, states can choose to provide personal care services under one or more types of authorities (referred to in this statement as programs) put in place over the past 41 years under different sections of the Social Security Act. The various types of programs provide states with options for permitting participant direction and choices about how to limit services, among other things (see table 1). CMS has implemented the different statutory requirements associated with these various program types by issuing regulations, as well as guidance to help states implement their Medicaid programs in accordance with applicable statutory and regulatory requirements. Guidance can include letters to state Medicaid directors, program manuals, and templates to help states apply for CMS approval to provide certain services like personal care. Together with federal statutes, the regulations and guidance issued by CMS establish a broad federal framework for the provision of personal care services.
States are responsible for establishing and administering specific policies and programs within the federal parameters laid out in this framework. In our 2016 report examining the federal program requirements for the multiple programs under which personal care services are provided, we found significant differences in federal requirements related to beneficiary safety and ensuring that billed services are provided. These differences may translate to differences in beneficiary protections across program types. Program requirements can include general safeguards for ensuring beneficiary health and welfare, quality assurance measures, critical incident monitoring, and attendant screening. For example, states implementing an HCBS Waiver program or a State Plan HCBS program must: Describe to CMS how the state Medicaid agency will determine that it is assuring the health and welfare of beneficiaries. To do so, states must describe the activities or processes related to assessing or evaluating the program; which entity will conduct the activities; the entity responsible for reviewing the results of critical incident investigations; and the frequency at which activities are conducted. Demonstrate to CMS, by providing specific details, that an incident management system is in place, including incident reporting requirements that establish the type of incidents that must be reported, who must report incidents, and the timeframe for reporting. In contrast, states implementing a State Plan personal care services program or a Community First Choice program have fewer requirements for beneficiary safeguards. For example, for these programs, states are not required to do the following: Provide CMS with detailed information describing the activities they are taking to assure the health and welfare of beneficiaries.
Demonstrate to CMS specific details about their critical incident management process and incident reporting system; instead they are required to describe more generally their “process for the mandatory reporting, investigating and resolution of allegations of neglect, abuse, or exploitation.” Table 2 below illustrates more broadly the differences in federal program requirements that establish beneficiary safeguards and protections that we identified in our 2016 report. Differences in federal program requirements may also result in significant differences in the level of assurance that billed services are actually provided to beneficiaries. States implementing HCBS Waiver programs and State Plan HCBS programs, for example, are required by CMS to provide evidence that the state is only paying claims when services are actually rendered, while the State Plan personal care services and Community First Choice programs are not required to do so. Table 3 below highlights the federal Medicaid personal care services program requirements that we identified in our 2016 report to ensure that billed services are provided for each of the different types of HCBS programs states may administer. The four selected states we examined as part of our 2016 report used different methods to ensure attendants provided billed services to beneficiaries, according to state officials. For example, for at least some personal care services programs, two states required beneficiaries to sign timesheets, and two states used electronic visit verification timekeeping systems. All four states performed quality assurance reviews for some personal care services programs to ensure billed services are received. The differing federal program requirements can create complexities for states and others in understanding federal requirements governing different types of HCBS programs, including personal care services.
These different requirements may also result in significant differences in beneficiary safeguards and fiscal oversight, as illustrated in the following examples. Beneficiaries may experience different health and welfare safeguards depending on the program in which they are enrolled. For example, in one state we reviewed in 2016, the state required quarterly or biannual monitoring of beneficiaries for most of its personal care services programs. In contrast, for another program, the state required only annual monitoring contacts, in part, officials told us, due to the differing program requirements. Depending on the program type, CMS may have fewer assurances that beneficiaries with similar levels of need are in programs with similar protections. For example, three of the four states we reviewed—Maryland, Oregon, and Texas—have in recent years transitioned coverage of personal care services for beneficiaries who need an institutional level of care from personal care services programs with relatively more stringent federal beneficiary safety requirements to programs with relatively less stringent requirements. Although they were not required to do so, state officials in the three states reported that the states chose to continue using the same quality assurance measures in the new programs as the best way to ensure safety for beneficiaries. Without more harmonized requirements, we concluded that CMS has no assurance that states that transition personal care services from HCBS Waivers to Community First Choice in the future will make the same decisions. In addition, states can use different processes for each personal care services program to ensure that billed services are actually provided, and some programs may not be explicitly subject to federal personal care services requirements in this regard.
For example, in one state we reviewed in 2016, steps taken to ensure billed services are provided under some types of personal care services programs were not required in another of the state's programs. A report we issued in 2012 reviewing states' implementation of different HCBS programs also suggested that states could benefit from more harmonization of program requirements. Officials in selected states we reviewed in 2012 noted the complexity of operating multiple programs. For example, officials from one state reported that the complexity resulted in a siloed approach, with different enrollment, oversight, and reporting requirements for each program. The administration and understanding of the programs available to beneficiaries were difficult for state staff and beneficiaries, according to officials in another state. The officials indicated that they would prefer that CMS issue guidance on how states could operate different HCBS program types together, rather than issuing guidance on each program separately. In our 2016 report, we acknowledged certain efforts CMS had taken to harmonize requirements and improve oversight of personal care services programs. However, despite these efforts, we found that significant differences in program requirements existed. We recommended that CMS take additional steps to better harmonize and achieve a more consistent application of program requirements, as appropriate, across the different personal care services programs in a way that accounts for common risks faced by beneficiaries, and to better ensure that billed services are provided. CMS agreed with these recommendations and has sought input by publishing a request for information on numerous topics related to Medicaid home and community-based services, including input on how to ensure beneficiary health and safety and program integrity across different types of personal care services programs.
In our 2017 report examining the data CMS uses to monitor the provision of personal care services, we found that claims and encounter data collected by CMS were not timely. Data are typically not available for analysis and reporting by CMS or others for several years after services are provided. We found that this happens for two reasons. First, although states have 6 weeks following the completion of a quarter to report their claims data, their reporting could be delayed as a result of providers and managed care plans not submitting data in a timely manner, according to the CMS contractor responsible for compiling data files of Medicaid claims and encounters. For example, providers may submit claims for fee-for-service payments to the state late, and they may need to resubmit claims to make adjustments or corrections before the claims can be paid by the state. Second, once complete MSIS data are submitted by the states, the data must be compiled into annual person-level claims files that are in an accessible format, checked to identify and correct data errors, and consolidated for any claims with multiple records. This process can take several years for a single year of data; as a result, by the time information from claims and encounters becomes available for use by CMS for purposes of program management and oversight, it could be several years old. We also found that the Medicaid personal care services claims and encounter data that CMS collects were incomplete in two ways. First, specific data on beneficiaries' personal care services were not included in the calendar year 2012 MSIS data for 16 states, as of 2016, when we conducted our analysis. Nevertheless, these 16 states received federal matching funds for the $4.2 billion in total fee-for-service payments for personal care services that year—about 33 percent of total expenditures for personal care services reported by all states (see figure 2).
Second, even for the 35 states for which 2012 MSIS claims and encounter data were available, certain data elements collected by CMS were incomplete. For example, for the records we analyzed, 20 percent included no payment information, 15 percent included no provider identification number to identify the provider of service, and 34 percent did not identify the quantity of services provided (see figure 3). Incomplete data limit CMS's ability to track spending changes and corroborate spending with reported expenditures, because the agency lacks important information on a significant amount of Medicaid payments for personal care services. For example, among the 2012 claims we reviewed for personal care services under a fee-for-service delivery model, claims without a provider identification number accounted for about $4.9 billion in total payments. Similarly, payments for fee-for-service claims with missing information on the quantity of personal care services provided totaled about $5.1 billion. These data gaps represented a significant share of total personal care services spending, which totaled about $15 billion in fee-for-service expenditures in 2015. Even when states' claims and encounter data collected by CMS were complete, we found that they were often inconsistent, which limits the usefulness of the data for identifying questionable claims and encounters. For purposes of oversight, a complete record (claim or encounter) should include data for each visit with a provider or caregiver, with the dates when services were provided, the amount of services provided using a clearly specified unit of service (e.g., 15 minutes), and the type of services provided using a standard definition. Such a complete record would allow CMS and states to analyze claims to identify potential fraud and abuse.
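As a minimal sketch of the kind of completeness check this implies, the snippet below flags claim records that lack payment, provider, quantity, date, or procedure-code information. The field names and sample records are hypothetical illustrations, not CMS's actual MSIS layout.

```python
# Hypothetical sketch of a claims-completeness check; the field names
# and sample records are illustrative, not CMS's actual MSIS layout.

REQUIRED_FIELDS = ("payment_amount", "provider_id", "service_quantity",
                   "service_start", "service_end", "procedure_code")

def missing_fields(record):
    """Return the required fields that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

records = [
    {"payment_amount": 120.0, "provider_id": "P123",
     "service_quantity": 8, "service_start": "2012-03-01",
     "service_end": "2012-03-01", "procedure_code": "T1019"},
    # A record with no payment amount and no provider identifier.
    {"payment_amount": None, "provider_id": "",
     "service_quantity": 4, "service_start": "2012-03-01",
     "service_end": "2012-03-01", "procedure_code": "T1019"},
]

incomplete = {i: missing_fields(r) for i, r in enumerate(records)
              if missing_fields(r)}
print(incomplete)  # → {1: ['payment_amount', 'provider_id']}
```

A check along these lines, run across a state's submission, would yield the kinds of missing-element percentages reported above.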
The following examples illustrate inconsistencies in data regarding when services were provided and the types of services that were provided from the 35 states whose data we reviewed. When services were provided: state-reported dates of service were overly broad. In the 35 states, some claims for personal care services had dates of service (i.e., start and end dates) that spanned multiple days, weeks, and in some cases months. For 12 of the 35 states, 95 percent of their claims were billed for a single day of service. However, in other states, a number of claims were billed over longer time periods. For example, for 10 of the states, 5 percent of claims covered a period of at least 1 month, and 9 states submitted claims that covered 100 or more days. When states report dates of service that are imprecise, it is difficult to determine the specific date for which services were provided and to identify whether services were claimed during a period when the beneficiary was not eligible to receive personal care services—for example, when hospitalized for acute care services. Type of services provided: states used hundreds of different procedure codes for personal care services. Procedure codes on submitted claims and encounters were inconsistent in three ways: the number of codes used by states; the use of both national and state-specific codes; and the varying definitions of different codes across states. More than 400 unique procedure codes were used by the 35 states. CMS does not require that states use standard procedure codes for personal care services; instead, states have the discretion to use state-based procedure codes of their own choosing or national procedure codes.
As a result, the procedure codes used for similar services differed from state to state, which limits CMS's ability to use these data to compare and track changes in the use of specific personal care services provided to beneficiaries, because similar procedures cannot easily be identified and compared across states by their procedure codes. In our 2017 report we found that the Medicaid personal care services expenditure data collected were not always accurate or complete, according to our analysis of expenditure data collected by CMS from states for calendar years 2012 through 2015. When submitting expenditure data, CMS requires states to report expenditures for personal care services on specific reporting lines. These reporting lines correspond with the specific types of programs under which states have received authority to cover personal care services, and can affect the federal matching payment amounts states receive when seeking federal reimbursement. For example, a 6 percentage point increase in the federal matching rate is available for services provided through the Community First Choice program. For three other types of HCBS programs, CMS also requires states to report their expenditures for personal care services separately from other types of services provided under each program on what CMS refers to as feeder forms—that is, individual expenditure lines for different types of services that feed into the total HCBS spending amount for each program. We found that not all states were reporting their personal care services expenditures accurately, and, as a result, personal care services expenditures may have been underreported or reported in an incorrect category. We compared personal care services expenditures for all states for calendar years 2012 through 2015 with each state's approved programs during this time period and found that about 17 percent of personal care services expenditure lines were not reported correctly.
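One part of the cross-check just described—catching expenditure lines reported for programs a state was never approved to operate—can be sketched as below. The state and program names are invented for illustration and are not actual CMS reporting categories.

```python
# Hypothetical sketch of cross-checking reported expenditure lines
# against a state's approved personal care services programs. State
# names, program labels, and data are illustrative only.

approved_programs = {
    "State A": {"HCBS Waiver", "Community First Choice"},
    "State B": {"State Plan personal care services"},
}

# (state, reporting line) pairs as they might appear in expenditure data.
reported_lines = [
    ("State A", "Community First Choice"),
    ("State A", "State Plan personal care services"),  # not approved
    ("State B", "State Plan personal care services"),
]

# Flag any line reported under a program the state is not approved for.
errors = [(state, line) for state, line in reported_lines
          if line not in approved_programs[state]]
print(errors)  # → [('State A', 'State Plan personal care services')]
```

Note that this sketch covers only one of the two error types discussed; the other (failing to separately identify personal care services expenditures at all) would require checking for expected lines that are absent.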
As illustrated in figure 4, nearly two-thirds of the reporting errors were a result of states not separately identifying and reporting personal care services expenditures using the correct reporting lines, as required by CMS. Without separate reporting of personal care services expenditures as required, CMS is unable to ensure appropriate federal payments, monitor how spending changes over time across the different program types, or develop an accurate estimate of the magnitude of potential improper payments for personal care services. The other types of errors involved states erroneously reporting expenditures that did not correspond with approved programs. As a result, CMS is not able to efficiently and effectively identify and prevent states from receiving federal matching funds inappropriately, in part because it does not have accurate fee-for-service claims data that track payments by personal care services program type and that are linked with the expenditures reported for purposes of federal reimbursement. These errors demonstrated that CMS was not effectively ensuring that its reporting requirements for personal care services expenditures were met. We concluded that by not ensuring that states accurately report expenditures for personal care services, CMS is unable to accurately identify total expenditures for personal care services, expenditures by program, and changes over time. According to CMS, expenditures that states reported through MBES are subject to a variance analysis, which identifies significant changes in reported expenditures from year to year. However, CMS's variance analysis did not identify any of the reporting errors that we found. CMS officials told us that they would continue to review states' quarterly expenditure reports for significant variances and follow up on such variances. In our 2017 report, we acknowledged certain efforts CMS had taken to improve the data it collects.
However, these efforts had not addressed the data issues we identified that limited the usefulness of the data for oversight. We recommended that CMS take steps to improve the collection of complete and consistent personal care services data and better monitor states' provision of and spending on Medicaid personal care services. Specifically, CMS agreed with recommendations to better ensure states comply with data reporting requirements and to develop plans for analyzing and using the data. The agency neither agreed nor disagreed with recommendations to issue guidance to ensure that key claims and encounter data elements are complete and consistent, or with a recommendation to ensure that claims data can be accurately linked with aggregate expenditure data. In light of our findings of inconsistent and incomplete reporting of claims and encounters, errors in reporting expenditures, and the high risk of improper payments, we believe action in response to these recommendations is needed. In conclusion, Medicaid personal care services are an important benefit for a significant number of Medicaid beneficiaries and account for billions of dollars in spending by the federal government and states. Demand for, and spending on, personal care services continue to grow. However, the services are not without risk. Personal care services are at high risk for improper payments, and beneficiaries may be vulnerable and at risk of unintentional harm and potential neglect and exploitation. Over the years, federal laws have given states a number of different options to provide home- and community-based services. Having various options for providing personal care services gives states flexibility in how they administer their programs and provide services to different groups of beneficiaries.
At the same time, our work has also found a patchwork of federal requirements, resulting in varying levels of beneficiary safeguards and requirements to ensure that billed services are actually provided. As a result, beneficiaries with similar needs could be receiving services in programs with significantly different safeguards in place, depending on the program. Similarly, the level of assurance that billed services are actually provided could vary based on the type of program. Further, our work showed that the data CMS collects for oversight of these programs are not always timely, complete, accurate, and consistent. Without better data, CMS is hindered in effectively performing key management functions related to personal care services, such as ensuring that state claims for enhanced federal matching funds are accurate. CMS has taken steps to improve the data it collects from states and to establish more consistent administration of policies and procedures across the programs under which personal care services are provided. However, we found that additional steps are warranted. Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you might have at this time. If you or your staffs have any questions about this testimony, please contact Katherine M. Iritani at (202) 512-7114. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tim Bushfield, Assistant Director; Anna Bonelli; Christine Davis; Barbara Hansen; Laurie Pachter; Perry Parsons; and Jennifer Whitworth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Medicaid, a joint federal-state health care program, provides long-term services and supports for disabled and aged individuals, increasingly in home and community settings. Federal and state Medicaid spending on home- and community-based services was about $80 billion in 2014. Personal care services are a key component of this care. States can offer personal care services through many different types of programs, and each may be subject to different federal requirements established by statute, regulations, and guidance. The provision of personal care in beneficiaries' homes can pose safety risks, and these services have a high and growing rate of improper payments, including cases where services for which the state was billed were not provided. In recent years, Congress has directed HHS to improve coordination of these programs, which could harmonize requirements (that is, implement a more consistent administration of policies and procedures) and enhance oversight. This statement highlights key issues regarding (1) the federal program requirements to protect beneficiaries' safety and ensure that billed services are provided, and (2) the usefulness of data collected by CMS for oversight. This testimony is based on reports GAO issued in 2016 and 2017. For these reports, GAO assessed CMS data on personal care services provided to beneficiaries and state spending. GAO also reviewed federal statutes, regulations, and guidance, and interviewed CMS officials. In its November 2016 report, GAO found a patchwork of federal requirements related to how states must protect the safety of beneficiaries in their personal care services programs and to how states ensure that billed services are actually provided.
Personal care services help beneficiaries with basic activities of daily living, such as bathing and dressing, in a home- or community-based setting. For two types of programs under which personal care services can be offered, states must describe to the Centers for Medicare & Medicaid Services (CMS) how they will ensure the health and welfare of beneficiaries. Similar requirements were not in place for several other programs GAO examined. In addition, for some but not all personal care services programs that GAO reviewed, states must provide evidence to CMS that the state is paying claims only for services that have actually been provided. These differing federal program requirements result in uneven beneficiary safeguards and levels of assurance regarding states' beneficiary protections and oversight of billed services. GAO recommended that CMS take steps to harmonize and achieve a more consistent application of federal requirements across programs. CMS agreed with GAO's recommendation and sought input on how to do so by publishing a request for information. In its January 2017 report, GAO found limitations in the data that CMS collects to monitor the provision of personal care services and to monitor state spending on services. For example: Data on personal care services provided were often not timely, complete, or consistent. The most recent data available during GAO's review (2016) were for 2012 and included data for only 35 states. Further, 15 percent of claims lacked provider identification numbers and 34 percent lacked information on the quantity of services provided. Data were also inconsistent, as more than 400 different procedure codes were used by states to identify personal care services. Without timely, complete, and consistent data, CMS is unable to effectively oversee state programs and verify who is providing personal care services or the type, amount, and dates of services provided.
Data on states' spending reported on CMS's expenditure reports, the basis for states' receipt of federal matching funds, were not always accurate or complete. From 2012 through 2015, 17 percent of expenditure lines were not reported correctly by states, according to GAO's analysis. Nearly two-thirds of these errors were due to states not separately identifying personal care services expenditures, as required by CMS, from other types of expenditures. Inaccurate and incomplete data limit CMS's ability to, among other oversight functions, ensure federal matching funds are appropriate. GAO made several recommendations to improve the data CMS collects to monitor the provision of and expenditures on personal care services. CMS agreed with some but not all of these recommendations.
Sexual abuse can have negative consequences for children during the time of abuse as well as later in life, according to several recent research reviews. Initial effects reportedly have included fear, anxiety, depression, anger, aggression, and sexually inappropriate behavior in at least some portion of the victim population. Long-lasting consequences reportedly have included depression, self-destructive behavior, anxiety, feelings of isolation and stigma, poor self-esteem, difficulty in trusting others, a tendency toward revictimization, substance abuse, and sexual maladjustment. In addition, researchers have noted that there is widespread belief that there is a “cycle of sexual abuse,” such that sexual victimization as a child may contribute to perpetration of sexual abuse as an adult. Such a pattern is consistent with social learning theories—which posit that children learn those behaviors that are modeled for them—and also with psychodynamic theories—which suggest that abusing others may help victimized individuals to overcome childhood trauma. Critics have argued that empirical support for the cycle of sexual abuse is weak, and that parents are unduly frightened into thinking that little can be done to mitigate the long-term effects of sexual abuse. There remain many unanswered questions about the risk posed by early sexual victimization, as well as about the conditions and experiences that might increase this risk (such as number of victimization experiences, age of the victim at the time of the abuse, and whether the abuse was perpetrated by a family member). There are also questions about factors that may prevent victimized children from becoming adult perpetrators (such as support from siblings and parents or positive relationships with other authority figures). Answers to such questions would be useful in developing both prevention strategies and therapeutic interventions. 
Studying the relationship between early sexual victimization and later perpetration of sexual abuse is methodologically difficult. If researchers take a retrospective approach, and ask adult sex offenders whether they experienced childhood sexual abuse, there are problems of selecting a representative sample of offenders, finding an appropriate comparison group of adults who have not committed sex offenses but are similar to the study group in other respects, minimizing errors that arise when recalling traumatic events from the distant past, and dealing with the possibility that offenders will purposely overreport childhood abuse to gain sympathy or underreport abuse to avoid imputations of guilt. A prospective approach—selecting a sample of children who have been sexually abused and following them into adulthood to see whether they become sexual abusers—overcomes some of the problems of the retrospective approach, but it is a costly and time-consuming solution. In addition, researchers choosing the prospective approach still face the challenge of disentangling the effects of sexual abuse from the effects of other possible problems and stress-related factors in the backgrounds of these children (e.g., poverty, unemployment, parental alcohol abuse, or other inadequate social and family functioning). This requires the selection of appropriate comparison groups of children who have not been sexually abused and children who have faced other forms of maltreatment, as well as the careful measurement of a variety of other explanatory factors. We collected, reviewed, and analyzed information from available published and unpublished research on the cycle of sexual abuse. Identifying the relevant literature involved a multistep process. 
Initially, we identified experts in the sex offense research field by contacting the Department of Justice’s Office of Juvenile Justice and Delinquency Prevention and Office of Victim Assistance, the National Institute of Mental Health’s Violence and Traumatic Stress Branch, the American Psychological Association, and academicians selected because of their expertise in the area. These contacts helped identify experts in the field, who in turn helped identify other experts. We also conducted computerized searches of several on-line databases, including ERIC (the Education Resources Information Center), NCJRS (the National Criminal Justice Reference Service), PsycINFO, Dissertation Abstracts, and the National Clearinghouse on Child Abuse. We identified 40 articles on the cycle of sexual abuse issued between 1965 and 1996. Four of these reviewed the literature in the area; of these, two were published in 1988, one was published in 1990, and one was published in 1991. Of the remaining articles, 23 presented findings from retrospective research studies, which began with a sample of known adult sex offenders of children and sought to determine (by asking the offenders) whether they were sexually abused during childhood. Another four presented findings from two prospective research studies, which began with samples of sexually victimized children and tracked them into adulthood to determine how many became sex offenders. Of the original 40 articles, we excluded 5 because they presented findings only, or primarily, on adolescent sex offenders against children, and an additional 4 because we were unable to obtain them. For the studies in our review, we recorded the quantitative results, summarized the methodologies used, and summarized the authors’ conclusions about the cycle of sexual abuse. Each study was reviewed by two social scientists with specialized doctoral training in evaluation research methodology. 
Conclusions in this report are based on our assessment of the evidence presented in these studies. We sent the list of research articles to two experts, both of whom have done extensive research in the field, to confirm the comprehensiveness of our list of articles. In addition, as a final check, we conducted a second search of computerized on-line databases in March 1996 to ensure that no new research articles or reviews had been published since our original search in October 1995. We sent a draft copy of our report for comment to the two experts previously consulted, as well as to one additional expert, to ensure that we had presented the information about the research studies accurately. Their technical comments were incorporated where appropriate. We did not send a draft to any agency or organization because we did not obtain information from such organizations for use in this study. We did our work between October 1995 and August 1996 in accordance with generally accepted government auditing standards. There was no consensus among the studies we reviewed that being sexually abused as a child led directly to the victim's becoming an adult sexual abuser of children. However, some studies did conclude that it might increase the risk that victims would commit sexual abuse later. A majority of the retrospective studies noted that most sex offenders had not been sexually abused as children, and the two prospective studies showed that the majority of victims of sexual abuse during childhood did not become sex offenders as adults. The four review articles we obtained, which collectively covered roughly two-thirds of the 25 studies we reviewed, concluded that the evidence from these studies was insufficient to establish that being sexually abused as a child is either a necessary or a sufficient condition for the victim's becoming a sexual abuser as an adult. We reviewed 23 retrospective studies. Appendix I provides additional information on these studies.
All but one of the retrospective studies focused on adult male sex offenders, and in most studies the offenders sampled were imprisoned or in some type of treatment program. However, these studies varied considerably in the types of child sexual abusers studied, in whether control or comparison groups were used, and, if so, in the types of individuals in these groups. The retrospective studies also varied considerably in their findings and conclusions. The percentage of adult sex offenders against children identified as having been sexually abused as children themselves ranged from zero to 79 percent. This variation partially reflects differences across studies in how childhood sexual abuse was defined, as well as other differences in study methodology. This variation may also reflect differences in the types of child sex offenders studied. For example, both Hanson and Slater (1988) and Garland and Dougher (1988) concluded from their reviews of retrospective studies that offenders who selected male children as victims were more likely to have been sexually abused themselves than were offenders against female children. A few of the studies found that sex offenders of children were more likely to have been sexually abused as children than were members of control groups composed of noninstitutionalized nonoffenders. However, many studies found that, when compared with other types of sex offenders (e.g., rapists or exhibitionists) and other types of nonsexual offenders (i.e., men incarcerated for nonsexual crimes), adult sex offenders of children were not necessarily more likely to have been sexually abused as children. According to several researchers, the relationship between childhood sexual victimization and adult perpetration of sexual offenses against children is complex and requires measurement and analysis of a host of factors.
For example, it has been postulated that adult sexual offending is not simply a result of the experience of childhood sexual victimization, but also of other factors such as age at onset of the abuse, nature of the abuse, stability of the caregiver, and/or physical abuse. Studies that collect data on such additional factors may add to our understanding of what types of sexual abuse, perpetrated under what conditions against what types of child victims, are associated with what types of adult sexual offending against what types of victims under what types of conditions. However, while such retrospective studies can help explore factors possibly related to adult sexual offending, they cannot establish the importance of these factors in predicting adult sexual offending. The reason for this is discussed in the following section. The retrospective studies we reviewed had several shortcomings that precluded our drawing any firm conclusions about whether there is a cycle of sexual abuse. First, the studies focused on known sex offenders of children (i.e., offenders who have been detected, arrested, or convicted, or who had been referred or had presented themselves for treatment), and these offenders may not be typical or representative of all sex offenders against children. Second, self-reports of childhood sexual abuse obtained from known sex offenders are of questionable validity. Known offenders may be motivated to overreport histories of abuse to gain sympathy or to excuse their own offenses. Third, where comparison or control groups were used, attempts to match group members to sex offenders of children on factors possibly related to being sexually abused or abusive were typically limited; few of the studies attempted to control for such factors statistically. Finally, one of the major shortcomings of these retrospective studies is that they cannot reveal how likely it is that a person who has been sexually abused as a child will become a sexual abuser in adulthood. 
For example, even if 100 percent of sexual abusers of children were sexually abused as children, this would not necessarily mean that sexual abuse causes abused children to become abusers themselves. It may be that only a small percentage of sexually abused children become sex offenders against children. Determining how likely victims of childhood sexual abuse are to become adult sex offenders requires that a sample of sexually abused children be followed forward in time, rather than the histories of sex offenders be traced backward. Our review of the literature identified two research studies (described in four articles) that have used a prospective approach in examining the cycle of sexual abuse. One of these studies is part of a larger study of the cycle of violence. Widom is the primary researcher in the larger study, which is still ongoing. It involves a cohort of 908 substantiated cases of child abuse (physical and/or sexual) or neglect processed through the courts between 1967 and 1971. These abuse/neglect cases were restricted to children who were 11 years of age or younger at the time of the abuse or neglect incident. They included 153 sexually abused children, 160 physically abused children, and 697 neglected children. This prospective study also includes a control group of 667 individuals who had no record of abuse or neglect and who were either born in the same hospitals or attended the same elementary schools as the abused children. The control and study group members were matched on sex, age, race, and approximate family socioeconomic status. Local, state, and federal official arrest records containing information recorded up to June 1994 were used to determine how many of the study and control group members were arrested for sex offenses. Table 1 shows results pertaining to sex offenses from the most recent analyses based on this larger study. The study did not distinguish whether the sex offense was perpetrated against a child or an adult. 
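The base-rate logic in this paragraph can be made concrete with a small calculation. The numbers below are entirely hypothetical, chosen only to illustrate how a retrospective finding (most offenders were abused) can coexist with a low prospective rate (few abused children become offenders):

```python
# Hypothetical population; all counts are illustrative, not drawn from the studies reviewed.
abused_children = 10_000   # children sexually abused in a birth cohort
offenders_total = 200      # adult sex offenders against children in that cohort
offenders_abused = 200     # suppose 100 percent of offenders had been abused as children

# Retrospective view: share of known offenders with an abuse history
p_abused_given_offender = offenders_abused / offenders_total   # 1.0

# Prospective view: share of abused children who become offenders
p_offender_given_abused = offenders_abused / abused_children   # 0.02

print(f"P(abused | offender) = {p_abused_given_offender:.0%}")
print(f"P(offender | abused) = {p_offender_given_abused:.0%}")
```

Even with the retrospective rate at its maximum, the forward-looking probability here is only 2 percent, which is why a prospective design is needed to estimate it.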
Compared to the control group, a higher percentage of those who had been sexually abused, physically abused, or neglected as children were arrested as adults for any sex crime, for prostitution, and (among males) for rape or sodomy. To determine how different the study groups were from the control group, Widom statistically controlled for such differences between the groups as age, race, and sex; calculated odds ratios; and performed statistical tests. The results indicated that the differences between the sexually abused group and the control group in the odds of arrest for any sex crime or for rape or sodomy separately were not statistically significant. Sexually abused children were significantly more likely to have been arrested for prostitution, however. Twenty-three to 27 years later, sexually abused children were nearly four times more likely to have been arrested for prostitution. On the other hand, members of the childhood neglect study group were significantly more likely than members of the control group to have been arrested for any sex crime or for prostitution. Because it could allow researchers to discern the likelihood of victims becoming abusers, the prospective approach is methodologically superior to the retrospective approach. Widom’s study, however, has several limitations. First, published work from the study has so far relied solely on official arrest data, which may fail to identify some offenders (those who avoid detection or arrest). Second, the study groups of victimized children were identified by using records of substantiated cases of abuse or neglect that were processed through the state courts. Such cases may represent only the most severe instances of abuse and may not be generalizable to all children who have been abused or neglected. Finally, the number of sexually abused males in the abused/neglected sample was small (a total of 24). 
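The odds-ratio comparison Widom used ("nearly four times more likely") can be sketched as follows. The group totals match those reported in the text (153 sexually abused children, 667 controls), but the arrest counts are invented for illustration and are not the study's actual data:

```python
# Hypothetical 2x2 table: rows = sexually abused vs. control group;
# columns = arrested for prostitution (yes / no). Arrest counts are invented.
abused_arrested, abused_not = 12, 141      # 12 + 141 = 153 abused group members
control_arrested, control_not = 14, 653    # 14 + 653 = 667 control group members

# Odds of arrest within each group
odds_abused = abused_arrested / abused_not
odds_control = control_arrested / control_not

# Odds ratio: how much higher the odds of arrest are for the abused group
odds_ratio = odds_abused / odds_control
print(f"odds ratio = {odds_ratio:.1f}")
```

An odds ratio near 4 with these invented counts mirrors the magnitude of the association the study reports for prostitution arrests.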
Statistical comparisons based on small numbers of cases should be interpreted with caution, since small sample sizes may not yield reliable estimates. We located one other study that used a prospective design and followed sexually victimized children into early adulthood. This study sampled 147 boys under the age of 14 who were seen in the emergency room of an urban hospital because of sexual abuse between 1971 and 1975. The researchers also collected data on a comparison sample of boys of the same race and roughly the same age who were seen in the same emergency room at roughly the same time for reasons other than sexual abuse. In the period 1992 to 1994, official juvenile and adult arrest records for the entire victim and comparison sample were collected, and the researchers attempted to locate and interview as many of the men as possible. Fifty of the 147 boys in the victim sample, and 56 of the 147 boys in the comparison sample, were interviewed. They were asked to self-report instances of sex-offending, and were also asked a number of other questions about their family of origin, sexual history, history of sexual victimization, psychological functioning, drug and alcohol use, and criminal behavior. As shown in table 2, the study found little difference between the victim and comparison samples in the percentages that were arrested for, or that self-reported, sex offenses. According to the researchers, one explanation for this finding is that the victim and comparison samples are not as different as originally intended with respect to their having been victims of child sexual abuse. For instance, in the comparison group, 40 percent of the 56 men interviewed reported that they had themselves been sexually abused. Furthermore, 55 percent of the men in the victim sample did not recall, or at least did not report to interviewers, that they had been sexually abused. 
When the researchers reanalyzed the data and compared all victims (from both the victim sample and the comparison sample) with the remaining nonvictimized members of the comparison group, they did not find a significant difference between the two groups in the likelihood of becoming a sex offender. These findings must also be interpreted with caution, however, because no-difference findings are sometimes attributable to comparing small samples rather than to a real absence of difference between groups. The generalizability of these findings may be limited since the sample of sexually abused boys (and the matched comparison group) is neither a random sample nor a sample that is representative of the general population of children at risk of such abuse. Over 80 percent of the boys sampled were African-American, and a disproportionate number of the men who were interviewed were from poor families and had criminal records. About one-third of the interviewed men who were sexually abused as boys, and about one-fifth of all of the men interviewed, were incarcerated at the time of interview. The Williams et al. study is instructive in that it points to a number of difficulties involved in conducting prospective studies of the relationship between childhood victimization and adult offending. These difficulties include (1) the need to determine whether members of comparison groups were victims of sexual abuse, and (2) the need to employ more than a single outcome measure of offending. Of 15 men who self-reported any sex offense, only 5 had an arrest record for a sex offense; and of 14 men who had been arrested for a sex offense, only 5 self-reported a sex-offending behavior. A number of studies have been done on the cycle of sexual abuse, many of which were reviewed in this report. Most of the studies were retrospective in design; that is, they began with a sample of known sex offenders of children and sought to determine whether they were sexually abused during childhood. 
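The caution about no-difference findings in small samples can be illustrated with Fisher's exact test, a standard test for small 2x2 tables. The counts below are hypothetical, chosen only to mirror the interview sample sizes (50 victims, 56 comparison men):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):
        # Hypergeometric probability of a table whose top-left cell equals x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical: 8 of 50 victims vs. 5 of 56 comparison men self-report a sex offense
p = fisher_exact_two_sided(8, 42, 5, 51)
print(f"p = {p:.2f}")
```

Even though the hypothetical victim rate (16 percent) is nearly double the comparison rate (9 percent), the p-value is well above 0.05 at these sample sizes, showing how a real difference can fail to reach significance in small groups.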
The chief limitation of the retrospective studies is that studying a known group of sexual offenders cannot provide any direct information about the extent to which children who are sexually abused become sexual offenders as adults. The two studies we reviewed that were prospective in design attempted to overcome this limitation by identifying samples of sexually victimized children and tracking them into adulthood to determine how many became sex offenders. These studies also had limitations, which made it difficult to reach any definitive conclusions about the cycle of sexual abuse. However, in spite of their limitations, overall, the retrospective studies, prospective studies, and research reviews did indicate that the experience of childhood sexual victimization is quite likely neither a necessary nor a sufficient cause of adult sexual offending. Further research would be necessary to determine what kinds of experiences magnify the likelihood that sexually victimized children will become adult sexual offenders against children and, alternatively, what kinds of experiences help prevent victimized children from becoming adult sexual offenders against children. We are sending copies of this report to the Ranking Minority Member of the House Subcommittee on Crime and the Chairman and Ranking Minority Member of the Senate Committee on the Judiciary. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix II. Please call me at (202) 512-8777 if you have any questions about this report.

Pursuant to a congressional request, GAO reviewed research studies regarding the cycle of sexual abuse, focusing on the likelihood that individuals who are victims of sexual abuse as children will become sexual abusers of children in adulthood.
GAO found that: (1) there was no consensus among the 23 retrospective and 2 prospective studies reviewed that childhood sexual abuse led directly to the victim becoming an adult sexual abuser; (2) the retrospective studies, which sought to determine whether a sample of known sex offenders had been sexually abused as children, differed considerably in the types of offenders studied, use of control or comparison groups, and definition and reporting of childhood sexual abuse; (3) although some of the retrospective studies concluded that childhood sexual abuse may increase the risk that victims will commit sexual abuse later, most of the studies noted that the majority of sex offenders had not been sexually abused as children; (4) the prospective studies, which tracked sexually abused children into adulthood to determine how many became sex offenders, studied sample populations that may not be representative of the entire population of childhood sexual abuse victims; and (5) the prospective studies found that victims of childhood sexual abuse were not more likely than nonvictims to be arrested for sex offenses.
Traffic congestion is geographically concentrated in major metropolitan areas, where close to 80 percent of America’s growth and economic development occurs. Traffic congestion has grown worse in many ways in the past 30 years—trips take longer, congestion affects more of the day and affects more personal trips and freight shipments, and trip travel times are more unreliable. According to AASHTO, travel on the National Highway System has increased fivefold over the past 60 years, from 600 billion miles driven per year to almost 3 trillion in 2009. Annual travel is expected to climb to nearly 4.5 trillion miles by 2050, even with aggressive strategies to cut the rate of growth to only 1 percent per year. The main types of strategies that state and local governments can use to address traffic congestion are improved traffic operations, public transportation, increased capacity, and demand management. ITS generally fits within traffic operations as a way to better manage existing capacity. According to FHWA, traffic congestion is caused by various factors (see fig. 1). Bottlenecks, which reflect inadequate capacity, cause about 40 percent of urban road traffic congestion. The remaining 60 percent of congestion results from other causes, which, according to FHWA, can be addressed by management and operations strategies. ITS encompasses a broad range of wireless and wire line communications-based information and electronic technologies, including technologies for collecting, processing, disseminating, or acting on information in real time to improve the operation and safety of the transportation system. When integrated into the transportation system’s infrastructure and in vehicles themselves, these technologies can relieve congestion, improve safety, and enhance productivity. Using ITS strategies may require officials to make capital improvements by installing equipment, such as traffic control systems and incident management systems.
In highly congested metropolitan areas, ITS infrastructure tends to be complex because it typically consists of a set of systems deployed by multiple agencies. For example, the state government typically manages and operates freeway facilities, and city or county governments manage and operate smaller arterial roadways. In a given metropolitan area, the state transportation department, city traffic department, transit agency, and toll authority may each deploy different ITS technologies that address their transportation needs. Metropolitan planning organizations serve a key role in planning, as they have responsibility for the regional transportation planning processes in urbanized areas. Congress established the ITS program in 1991 in the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA), and DOT created the ITS Joint Program Office in 1994. Since its creation, the ITS Joint Program Office has overseen allocation and expenditure of more than $3 billion for deploying ITS applications and researching new technologies. Under ISTEA and continuing under the Transportation Equity Act for the 21st Century (TEA-21), enacted in 1998, Congress authorized funds specifically for state and local governments to deploy ITS technologies. The Safe, Accountable, Flexible, and Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), enacted in 2005, did not directly reauthorize the ITS deployment program. Although DOT no longer provides dedicated funding for ITS deployment, states can use their federal aid highway program funds for improving traffic operations, including deploying ITS. In addition, state and local governments may use their own funds to finance ITS projects. State funding mainly comes from highway user charges, while local funding primarily comes from general funding allocations, property taxes, sales taxes, and various other taxes and fees. 
Although DOT does not track state or local spending on ITS, a market research company has estimated that states spent a combined $1.4 billion on ITS in 2010. The ITS Joint Program Office, within RITA, leads research of new ITS technologies and also carries out several activities to promote the use of existing technologies. In this capacity, the office works with the other modal administrations within DOT, including FHWA, the Federal Transit Administration, the Federal Motor Carrier Safety Administration, the Federal Railroad Administration, the National Highway Traffic Safety Administration, and the Maritime Administration. The Joint Program Office was previously housed in FHWA and moved to RITA in early 2006. FHWA’s Office of Operations carries out activities aimed at improving the operations of the surface transportation system, including traffic management, and, as part of these efforts, encourages the use of ITS by state and local governments. State and local governments currently use ITS technologies in a variety of ways to monitor traffic conditions, control traffic flow, and inform travelers. While numerous types of ITS technologies are available for these purposes, their deployment is uneven across the country. We identified several emerging uses of ITS that have significant potential to reduce traffic congestion. These include approaches that use integrated data to manage traffic and inform travelers and use ITS to proactively manage traffic. State and local governments use ITS technologies to monitor traffic conditions, control traffic flow, and inform travelers about traffic conditions so they can decide whether to use alternative, less congested routes (see fig. 2). Transportation agencies use ITS technologies, such as closed circuit cameras and sensors, to monitor traffic conditions in real time. 
The availability of real-time information means that agency staff can more rapidly identify and respond to events that impede traffic flow, and develop accurate traveler information. For example, cameras are an important component of incident management. Incident management is a planned and coordinated process to detect, respond to, and clear traffic incidents that can cause traffic jams. Operators can use information from cameras to verify traffic conditions detected through sensors, coordinate response to incidents, and monitor the recovery from the incident. According to DOT’s 2010 ITS deployment survey, the percentage of freeway miles covered by cameras increased from approximately 15 percent in 2000 to 45 percent in 2010. The 2010 deployment survey found that 83 percent of freeway management agencies reported a major benefit from cameras—higher than for any other technology. Meanwhile, the level of deployment of cameras on arterials has remained relatively flat. For example, in the 2000 deployment survey, 17 percent of agencies reported deploying cameras on arterials, compared with 21 percent of agencies in 2010. DOT speculated that this may be due to funding limitations at local agencies. Agencies also use several types of sensors to collect real-time traffic data. Loop detectors use a fixed roadway sensor to measure the number and estimate the speed of passing vehicles. Radar detectors use microwave radar and are mounted on overhead bridges or poles and transmit signals that are reflected off vehicles back to the sensor. The reflected energy is analyzed to produce traffic flow data, such as volume and speed. Vehicle probes use roaming vehicles and portable devices, such as cell phones and Global Positioning System devices, to collect data on travel times. The share of freeway miles covered by real-time data collection technologies also grew over the decade, reaching 55 percent in 2010.
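The volume and speed figures that loop detectors produce come from simple arithmetic over sensor timestamps and counts. A minimal sketch, assuming a dual-loop "speed trap" layout with an invented spacing; all values are hypothetical:

```python
# Hypothetical dual-loop detector: two sensors a known distance apart in the pavement.
LOOP_SPACING_M = 6.0  # assumed distance between upstream and downstream loops

def vehicle_speed_kmh(t_upstream_s, t_downstream_s):
    """Estimate a vehicle's speed from the time it takes to cross both loops."""
    dt = t_downstream_s - t_upstream_s
    return (LOOP_SPACING_M / dt) * 3.6  # m/s converted to km/h

def flow_vehicles_per_hour(count, interval_s):
    """Volume: vehicles counted over an interval, scaled to an hourly rate."""
    return count * 3600 / interval_s

print(round(vehicle_speed_kmh(10.00, 10.24), 1))  # 6 m in 0.24 s -> 90.0 km/h
print(flow_vehicles_per_hour(30, 60))             # 30 vehicles/min -> 1800.0 veh/h
```

Real detector firmware also smooths and validates these readings, but the underlying speed and volume calculations follow this pattern.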
The use of these technologies has also grown on arterial roadways, with the percentage of signalized intersections covered by electronic data collection technologies growing from approximately 20 percent in 2000 to 48 percent in 2010. In addition, private companies are expanding the use of vehicle probes that collect real-time data on travel time and speed, allowing for greater geographic coverage. Partnering with private companies for vehicle probe data expands the data available to state DOTs. According to the 2010 deployment survey, 11 state DOTs reported using vehicle probe data collected by a private sector company. Many technologies can be used to dynamically manage freeway capacity and traffic flow using real-time information. Approximately one-third of the largest U.S. cities deploy traffic control technologies on freeways. Specifically, 35 of the 108 largest metropolitan areas in the United States have deployed one or more of the following freeway technology capabilities:

Ramp meters control the flow of vehicles entering the freeway. According to DOT’s 2010 deployment survey, ramp meters are deployed in 27 of the 108 largest metropolitan areas in the country and manage access to 13 percent of freeway miles, about the same level as in 2006.

Congestion (or road) pricing controls traffic flow by assessing tolls that vary with the level of congestion and the time of day. All U.S. congestion pricing projects in operation are High Occupancy Toll lanes, which charge solo drivers a toll to use carpool lanes, or peak-period pricing projects, which charge a lower toll on already tolled roads, bridges, and tunnels during off-peak periods. The deployment of congestion pricing relies on electronic tolling ITS technology. Other ITS technologies used to support congestion pricing include sensors that detect traffic conditions and dynamic message signs that announce toll rates.
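A toll that varies with the level of congestion, as described above, might be sketched as follows. The base toll, cap, and free-flow threshold are invented for illustration and are not any agency's actual tariff:

```python
# Hypothetical variable-toll schedule for a High Occupancy Toll lane.
# All rates and thresholds are illustrative assumptions.
BASE_TOLL = 0.50   # dollars charged under free-flow conditions
MAX_TOLL = 8.00    # cap displayed on dynamic message signs

def dynamic_toll(measured_speed_mph, free_flow_mph=65.0):
    """Raise the toll as the measured lane speed falls below free flow."""
    congestion = max(0.0, 1.0 - measured_speed_mph / free_flow_mph)
    toll = BASE_TOLL + congestion * (MAX_TOLL - BASE_TOLL)
    return round(min(toll, MAX_TOLL), 2)

print(dynamic_toll(65))  # free flow -> base toll
print(dynamic_toll(30))  # heavy congestion -> substantially higher toll
```

Deployed systems typically adjust tolls on a fixed interval (for example, every few minutes) from sensor data, then post the new rate on dynamic message signs; the sketch shows only the rate calculation.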
In 2012, GAO found that congestion pricing projects were open to traffic in 14 major metropolitan areas. Reversible flow lanes and variable speed limits can also be used to control freeway traffic and address congestion. These strategies can incorporate various forms of ITS technologies, including retractable access gates and dynamic message signs. According to the 2010 deployment survey, 11 metropolitan areas use reversible flow lanes or variable speed limits on freeways. Transportation agencies can use ITS technologies to control arterial traffic through traffic signals. Types of advanced traffic signal systems include the following:

Operating signals under computerized control: This capability allows operators to remotely adjust the signals from the traffic management center to respond to current traffic conditions and allows for enhanced control over signals in response to traffic events. According to the 2010 deployment survey, 50 percent of signalized intersections were under centralized computer control—essentially equal to the proportion in 2000.

Adaptive signal control technology: These signals can be automated to adjust signal timings in real time based on current traffic conditions, demand, and system capacity, allowing faster responses to traffic conditions caused by special events or traffic incidents. For example, Los Angeles has developed one of the first fully operating adaptive signal control systems in North America. Despite the benefits of adaptive signals, according to DOT, only 3 percent of traffic signals in the country’s largest metropolitan areas are controlled by adaptive signal control. According to DOT, agencies have not deployed adaptive signals because of the costs of deploying, operating, and maintaining them, as well as uncertainty about their benefits.
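One simple way an adaptive signal system can respond to real-time demand is to split a cycle's available green time in proportion to measured approach volumes. This is a hedged sketch of that idea, not any deployed system's algorithm; the cycle length, minimum greens, and detector counts are all invented:

```python
# Hypothetical proportional green-split for a two-phase intersection.
# Cycle length, minimum greens, lost time, and counts are illustrative assumptions.
CYCLE_S = 90       # total cycle length in seconds
MIN_GREEN_S = 10   # minimum green guaranteed to each phase
LOST_TIME_S = 10   # clearance/yellow time not available as green

def green_split(counts):
    """Allocate the discretionary green time in proportion to per-phase demand."""
    available = CYCLE_S - LOST_TIME_S - MIN_GREEN_S * len(counts)
    total = sum(counts) or 1  # avoid division by zero when no vehicles are detected
    return [MIN_GREEN_S + available * c / total for c in counts]

greens = green_split([60, 20])  # main street carries 3x the side-street demand
print([round(g, 1) for g in greens])
```

Production adaptive systems add queue estimation, coordination between intersections, and safety constraints, but the core response to shifting demand resembles this proportional reallocation.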
Transportation agencies communicate information gathered from traffic monitoring to the traveling public in various ways, including via dynamic message signs, television, websites, e-mail, telephone, and devices used in vehicles such as cell phones. This information—including information about travel times and traffic incidents—allows users to make informed decisions regarding trip departures, routes, and modes of travel. Dynamic message signs are popular for communicating traffic information to travelers. According to DOT’s 2010 deployment survey, almost 90 percent of freeway agencies, and approximately 20 percent of arterial agencies, reported using dynamic message signs to disseminate traveler information. The number of dynamic message signs deployed on freeways increased from fewer than 2,000 signs in the year 2000 to over 4,000 in 2010, greatly expanding agencies’ capabilities to communicate directly with freeway travelers. Arterial agencies also increasingly adopted dynamic message signs, nearly tripling from 10 percent of responding agencies in 2000 to 26 percent in 2010. The 511 Traveler Information Services are another method of informing travelers. DOT initiated the development of these services and seeks to have states deploy them nationwide. These 511 services provide information via the telephone (using an interactive voice response automated system) and the Internet. State DOTs generally run these services and they operate independently of one another. Currently, 14 states lack 511 service coverage or provide service for only a portion of the state. Additionally, these services vary in the ways they provide information (phone or Internet), the types of information they provide (travel times, roadway weather conditions, construction), and areas they cover (statewide or citywide). To fulfill requirements in SAFETEA-LU, FHWA issued a Final Rule in November 2010 to establish the Real-Time System Management Information Program. 
The rule contains minimum requirements for states to make information on traffic and travel conditions available through real-time information programs and to share this information. In 2009, 17 of the 19 experts we interviewed about the need for a nationwide real-time traffic information system said such a nationwide system should be developed. Some of these experts noted that state and local transportation agencies generally develop and use these systems within their own jurisdictions, leading to gaps in coverage and inconsistencies in the quality and types of data collected. Because of these gaps, travelers using 511 systems have to contact different systems while they are traveling and may receive different types of information. In general, the level of ITS deployment varies by state and locality. For example, the deployment of ITS technologies varies greatly across the four metropolitan areas we visited (see table 1). ITS is also used more on freeways than on arterial roads. For example, in response to DOT’s 2010 deployment survey, agencies in 21 metropolitan areas reported deploying real-time traffic data collection technologies such as loop detectors on arterial roadways, compared with agencies in 71 metropolitan areas that reported deploying the same types of technologies on freeways. Several experts we interviewed described the deployment of ITS nationwide as “spotty” or having uneven geographical coverage. DOT officials told us that the pace of ITS adoption by state and local governments has been slow and that upgrades to newer types of technologies have been difficult. In the next section we discuss some of the common challenges state and local governments face in deploying ITS, such as funding constraints.
We identified four emerging uses of ITS technologies that have the greatest potential to reduce traffic congestion, based on views of experts we interviewed (see table 2). These uses reflect two broad themes: (1) using integrated data to manage traffic and inform travelers, and (2) proactively managing traffic. State and local governments face various challenges in deploying and effectively using ITS technologies to manage traffic congestion. As mentioned previously, ITS in metropolitan areas tends to be complex and is deployed by multiple agencies, which requires planning and coordination across agencies. Effectively using ITS is dependent upon agencies having the staff and funding resources needed to maintain and operate the technologies. We identified four key challenges agencies face in using ITS: strategic planning, funding deployment and maintenance, having staff with the knowledge needed to use and maintain ITS, and coordinating ITS approaches. Planning for ITS is a key component of strategically using ITS to address transportation issues and reduce congestion. Transportation planning for metropolitan areas has traditionally focused on building and maintaining basic infrastructure to ensure adequate roadway capacity. ITS, in contrast, focuses on managing already-existing capacity to use it more effectively. Strategically using ITS requires agencies to shift focus from planning construction and maintenance of roadways to planning the operations of the surface transportation system, a shift that, according to DOT, some states and local transportation agencies have not yet fully made. A RITA official told us that planning is a major challenge that affects agencies’ ability to make effective use of ITS. The federal ITS program, as mentioned previously, initially included a DOT program that provided grants to transportation agencies specifically to deploy ITS.
As a result, many agencies have deployed ITS based on the availability of funding rather than systematic planning, according to two stakeholders, a national transportation organization representative, a DOT official, and four transportation agencies we interviewed. According to FHWA officials, ITS deployment has not always been clearly connected to a transportation problem or need, or well integrated with other transportation strategies and programs. If state and local governments do not consider the range of available ITS options in developing their congestion management strategies, they may miss opportunities to better manage traffic and make the best use of scarce funds to address congestion. Most experts we spoke to believed that limitations of planning processes, as well as the limited availability of information to support sound decision making, were challenges faced by state and local governments in using ITS. Furthermore, six experts, two stakeholders, and officials from five transportation agencies we contacted noted that there is a need for more planning and analysis information, such as cost-benefit information and performance measures. Some of these officials noted that it is currently difficult to calculate and measure the benefits of ITS. For example, in its 2010 deployment survey, DOT found that 25 percent of agencies responsible for managing arterial roadways reported that they had not deployed adaptive traffic signal control technology because of uncertainty about benefits. Lack of quantifiable information about benefits can put ITS projects at a disadvantage compared with other types of transportation projects, such as road improvements or bridge replacements, which have more easily quantified benefits. While some studies show that various types of ITS technologies can be cost-effective, conducting such studies can be challenging.
FHWA has emphasized the importance of incorporating transportation operations (including ITS) into transportation planning, along with related objectives and performance measures. Despite FHWA’s promotion of the use of such an approach, many metropolitan planning organizations do not fully consider operations in the planning process. A recent FHWA assessment found that metropolitan planning organizations increasingly address traffic operations (including ITS) in their plans, but only 36 percent include specific, measurable objectives related to operations that meet DOT’s recommended criteria. Despite challenges, DOT reports that some regions have effectively incorporated ITS into their planning efforts, including Hampton Roads, Virginia. The Hampton Roads Transportation Planning Organization, the metropolitan planning organization for the area, scores ITS projects for their capacity to support planning objectives and has been able to acquire federal funding for several ITS plans and projects through this process. These include a centralized traveler information system and signal system upgrades. Funding constraints pose a significant challenge to transportation agencies in their efforts to deploy ITS technologies because of competing priorities and an overall constrained funding situation. ITS projects must compete for funding with other surface transportation needs, including construction and maintenance of roads, which often take priority, according to officials from transportation and stakeholder agencies we interviewed. As we reported in 2005, transportation officials often view adding a new lane to a highway more favorably than ITS when deciding how to spend their limited transportation funds. DOT has noted that funding constraints might explain why the rate of adoption of arterial management technologies over the past decade has been flat. 
In addition, the 2010 deployment survey found that 55 percent of agencies responsible for managing freeways, compared with 36 percent of agencies responsible for managing arterial roadways, plan to invest in new ITS in 2010 to 2013. Transportation agencies face difficult decisions regarding the allocation of their transportation funding, and many have faced severe revenue declines in recent years, restricting the availability of funds for transportation improvements. For example, a county transportation official we interviewed reported that the funds for deploying and maintaining ITS have been reduced annually over the last 3 to 4 years because of reduced county revenues, which has led to the county suspending almost all deployment of ITS field devices. Transportation officials must identify priorities and make trade-offs between funding projects that preserve or add new infrastructure and those that improve operations, such as ITS projects. Preserving infrastructure is a high priority for state and regional decision makers. Traffic growth has outpaced highway construction, particularly in major metropolitan areas, which puts enormous pressure on roads. According to FHWA’s most recent projections (using 2006 data), less than half of the vehicle miles traveled in urban areas are on good-quality pavements and about one-third of urban bridges are in deficient condition. As stakeholders and officials from four transportation agencies we spoke with noted, ITS projects have difficulty competing for funding with other needs, such as road and bridge maintenance projects. For example, one city transportation official told us the city must devote most of its resources to highway and bridge projects rather than new technology, and in some cases the city has resorted to demolishing unsafe bridges because of lack of funds rather than repairing or replacing them.
These funding issues exist within the context of an overall large funding gap for maintaining and improving the nation's surface transportation infrastructure. The Highway Trust Fund has been undergoing a solvency crisis in recent years. Its expenditures have exceeded its revenues, which derive mainly from motor fuel taxes. According to 2006 National Surface Transportation Infrastructure Financing Commission estimates, combined revenues at all levels of government, under current policies, will meet only 58 percent of the capital investment requirements for U.S. highway maintenance and only 41 percent of the costs for highway improvement for the period 2008-2035. Agencies that are able to deploy ITS often face additional challenges in funding the operations and maintenance of these technologies. Eight experts we interviewed noted that funding operations and maintenance of ITS is more challenging than funding the initial deployment. Two experts we interviewed noted that ITS is often installed and then not fully utilized or maintained. Additionally, in response to FHWA's 2009 proposed requirement for states to make travel information available as part of a Real-Time System Management Information Program, several states identified operation and maintenance costs as a barrier to the implementation of such a program. Ongoing costs of operations for some systems may exceed those of deployment. For example, in 2003, investments for signal control hardware had initial costs of $21,000 to $30,000 and yearly maintenance costs of $9,000 to $10,500 over a 5-year time frame. FHWA officials told us that it is often difficult for state and local agencies to sustain the operations of ITS technologies because of funding constraints and the higher priority agencies place on basic infrastructure.
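Taken at face value, the 2003 signal control hardware figures imply that 5 years of maintenance can approach or exceed the initial purchase price. The following is a minimal sketch of that arithmetic, assuming the cited yearly maintenance figure recurs in each of the 5 years (the function name and assumption are ours, not FHWA's):

```python
# Rough 5-year cost-of-ownership check for the 2003 signal control hardware
# figures cited above (an illustrative calculation, not FHWA's own analysis).
def five_year_cost(initial_cost, yearly_maintenance, years=5):
    """Total cost of ownership: deployment plus recurring maintenance."""
    return initial_cost + yearly_maintenance * years

low_total = five_year_cost(21_000, 9_000)     # low end of both cited ranges
high_total = five_year_cost(30_000, 10_500)   # high end of both cited ranges
print(low_total, high_total)  # 66000 82500
```

Under this assumption, maintenance accounts for well over half of the total, which is consistent with agencies reporting that sustaining operations is harder to fund than deployment.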
For example, a county transportation agency official we interviewed reported that the agency’s operating budget has been reduced by about 30 percent over the past 2 years, which has led to reduced maintenance of ITS devices. Officials from one local agency told us that one of its big challenges is identifying operations and maintenance funding to support newer systems. Advanced traffic signal systems are one area in which operations and maintenance funding challenges can limit effectiveness and impede greater expansion. According to FHWA, over 50 deployments of these signal systems have occurred over the last two decades. However, over half of the deployments were deactivated because of insufficient resources or lack of maintenance or operations capabilities. Additionally, a 2010 study on adaptive traffic control systems found that funding—including the high cost of deployments and the lack of funding for operations—was the main factor in why these systems are not more widely deployed. Transportation officials in one metropolitan area we visited told us that it was common for smaller cities to fund the deployment of advanced traffic signals but be unable to fund, maintain, and repair them after deployment, causing signal failures that can impair coordination with neighboring cities and operation of the larger network. The lack of funding availability for operations and maintenance is compounded by other challenges such as insufficient staffing resources, difficulty in planning maintenance costs, and the fast pace of technological change. RITA officials noted that some local governments will not install ITS because they do not have the staff to do the continual maintenance that the systems require. Three stakeholders and officials from six transportation agencies told us that funding the operations and maintenance of ITS is difficult to plan for, because of challenges accounting for maintenance costs and the fast pace of technology. 
The life cycle of ITS technologies is short, between 5 and 7 years, according to one ITS researcher, meaning that equipment or software will become obsolete or require retooling within that time frame. Some states and localities have developed alternative methods for financing congestion reduction efforts, including ITS projects. These supplement traditional funding sources and have included imposing additional tolls, local taxes, or fees; developing partnerships with private industry; and designating separate funding. For example, half of the budget of the Metropolitan Transportation Authority of Los Angeles County comes from a 1.5 percent sales tax dedicated to transportation, which allows the agency to fund and deploy ITS improvements countywide, on arterials, highways, and the transit system. Similarly, the Virginia DOT is constructing High Occupancy Toll lanes on I-495 through a public-private partnership; this agreement provided Virginia with needed construction funds, as the project would otherwise consume more than a year of the state's construction funds. In addition, some state and local governments have purchased traffic data from private companies because they can avoid the costs of data collection, including sensor deployment and operations and maintenance.

ITS is a rapidly developing field that requires a specialized workforce familiar with emerging technologies. Staff responsible for managing ITS systems need knowledge in a variety of areas, including project management and systems engineering, according to two FHWA division office ITS engineers. Workforce demographic changes, the competitive labor market, new technologies, and new expectations in the transportation industry combine to make attracting and retaining a capable workforce difficult for state and local transportation agencies. In addition, a 2011 National Cooperative Highway Research Program study found that U.S. universities produce too few skilled applicants for state and local DOTs.
These issues combine to affect the ability of state and local agencies, especially smaller agencies, to manage ITS. Many state and local transportation agencies struggle to maintain in-house staff with the skills and knowledge needed to manage ITS projects. Eight of the 15 experts we spoke with noted that agencies face challenges in maintaining staff with the expertise and skills needed for ITS. For example, 1 expert noted that ITS requires skills that civil engineers—with whom transportation agencies are generally well staffed—are not specifically trained in, such as understanding electrical systems, communication networks, and interagency relationship building. Another expert noted difficulty finding staff with other skills necessary for ITS management, such as contract management, systems integration, and information technology troubleshooting skills. In addition, the fast pace of technological change and resource limitations put more demands on transportation officials and limit training opportunities. RITA officials told us that transportation agencies need systems engineers to manage ITS deployment and operations but do not have them in sufficient numbers. For example, a local government official told us he has been unable to fill a vacant ITS-related engineering position because of a hiring freeze that has been in effect for over 3 years. According to this official, this makes it difficult to complete ITS projects even when funds for projects are available. Once ITS professionals have needed skills, agencies find it difficult to retain them. Eight of the 15 experts we spoke with noted that retention of qualified staff is a challenge for agencies. Limitations in salary and career opportunities can limit the ability of state and local governments to retain staff.
One expert noted that the ITS staff at his state DOT could double their salary by going elsewhere, and another mentioned a state DOT employee who had multiple job offers from the private sector and whom the state DOT could no longer afford. Additionally, officials from 10 transportation and stakeholder agencies we interviewed noted that retaining staff was a challenge. For example, officials from several transportation and stakeholder agencies noted that, because of budget restrictions, they have been unable to hire ITS staff to replace those who have retired. This is a particular issue for small agencies, according to two FHWA division office ITS engineers. The agencies controlling arterial roadways and intersections, including traffic signals, are typically county and city governments and are smaller in terms of funding and personnel, on average, than agencies controlling freeways, which are typically state governments. For example, the National Transportation Operations Coalition’s 2007 National Traffic Signal Report Card Technical Report found that agencies operating very small signal systems scored markedly lower on signal operations than all other agencies, likely because of staff not having specialized knowledge of signal systems operations and maintenance. Additionally, the report found almost one-half of all 417 survey respondents did not have staff or resources committed to monitor or manage traffic signal operations on a regular basis. According to a paper by two FHWA division office ITS engineers in California, small to medium-size agencies in the state lack qualified staff and, as a result, find it difficult to implement complex ITS projects successfully. The engineers noted that these agencies are not able to maintain staff with project management and systems engineering expertise because of insufficient ITS activity to justify a full-time staff position, high turnover of staff, and difficulty in obtaining ITS training. 
In the paper, the FHWA engineers proposed several potential solutions for these agencies, such as sharing technical staff within the same agency, sharing ITS staff between agencies, hiring consultants, or hiring another agency to perform some of the needed functions. Seven experts, six stakeholders, and officials from nine transportation agencies we spoke with noted that agencies often address these issues by hiring consultants for ITS support. State and local agency officials reported hiring consultants to perform a range of ITS tasks, such as maintaining ITS equipment, developing the regional architecture needed to meet federal requirements, and conducting the systems engineering to develop project requirements.

Of the 15 experts we spoke to, 12 rated institutional leadership and support as a challenge facing state and local governments in deploying, operating, and maintaining ITS. Five identified it as a major challenge, 3 as between a major and a minor challenge, 4 as a minor challenge, 2 as not a challenge, and 1 had no basis to judge. Experts noted that elected and appointed officials lack a good understanding of potential ITS benefits and require reeducation when there is a change in leadership, which can lead to variations in funding and other support. The majority of the experts we interviewed noted that the level of ITS leadership varies across the country and from agency to agency. As mentioned earlier, in highly congested metropolitan areas, ITS systems tend to be complex and involve multiple agencies. Transportation networks include freeways, arterial roadways, and transit systems that cross state and jurisdictional boundaries; and ITS may be implemented by numerous agencies, such as state DOTs, counties, cities, and transit agencies.
For example, in the Pittsburgh metropolitan area, approximately 260 townships manage their own traffic signals, and in the Los Angeles metropolitan area, approximately 120 cities manage their own traffic signals, according to metropolitan planning organization officials. As noted previously, better integration of data across jurisdictions can improve traffic operations and traveler information. According to FHWA, better coordination has the potential to improve a region's integration of ITS approaches, permitting agencies to leverage resources, avoid duplication, and enhance ITS effectiveness. However, we found coordination of various ITS elements and technologies is a challenge for agencies. Fourteen experts, seven stakeholders, and officials from five transportation agencies we interviewed noted that coordination across agencies is a challenge. In addition, the DOT 2010 deployment survey found that about 39 percent of freeway management agencies employ coordinated traffic incident management and only about 16 percent of freeway agencies and 28 percent of arterial agencies engage in cross-jurisdictional traffic signal coordination. Agencies face difficulty coordinating for many reasons, including differing priorities and perspectives. In 2007, we reported that common challenges transportation agencies face in coordinating include difficulties aligning perspectives when working on regional projects and addressing competing ideas of which jurisdictions should be responsible for the management and funding of ITS projects that cross boundaries. FHWA officials noted that some communities may have priorities that are contrary to the goal of creating free-flowing traffic, such as slowing down traffic through the town. Additionally, officials from six transportation agencies we interviewed discussed differing jurisdictional priorities as obstacles to regional goals.
For example, in regard to traffic signals, officials in one metropolitan area we visited told us some cities work together to manage their signals with the purpose of expediting traffic through a corridor, while other cities want to independently manage their signals to slow traffic or discourage additional traffic. In another metropolitan area we visited, metropolitan planning organization officials reported challenges deciding who will bear the financial responsibility for bus priority signals that would allow buses to have priority through traffic signals. While the transit agency that operated the buses wanted a single equipment system to enable buses to move freely at signals in the region's various jurisdictions, cities operating the traffic lights could not afford to modify their systems. In some cases, agencies are able to work together to achieve common goals to reduce congestion. For example, three jurisdictions outside of Pittsburgh—Cranberry Township, Seven Fields Borough, and Adams Township—worked together in 2008 to implement a signal coordination project along Route 228, a congested arterial corridor. These jurisdictions were able to secure a mix of local and state funding to implement the project and established an agreement to govern the maintenance of the signals. According to an evaluation, the project could yield total benefits of up to approximately $2 million in reduced delay, reduced fuel consumption, and reduced emissions over a 5-year period. For a 5-year cost of $70,000, the public could realize a benefit-to-cost ratio of as much as 30 to 1. Similarly, the I-95 Corridor Coalition has worked on a consensus basis to promote better traffic management along the I-95 corridor by involving state and local transportation agencies, toll authorities, and related organizations since the early 1990s. Initially focused on incident management, the coalition now addresses other issues including data sharing to enhance decision making by states.
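The Route 228 figures can be checked with simple division; the sketch below uses only the evaluation's reported totals (the helper function is ours for illustration, not the evaluation's own model):

```python
# Back-of-the-envelope check of the Route 228 evaluation figures cited above:
# up to ~$2 million in 5-year benefits against a $70,000 5-year cost.
def benefit_cost_ratio(total_benefits, total_costs):
    """Ratio of total benefits to total costs over the same period."""
    return total_benefits / total_costs

ratio = benefit_cost_ratio(2_000_000, 70_000)
print(round(ratio, 1))  # 28.6
```

The result, roughly 29 to 1, is consistent with the report's "as much as 30 to 1" characterization.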
Other areas in which the coalition is now working include integrating tolling systems and promoting availability of real-time truck-parking information along the corridor. DOT activities sponsored and funded by RITA and FHWA promote and support the use of ITS and address the challenges that state and local governments face in deploying and effectively using ITS technologies. We identified several leading practices for successfully encouraging the adoption of new technologies: developing a strategy to promote and support the use of technologies; choosing appropriate methods to promote the use of technology by the target audience, including making users aware of ITS resources; and monitoring technology adoption. Further use of these leading practices could improve DOT’s promotion of ITS while leveraging its resources. DOT agencies—specifically RITA and FHWA—sponsor and fund various activities that promote and support the use of ITS by state and local governments. These activities can be categorized as training and education, technical assistance, publications and guidance, ITS databases, planning and analysis tools, funding, demonstration and pilot projects, and ITS standards and architecture. RITA’s activities focus on conveying knowledge of the value and uses of ITS technologies, while FHWA’s activities promote strategies for improving traffic operations, many of which make use of ITS technologies. The activities sponsored by RITA and FHWA help state and local governments address the challenges they face in deploying, operating, and maintaining ITS technologies. For a summary of various DOT activities that address the state and local challenges we have previously identified, see appendix II. DOT has undertaken various activities that can assist state and local governments in addressing challenges they face in planning the strategic use of ITS technologies. 
FHWA sponsors a program called Planning for Operations aimed at incorporating traffic operations strategies, supported by ITS technologies, into mainstream transportation planning. For example, this approach advocates using operations-based objectives and performance measures, such as reducing delays as a result of incidents, as a basis for choosing congestion management strategies, such as traffic incident management strategies that make use of ITS technologies to identify and respond to incidents more quickly. As part of this effort, FHWA sponsors workshops for metropolitan planning organizations and has written guidance that provides examples of operations objectives, performance measures, and a sample transportation plan that includes different operational strategies. In addition, RITA hosts an ITS portal on its website that includes ITS-related information that can be useful for planning, such as databases with studies highlighting the benefits, costs, and lessons learned associated with ITS deployments. Although DOT no longer provides dedicated funding for ITS deployments, several funding mechanisms can be used for ITS-related deployments and operations. SAFETEA-LU authorizes states to use their federal aid highway funding for developing and implementing ITS systems. For example, funds from the Highway Trust Fund’s National Highway System, Surface Transportation, and Congestion Mitigation and Air Quality Improvement programs are eligible to be used for the deployment and operations of ITS technologies. Although funding of ITS technologies is not specifically tracked, FHWA officials estimate that approximately 3 to 5 percent, or between $800 million and $1.3 billion for fiscal year 2010, of federal aid highway program funds have been used for ITS technologies. For the most part, this funding is not for pure ITS projects but rather for ITS technologies that are incorporated into larger road and bridge improvement projects. 
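Because ITS funding is not specifically tracked, the dollar range above is derived from a percentage estimate. Backing out the implied funding base is a quick consistency check (the implied base is our inference from the cited figures, not a DOT number):

```python
# Consistency check of the cited FHWA estimate: $800 million to $1.3 billion
# described as "approximately 3 to 5 percent" of fiscal year 2010 federal aid
# highway program funds. The implied base is roughly $26 billion to $27 billion.
low_base = 800e6 / 0.03   # base implied by the low dollar figure and 3 percent
high_base = 1.3e9 / 0.05  # base implied by the high dollar figure and 5 percent
print(round(low_base / 1e9, 1), round(high_base / 1e9, 1))  # 26.7 26.0
```

The two implied bases agree closely, suggesting the percentage and dollar ranges in the estimate are internally consistent.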
According to FHWA officials, an internal analysis found that a similar percentage of funds, or between about $800 million and $1.3 billion, of FHWA's American Recovery and Reinvestment Act funds were used for ITS deployments, with the majority of the total American Recovery and Reinvestment Act funds being obligated between early 2009 and March 2011. In fiscal year 2010, RITA obligated approximately $28.2 million for research on emerging uses of ITS technologies and obligated an additional $12.3 million to programs supporting the deployment of ITS, including the Professional Capacity Building program. DOT also provides funding for limited trial deployments of ITS. Since 2005, FHWA has provided about $26.6 million and managed about $150.9 million of RITA's funds for demonstration projects that support the use of ITS technologies in managing traffic congestion, including four Urban Partnership Agreement projects, two Congestion Reduction program projects, and two Integrated Corridor Management projects. In addition, FHWA has sponsored several smaller-scale demonstration projects that examine and test ITS applications, such as a demonstration project to develop an enhanced 511 traveler information system. DOT sponsors multiple activities and programs aimed at ensuring that the state and local transportation workforce and leaders have adequate ITS knowledge. RITA operates a Professional Capacity Building program that aims to enhance the professional development of current and emerging ITS professionals. According to RITA statistics, between January 2010 and June 2011, the program reached over 3,400 transportation professionals through multiple activities, including 13 webinars, 8 web-based courses, 5 workshops, 6 presentations, and 12 peer-to-peer exchanges on topics such as ITS project management, systems engineering, adaptive signal control technology, and integrated corridor management.
The program is in the process of refocusing its efforts in order to prepare transportation professionals for new connected vehicle technologies as well as to allow them to take advantage of proven ITS technologies. Similarly, FHWA conducts a variety of activities aimed at building the expertise of the state, regional, and local workforce in traffic operations strategies and associated ITS technologies. In addition to offering some training courses through RITA's Professional Capacity Building program, FHWA offers its own training courses, technical assistance, and a variety of publications and guidance aimed at improving the management of traffic operations and the use of ITS. For example, between January 2010 and June 2011 FHWA offered 52 workshops, 2 webinars, and 12 peer-to-peer exchanges related to topics such as adaptive signal control technology, traffic incident management, and ITS performance measures. Most of these activities are sponsored by FHWA's Office of Operations under individual program areas, such as traffic incident management, traffic signal management, congestion pricing, and real-time traveler information. FHWA also has an additional initiative—including guidance, training, and technical assistance—aimed at improving traffic signal management. In addition, RITA and FHWA have activities focused on enhancing the knowledge of state and local leaders about traffic operations and ITS technologies. Through its Professional Capacity Building program, RITA emphasizes leadership awareness through activities such as peer-to-peer exchanges. RITA officials told us they are also considering possible new ways to reach high-level decision makers. FHWA is sponsoring an initiative that provides guidance to leaders in 12 states on how to integrate transportation operations and ITS technologies into the state planning process, with the intent of turning these states into models for other states.
Furthermore, FHWA has an effort under way to identify and contact newly appointed state DOT leaders to discuss the benefits of operational strategies that use ITS technologies, including hosting workshops with top-tier leaders. DOT promotes the coordination of ITS approaches among state and local government agencies, emphasizing the benefits of a regional approach. For example, FHWA promotes regional collaboration through its Planning for Operations program as well as the Regional Concept for Transportation Operations initiative. Specifically, this initiative provides state and local officials with various publications that encourage a coordinated regional approach in the planning for and deployment of ITS-based operational strategies, such as traffic incident management or traveler information services. RITA and FHWA also promote regional cooperation by sponsoring demonstration projects through the Integrated Corridor Management initiative. This initiative aims to integrate operational strategies and ITS technologies among transportation operators along a specific corridor, supporting interagency collaboration and the integration of systems. Additionally, RITA and FHWA promote ITS coordination through the development and support of ITS architecture and standards used to facilitate the exchange of information and ensure compatibility among ITS technologies at a regional level. One RITA official told us that the regional architecture is often the catalyst for interagency contact between state and local DOTs. Furthermore, FHWA encourages regional approaches by supporting alliances of transportation agencies in multiple states. For example, the I-95 Corridor Coalition includes 40 member agencies, toll authorities, and other entities located along the corridor that work together with the aim of creating seamless operations across jurisdictions and modes.
The coalition has been supported by RITA funds that are managed by FHWA and used for efforts that benefit all the coalition members, such as purchasing private sector data that are shared among the agencies. Similarly, the North/West Passage Corridor Coalition was created as part of a shared fund study, supported by FHWA, that combines funds among eight member states along the I-90 and I-94 corridors in order to develop effective methods for sharing, coordinating, and integrating traveler information and operational activities across state borders. The National Academies’ Transportation Research Board and we have identified a number of leading practices for successfully encouraging the adoption of new technologies. Of these, the ones we have identified as being most applicable for assessing DOT’s efforts to promote and support ITS use by state and local governments fall into three main areas (see table 3). RITA and FHWA each have strategies that guide their efforts to promote and support the use of ITS technologies at the state and local levels. RITA has developed a strategic plan for its Professional Capacity Building program that outlines goals, performance measures, and an action plan for implementation of professional development activities for ITS professionals and leaders. In addition, RITA is developing a strategy to help ensure that the results of its ITS research become commercially viable and are adopted by the transportation community and is planning to issue this strategy in the third quarter of fiscal year 2012. Likewise, FHWA’s Office of Operations has developed a plan that outlines, among other things, the activities associated with promoting better traffic operations among state and local agencies, including the use of ITS technologies. 
The plan defines goals, performance measures, and activities for each traffic operations program, such as sponsoring workshops on real-time traveler information, developing guidance on the state of the practice for traffic incident management, and creating training courses on road weather traffic management. RITA and FHWA coordinate on ITS research programs and in developing a strategic research plan for ITS, but they have not fully or clearly defined their roles and responsibilities for promoting and supporting ITS technologies. RITA and FHWA both participate in the ITS Strategic Planning Group, a departmental group that oversees DOT’s ITS research efforts. The Strategic Planning Group’s charter, a document that specifies the process for multimodal coordination, describes RITA’s leadership role in advocating for advanced ITS technologies that address congestion issues, among other things. However, the respective roles and responsibilities of RITA and FHWA in promoting and supporting ITS are not defined in the charter or in RITA’s strategic research plan. In addition, the ITS Professional Capacity Building strategic plan does not discuss the roles and responsibilities of the modal agencies, such as FHWA, in developing activities to support ITS professionals. Although RITA and FHWA officials said that they coordinate informally, we have found that, as part of agreeing to respective roles and responsibilities, collaborating agencies should clarify who will do what. We have previously identified a number of surface transportation programs where potential duplication, overlap, or fragmentation could exist. See GAO, List of Selected Federal Programs That Have Similar or Overlapping Objectives, Provide Similar Services, or Are Fragmented across Government Missions, GAO-11-474R (Washington, D.C.: Mar. 18, 2011). 
We have used the term "fragmentation" to refer to those circumstances in which more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need. The presence of fragmentation and overlap can suggest the need to look closer at the potential for unnecessary duplication. However, determining whether and to what extent programs are actually duplicative requires programmatic information that is often not readily available. One expert and a transportation agency said that the roles of RITA and FHWA should be better defined so that state and local government officials are aware of which agency is playing which role. However, according to a RITA official, the focus of this effort is currently on meeting with select universities to identify the learning providers. Furthermore, in comparing RITA and FHWA websites related to ITS, we found that each of the sites provided links to different studies and guidance for several of the same or similar ITS uses. For example, in a search for the benefits associated with arterial management applications, RITA's and FHWA's websites provided different documents with no clear coordinated approach to addressing the topic. Similarly, in a search for training opportunities on arterial management, we looked at two FHWA websites and a RITA website and found 16 different courses cited. FHWA officials noted that such inconsistencies exist because each agency has a different outlook on ITS technologies. In addition, the large array of information and pace of development make it difficult to completely align the websites. RITA's and FHWA's websites provide some links to each other's ITS resources, such as between FHWA's Arterial Management program and Adaptive Signal Control Technologies program and RITA's ITS databases. However, these unclear roles and overlapping resources, given the current fiscal environment, may inhibit RITA and FHWA from fully leveraging their resources to promote ITS.
RITA and FHWA have defined their target audiences for promoting and supporting ITS technologies. RITA’s Professional Capacity Building strategic plan defines the target audience as the ITS practitioner, including federal, state, and local level professionals from all surface modes, decision makers, researchers, and students. However, a RITA official told us that the agency intends to more narrowly define its target audience to better focus its efforts. According to FHWA officials, FHWA defines its main audience as state DOTs, in part because of its role in administering the federal aid highway program. FHWA is building stronger relationships with metropolitan planning organizations and transportation agencies in major metropolitan areas as part of its efforts to promote improved traffic operations, according to an FHWA official. However, the official noted that it is difficult to work with local transportation agencies, since there are so many of them. As previously mentioned, smaller transportation agencies tend to face additional challenges in deploying ITS technologies, such as having limited time or knowledge to plan for ITS and difficulty recruiting and retaining a qualified workforce to manage ITS. RITA and FHWA involve stakeholders in the process of developing activities and information on traffic operations and related ITS technologies. RITA has elicited input from stakeholders in developing its activities. For example, the agency conducted three user workshops in developing the Professional Capacity Building strategic plan, getting feedback from 148 multimodal public and private sector users in two interactive web meetings. RITA issued a request for information in July 2011, seeking input from interested public, private, and academic entities in identifying the needs for ITS learning among transportation professionals and innovative techniques for delivering ITS learning. 
FHWA also involves stakeholders at the program-planning level, specifically when major products are being developed. For example, an FHWA official told us that the Planning for Operations program used peer groups from metropolitan planning organizations to develop and review guidance materials. Experts, transportation agencies, and stakeholders we interviewed considered some of the activities sponsored by RITA and FHWA more useful than others. The 14 experts we interviewed considered training and education activities, including webinars, as well as technical assistance activities, such as the peer-to-peer exchanges, to be the most useful of the activities offered by RITA and FHWA. Many of the transportation agencies and stakeholders we interviewed found webinars particularly useful. Additionally, experts and transportation agencies we interviewed, as well as stakeholders RITA consulted, indicated that opportunities to share information among their peers, either via workshops or peer-to-peer exchanges, provide valuable ways to learn from others’ experiences. A RITA official told us that the peer-to-peer program may be phased out as RITA refocuses the agenda of the Professional Capacity Building program on connected vehicle technologies, leaving less of a focus on mainstream ITS. In RITA’s planning workshops, users indicated that they primarily would like real-world experience “from the source,” stating that opportunities to learn from peers, including peer-to-peer exchanges, are a desirable way to learn. In our interviews, two transportation agencies and three experts also said that it would be useful to have more opportunities to learn from peers. RITA’s refocused agenda could decrease the opportunities for state and local officials to participate in an effective method for relaying ITS information and technical assistance to DOT’s target audience.
In contrast, other resources, such as the information sources sponsored by RITA and FHWA, may not be as useful to state and local officials. According to the experts we interviewed, RITA’s and FHWA’s publications and guidance related to ITS, as well as the ITS databases, were not considered as useful as other activities. While several transportation agencies noted that FHWA’s website is helpful, four experts and one state and local official said that RITA’s and FHWA’s websites have too much information and are not well organized. In addition, three experts and one transportation agency commented that it is difficult to identify needed information given the amount of information available. Specifically, one expert noted there was little effort to highlight or summarize the most important information on these websites. Users that RITA surveyed, as well as some experts and transportation agencies we interviewed, indicated that they would like specific benefit information related to ITS deployment. At the same time, the majority of experts we interviewed said that the ITS databases housing this type of information were only somewhat useful. Likewise, one transportation stakeholder did not think the databases were useful and found them difficult to navigate, while another stakeholder did not think the studies in the databases were useful. In addition, we searched the ITS database for the benefits associated with arterial management projects and found 125 separate studies in six categories dated from 1994 to 2011. Of these studies, only 21, or 17 percent, were completed in the last 5 years. RITA officials told us that fewer evaluations are being completed to include in the ITS databases, since DOT no longer provides dedicated funds for ITS deployments. In addition, as previously mentioned, DOT’s current ITS research agenda focuses on connected vehicle technologies.
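The recency share cited above follows directly from the database search counts. The sketch below is a back-of-envelope check only; the study counts are the ones reported from the arterial management search, and the variable names are illustrative:

```python
# Back-of-envelope check of the database search figures cited above:
# 125 arterial-management benefit studies dated 1994-2011, of which
# 21 were completed in the last 5 years.
total_studies = 125
recent_studies = 21

recent_share = round(100 * recent_studies / total_studies)
print(f"{recent_share} percent of studies completed in the last 5 years")
# -> 17 percent of studies completed in the last 5 years
```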
RITA officials also acknowledged that the information in the databases may be dated, but noted that the information is still useful. According to a RITA official, the information in the databases is updated on a rolling basis as DOT reports are completed and other external reports are submitted by state and local governments. A RITA official also stated that RITA tracks the monthly usage statistics for the ITS databases, although these statistics do not measure the usefulness of the databases. ITS-related information that is not easily accessible, timely, and relevant will not effectively meet the needs of state and local officials as they plan for and deploy ITS technologies, resulting in underused resources. Transportation agencies may not be aware of all of the ITS-related activities and information offered by RITA and FHWA. In an informal poll of transportation professionals that a RITA official recently conducted at two outreach events sponsored by transportation organizations, 10 of the 29 professionals polled, or about a third, were not aware of the activities and information available through RITA, and 21 percent were not aware of activities and information on transportation operations offered by FHWA. Likewise, four experts, a transportation agency, and a stakeholder we interviewed said that DOT could improve communications about ITS activities and information with state and local governments, for example, by becoming more engaged with state and local officials. Two experts said that transportation agencies were not aware of how to contact the ITS specialists in FHWA’s Resource Center who offer ITS technical assistance. According to two FHWA division office ITS engineers in California, although DOT sponsors Internet-based training, most local agencies have not taken advantage of these activities.
An FHWA official also acknowledged that it is difficult to match users with the appropriate activities and get state and local officials to take advantage of the activities available. RITA and FHWA are currently taking some steps to improve access to and awareness of ITS-related information and assistance. For example, RITA is developing plans to target audiences through partnerships with professional associations that may have more direct access to ITS practitioners, such as the Institute of Transportation Engineers and ITS America. It also plans to more effectively use University Transportation Centers, which are established to “advance significantly the state-of-the-art in transportation research and expand the workforce of transportation professionals.” RITA is also planning to use video more aggressively to promote ITS activities and to develop testimonials promoting the Professional Capacity Building program. FHWA is focusing on outreach and marketing as a critical element of an implementation plan for its traffic signals program, with the aim of increasing awareness and directly engaging stakeholders on the benefits and applicability of the strategy. SAFETEA-LU set a cap of $250,000 per fiscal year on DOT’s funding of outreach for ITS-related activities, but this cap may be lifted in the next reauthorization of surface transportation programs. As noted earlier, RITA is developing a strategy, to be issued in the third quarter of fiscal year 2012, to help ensure that the results of its ITS research become commercially viable and are adopted by the transportation community. Such a strategy could provide an opportunity for RITA, as well as its partner FHWA, to further identify methods for improving state and local transportation agencies’ access to and awareness of ITS resources related to traffic management.
Also, as noted previously, RITA is considering phasing out its peer-to-peer program, while experts and transportation agencies we interviewed, as well as stakeholders RITA consulted, indicated that methods for sharing information among peers provide valuable ways to learn from others’ experiences. Therefore, this strategy could also provide an opportunity to identify ways to facilitate the exchange of information among state and local officials. However, RITA has not yet determined to what extent its strategy will address these issues. Several options have been proposed for improving communication about ITS resources and facilitating learning exchanges. A 2011 report solicited by RITA to identify best practices for promoting ITS technologies included a recommendation that the agency create an ITS Partners program that would incorporate a number of its activities under a single brand, encourage and support the deployment of ITS by public agencies, and increase collaboration among federal agencies, state and local agencies, universities, and industry. Activities would include marketing the program, implementing an interactive website where agencies can share experiences, and establishing networks of individuals interested in specific topics. While RITA is planning to enhance partnerships with professional associations and University Transportation Centers to leverage its resources, it has not yet decided on the extent to which it will implement this recommendation. Officials cited restricted funding as a factor in their implementation decision. In addition, RITA’s Professional Capacity Building strategic plan includes a goal to establish an ITS learning portal for “one-stop shopping” of training courses, technical assistance, and peer-to-peer events. According to a RITA official, this effort is currently on hold, awaiting the results of a National Cooperative Highway Research Program study.
This study, which is being conducted by the Transportation Research Board, is focusing on designing an Operations Center of Excellence that would facilitate implementation of best practices for traffic operations, including ITS, and promote collaboration among state and local government officials in developing best practices. The study will assess the needs of state and local transportation agencies, inventory the available resources, and analyze alternative methods to implement and fund such a center. The study is expected to be completed in early 2012. DOT has not yet defined its role in establishing, supporting, and implementing such a center. A RITA official said that the organization would need extra funds if it were tasked with operating such a center and will wait for the outcome of the study to determine the role it can play. FHWA officials told us that they envision being heavily involved in setting up the Operations Center of Excellence but would prefer that it not be funded by DOT. Participation in this effort, if and when it is implemented, could allow both RITA and FHWA to identify and potentially take advantage of opportunities to leverage their ITS promotion and support activities with those of external organizations. Such leveraging is particularly important given federal fiscal constraints. As RITA develops its strategy for ensuring that the results of its ITS research become commercially viable and are adopted by the transportation community, it could benefit from working with FHWA to consider this range of options for improving communication about ITS resources related to traffic management, thereby enhancing access to and awareness of these resources and facilitating learning exchanges among state and local governments, while leveraging its resources.
Both RITA and FHWA collect information to monitor the adoption of ITS technologies and use this information to understand the level of deployment and make decisions on how to encourage the future deployment of ITS technologies, according to officials from both agencies. Nearly every year since 1997, RITA has conducted a national survey of state and local government agencies on the deployment of various ITS technologies and reported the results on its website. The deployment survey also gauges the factors affecting decisions to purchase ITS, views on benefits associated with ITS, and plans for continued investment. According to a RITA official, the agency uses the information on the level of current ITS deployments to help make decisions about future research. In addition, the survey provides feedback to RITA officials on the level of stakeholder interest in deploying specific ITS technologies and operational strategies. For example, the survey results assist the Professional Capacity Building program in determining the locations where ITS technologies are deployed and any gaps in deployment that merit attention. FHWA also uses the deployment survey to understand ITS deployment trends. FHWA officials said they use the deployment statistics when developing operations-based initiatives, such as selecting the states to include in a program aimed at accelerating the integration of ITS and operational strategies into mainstream transportation planning. In addition, FHWA recently used the 2010 survey results when issuing a Final Rule for the Real-Time System Management Information Program, which requires states to establish programs to collect traffic and travel information. The survey was used to establish a baseline for the deployment of 511 traveler information services and determine the effect this rule would have on the expansion of 511 services, according to a RITA official. 
FHWA’s Office of Operations’ plan also incorporates deployment assessments for specific operations programs, such as the Road Weather Management program. This program tracks the rate of adoption of road weather technologies, such as a decision support system that helps winter maintenance managers make road treatment decisions. As traffic congestion is projected to grow and state and local governments face fiscal constraints, ITS technologies and operational strategies supported by ITS provide opportunities for state and local governments to manage traffic congestion on the nation’s existing roadways. Furthermore, emerging uses of ITS technologies have the potential to build upon existing investments in ITS by integrating real-time traffic information and instituting proactive management techniques. However, the challenges that state and local governments face in planning and funding ITS use, ensuring that staff and leaders have adequate knowledge of ITS, and coordinating ITS approaches impede their ability to make the most effective use of ITS technologies in addressing congestion. While DOT’s efforts to promote and support the use of ITS technologies help state and local agencies address these challenges, the department could improve the effectiveness of these efforts through greater use of leading practices for promoting technology use. The lack of clearly defined respective roles and responsibilities of RITA and FHWA in promoting and supporting ITS raises questions about whether DOT could better leverage its resources and provide a more specific, cohesive strategy for ITS as it evolves. In addition, DOT’s activities may not be achieving maximum results, as state and local officials may have difficulty identifying the most relevant information or may not be aware of all of the ITS-related activities sponsored by RITA and FHWA. 
Taking steps to more effectively target efforts and leverage resources by further exploring internal and external opportunities to promote and support ITS technologies could better ensure that DOT’s activities achieve their intended purposes. Some options currently under consideration hold promise for facilitating the exchange of ITS information among state and local governments as well as for enhancing communication to improve access to and awareness of ITS-related resources. It will be important for DOT to work with its external partners and determine its role in these efforts to ensure it is fully leveraging its resources in promoting the use of ITS and maximizing its reach. If DOT does not effectively target and leverage its efforts to promote and support the use of current and emerging ITS technologies by state and local transportation agencies, it may struggle to help these agencies transition to the next generation of ITS. To effectively target efforts, leverage resources, better promote and support the use of ITS technologies by state and local governments, and improve access to and awareness of ITS resources, we recommend that the Secretary of Transportation take the following three actions: (1) clearly define and document the respective roles and responsibilities of RITA and FHWA in promoting and supporting the use of ITS; (2) revise ITS information on RITA and FHWA websites to improve its usefulness for state and local audiences based on their needs; and (3) include in RITA’s strategy for promoting the adoption of ITS technologies plans for collaborating with external partners to further enhance communication about the availability of ITS resources and to facilitate learning exchanges. We provided a draft of this report to the Department of Transportation for review and comment. DOT said it would consider our recommendations and provided technical clarifications that we incorporated into the report as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Transportation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made significant contributions to this report are listed in appendix III. This report addresses (1) how state and local governments currently use Intelligent Transportation Systems (ITS) technologies to manage traffic and emerging uses of these technologies that have the greatest potential to reduce congestion, (2) what types of challenges state and local governments face in using ITS technologies to manage traffic congestion, and (3) the extent to which the Department of Transportation’s (DOT) promotion and support of state and local governments’ use of ITS technologies have met leading practices and responded to challenges they face. To determine how and to what extent state and local governments currently use ITS technologies to manage traffic, we analyzed DOT’s policy and planning documents and data on ITS deployment from its 2010 ITS deployment survey. On the basis of interviews with DOT officials and analysis of the data, we determined that the data were sufficiently reliable for our purposes. We also analyzed pertinent legislation, documents, and studies of traffic management approaches and ITS deployment in the United States. We synthesized information from interviews with officials from DOT, including the Research and Innovative Technology Administration (RITA) and Federal Highway Administration (FHWA). 
We also interviewed officials from related associations such as the American Association of State Highway and Transportation Officials (AASHTO) and the Intelligent Transportation Society of America (ITS America). We conducted site visits to Washington, D.C.; Pittsburgh, Pennsylvania; Austin, Texas; and Los Angeles, California. At each site, we obtained documentation and interviewed officials from one or more state departments of transportation; one or more local government transportation agencies; the metropolitan planning organization; one FHWA division office responsible for the area; and, if applicable, any academics, researchers, or coalitions focused on ITS in that metropolitan area. We selected these locations from those with high congestion levels as determined by the Texas Transportation Institute’s 2010 Urban Mobility Report and varied ITS deployment levels as determined by DOT’s 2007 deployment survey database. We made a final selection of sites that included cities of different sizes and geographical representation, and one metropolitan area that spans more than one state (Washington, D.C.). We are not able to generalize our findings in these site visits to the whole country but used the other sources mentioned above to gain a more general perspective. We also conducted a literature search to identify background materials on emerging ITS technologies, published research by prospective ITS experts, and leading practices in promoting and supporting the adoption of new transportation technologies. The literature search focused on databases with transportation and engineering journal articles and conference proceedings (e.g., ProQuest and Transport Research International Documentation) as well as government reports (e.g., National Technical Information Service). The search terms used were related to using ITS for managing traffic congestion (e.g., incident response management). 
We conducted semistructured interviews with 15 experts, whom we selected based on recommendations from officials at RITA, FHWA, AASHTO, and ITS America using several criteria. The primary requirement was that each individual have expertise in at least one of the following ITS fields that are important for traffic management: freeway management, arterial management, traffic incident management, roadway operations and maintenance, traveler information, and road weather management. In addition, we selected individuals with experience in the operations or deployment of ITS; planning, development, or evaluation of ITS projects; or experience with DOT’s efforts to promote and support the use of ITS technologies. In making our final selection, we considered publications and ITS experience, as well as how frequently individuals were recommended, a proxy for their standing within the ITS community, and aimed to include a representative mix of individuals from state and local government, transportation associations, academia, and private industry (such as consultants and ITS service or equipment providers). Through this representative mix, we believe that we obtained a balanced set of perspectives. We identified emerging uses of ITS technologies, which we defined as approaches that have begun to be used over the last 5-10 years, including approaches being researched or promoted by DOT, through interviews with DOT officials, experts, and a literature search. We excluded technologies with primary applications outside roadway traffic management, such as transit ITS, except when they had bearing on roadway traffic management. The scope of our work did not include connected vehicle technology or uses of ITS aimed primarily at purposes other than managing and reducing traffic congestion, such as rural safety.
To determine what emerging uses of ITS technologies have the greatest potential to reduce congestion, we presented the experts with a list of emerging uses of ITS technologies that we identified. This list consisted of (1) real-time data capture, sharing, and management; (2) real-time traveler information; (3) integrated corridor management; (4) active transportation and demand management; (5) enhanced incident response management; (6) weather responsive traffic management; and (7) work zone management. We asked the experts if there were other emerging uses of ITS technologies that they believe have significant potential to reduce traffic congestion, and asked them to rate these and the above ITS uses on their potential to reduce traffic congestion. On the basis of the expert ratings, we selected the four emerging uses that all experts ranked as having at least medium potential to reduce traffic congestion, and which the most experts (at least 9 of the 15) rated as having high potential to reduce traffic congestion. To determine what types of challenges state and local governments face in using ITS technologies to manage traffic congestion, we conducted interviews with and obtained documents from RITA and FHWA officials, and AASHTO and ITS America representatives; conducted interviews with identified experts; reviewed published research on ITS challenges identified through a literature search; gathered information through interviews and documents collected during the site visits described above; and analyzed these various interviews and documents to identify the most frequently cited challenges. We did not otherwise assess the extent of these challenges in the locations visited, such as determining actual funding or staffing levels. 
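The two-part selection rule described above (every expert rates at least medium, and at least 9 of the 15 rate high) can be expressed as a simple filter. The sketch below is purely illustrative; the rating data are hypothetical stand-ins, not GAO's actual expert responses:

```python
# Hypothetical sketch of the selection rule described above: keep an
# emerging ITS use only if every expert rated it at least "medium" AND
# at least 9 of the 15 experts rated it "high". Ratings here are
# illustrative placeholders, not GAO's survey data.
RANK = {"low": 0, "medium": 1, "high": 2}

def selected(ratings, min_high=9):
    """True if all ratings are at least medium and >= min_high are high."""
    return (all(RANK[r] >= RANK["medium"] for r in ratings)
            and sum(r == "high" for r in ratings) >= min_high)

print(selected(["high"] * 10 + ["medium"] * 5))   # 10 high, rest medium -> True
print(selected(["high"] * 8 + ["medium"] * 7))    # only 8 high -> False
print(selected(["high"] * 14 + ["low"]))          # one "low" rating -> False
```

Both conditions must hold: a use rated "high" by every expert but "low" by one is excluded, as is a use rated at least medium by all but "high" by too few.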
To determine the extent to which DOT’s promotion and support of state and local governments’ use of ITS technologies responded to challenges they face and met leading practices, we collected information on DOT’s ITS promotion and support through interviews with RITA and FHWA officials and reviews of RITA’s and FHWA’s program and strategic planning documents, including documents related to the Professional Capacity Building program and traffic operations improvement efforts. In addition, we reviewed RITA’s and FHWA’s efforts to promote and support ITS technologies, including various studies, guidance, websites, demonstration project and highway funding, and RITA’s ITS databases. We limited our work to DOT’s activities and information relevant to the promotion and support of state and local governments’ use of ITS, not including DOT’s efforts aimed at bringing new technologies to market. We determined how DOT is required to promote and support the use of ITS technologies through reviews of pertinent laws. To determine the extent to which DOT’s efforts are meeting the challenges and leading practices, we reviewed literature on promoting and supporting the use of new technologies, including prior GAO reports, Transportation Research Board publications, and other academic publications, particularly focusing on leading practices that encourage the adoption of transportation technologies by state and local governments. On the basis of the scope and nature of DOT’s efforts, we identified the following practices as most applicable: (1) developing a strategy to promote and support the use of technologies; (2) choosing appropriate methods to promote the use of technology by the target audience; and (3) monitoring technology adoption. We compared DOT’s efforts with these leading practices and evaluated any areas needing improvement.
We also obtained the views of identified experts and state and local officials interviewed during site visits about the usefulness of DOT’s efforts and any needed improvements. We conducted this performance audit from January 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Examples of RITA’s and FHWA’s ITS-related activities and information include the following:

Planning assistance and guidance: FHWA’s Planning for Operations program sponsors webinars, case studies, and workshops. FHWA’s Resource Center and division offices provide assistance related to planning for operations, including ITS expertise. FHWA provides planning-related guidance on its website, including case studies, a desk reference on benefit/cost analysis, guidebooks, and reports. RITA’s databases provide information on the benefits and costs associated with ITS technologies. FHWA’s ITS Deployment Analysis System assists in planning for ITS deployments.

Funding: FHWA provides federal aid highway funds to states, some of which can be applied to ITS projects. FHWA has estimated that between 3 and 5 percent of the total funds, or $800 million to $1.3 billion in fiscal year 2010, has been used for funding ITS.

Demonstration and pilot projects: RITA and FHWA fund various projects aimed at applying ITS technologies, such as projects funded under the Urban Partnership Agreements, Congestion Reduction Program, and Integrated Corridor Management Program, totaling more than $177 million since 2005.

Training and education: RITA’s Professional Capacity Building program offers webinars, workshops, and presentations for ITS professionals.
FHWA’s Office of Operations and Resource Center provide seminars, training courses, and workshops for traffic operations managers as part of their efforts to improve traffic operations, such as traffic signal management. FHWA also sponsors workshops to develop local ITS champions and educate newly appointed leaders at state DOTs.

Technical assistance: RITA and FHWA facilitate peer-to-peer exchanges to transfer ITS knowledge and experiences from model users to agencies with less experience. FHWA’s Resource Center and division offices provide assistance and guidance on ITS-related issues, such as systems engineering, regional architecture, and traffic operations.

Studies and guidance: RITA’s website includes a searchable ITS library with a variety of studies and guidance. FHWA provides studies and guidance related to improving traffic operations in areas such as traffic incident management, traffic signal management, congestion pricing, and real-time traveler information, among others. FHWA’s Regional Concept for Transportation Operations initiative offers studies and guidance to promote a regional approach to transportation management and operations.

Demonstration and pilot projects: DOT’s Integrated Corridor Management projects, jointly run by RITA and FHWA, promote interjurisdictional partnerships to transform the way a corridor operates.

ITS standards and architecture: These standards and architecture, supported by the efforts of RITA and FHWA, define and support a common structure for regional ITS projects with interoperable technologies.

David J. Wise, (202) 512-2834 or [email protected]. In addition to the individual named above, Judy Guilliams-Tapia, Assistant Director; Leia Dickerson; Jennifer DuBord; Colin Fallon; David Hooper; Erik Kjeldgaard; Terence Lam; Emily Larson; Sara Ann Moessbauer; Madhav Panwar; and Joshua Ormond made key contributions to this report.

Traffic congestion burdens the nation’s quality of life and will likely grow substantially if current trends continue.
Intelligent Transportation Systems (ITS) are a range of technologies that can reduce congestion at less cost than some other approaches. The U.S. Department of Transportation’s (DOT) Research and Innovative Technology Administration (RITA) is responsible for promoting and supporting the use of ITS in coordination with other modal administrations, including the Federal Highway Administration (FHWA). Since 1994, DOT has overseen the allocation and expenditure of more than $3 billion for deploying and researching ITS. GAO was asked to address (1) the current and emerging uses of ITS technologies by state and local governments, (2) the challenges these governments face in using ITS, and (3) the extent to which DOT’s efforts to promote and support ITS address these challenges and follow leading practices. To conduct this work, GAO visited four sites and interviewed and analyzed documents and data from DOT and state and local transportation officials, ITS experts, and other stakeholders. State and local governments currently use ITS technologies in various ways to monitor and control traffic and inform travelers. For example, transportation agencies use cameras to monitor traffic conditions, signal technologies to control traffic flow, and dynamic message signs to inform travelers about travel conditions. By interviewing experts, GAO identified several emerging uses of ITS that have significant potential to reduce traffic congestion. For example, integrating traffic and emergency services data can allow for enhanced detection of and response to roadway incidents. However, some cities use ITS and the emerging uses to a much greater extent than others. State and local governments face multiple challenges in using ITS technologies to manage traffic congestion. For example, some agencies do not fully integrate ITS into their planning processes.
Funding the deployment and maintenance of ITS technologies is also an issue, because of funding constraints and competition with other needed infrastructure projects. Further, agencies struggle to attract and retain staff with the skills necessary to manage and maintain ITS systems and may not have leaders who support ITS. Finally, coordination among agencies can enhance the effectiveness of ITS through such activities as synchronizing traffic signals along a corridor, but such coordination can be difficult given agencies’ differing perspectives and priorities. RITA’s and FHWA’s activities to promote and support the use of ITS technologies help address these challenges. Both offer ITS-related training and technical assistance and provide guidance and information on their websites. FHWA estimates that states used about $800 million to $1.3 billion of their eligible 2010 federal aid highway funds and $798 million to $1.3 billion of American Recovery and Reinvestment Act funds on ITS. Further adoption of leading practices could improve these efforts. RITA’s and FHWA’s respective roles in these efforts are not clearly defined, potentially inhibiting their ability to effectively leverage resources. Some experts and transportation agencies noted that ITS-related information on RITA’s and FHWA’s websites is not always presented in a way that is useful, and some agencies lack awareness of some ITS activities sponsored by DOT. Several options have been proposed to improve communication about ITS-related activities and facilitate the sharing of ITS information among state and local officials. While RITA intends to develop a new strategy in 2012 for promoting the use of ITS, it has not yet determined whether it will incorporate any of these proposals.
GAO recommends that the Secretary of Transportation clearly define the roles of RITA and FHWA in promoting the use of ITS, improve the usefulness of ITS information on the agencies’ websites, and include in its strategy plans to further enhance communication on ITS activities. DOT reviewed a draft of this report, said it would consider our recommendations, and provided technical comments.
Influenza is characterized by cough, fever, headache, and other symptoms and is more severe than some viral respiratory infections, such as the common cold. Most people who contract influenza recover completely in 1 to 2 weeks, but some develop serious and potentially life-threatening medical complications, such as pneumonia. On average each year in the United States, more than 36,000 individuals die and more than 200,000 are hospitalized from influenza and related complications. People aged 65 years and older, people of any age with chronic medical conditions, children younger than 2 years of age, and pregnant women are generally more likely than others to develop severe influenza-related complications. Vaccination is the primary method for preventing influenza and its more severe complications. Produced in a complex process that involves growing viruses in millions of fertilized chicken eggs, influenza vaccine is administered annually to provide protection against particular influenza strains expected to be prevalent that year. Experience has shown that vaccine production generally takes 6 or more months after a virus strain has been identified, and vaccines for certain influenza strains have been difficult to mass-produce. After vaccination, the body takes about 2 weeks to produce the antibodies that protect against infection. According to CDC, the optimal time for vaccination is October through November, because the annual influenza season typically does not peak until January or February. Thus in most years, vaccination in December or later can still be beneficial (see fig. 1). If supplies permit, CDC recommends a vaccination for anyone who wants one. Because circulating influenza strains change, a new vaccine is created each year. For this reason, and because immunity declines over time, CDC recommends a new influenza vaccination every year for high-risk individuals and other priority groups, including close contacts of those at high risk. 
Two types of vaccine are recommended for protection against influenza in the United States: (1) an inactivated virus vaccine injected into muscle and (2) a live virus vaccine administered as a nasal spray. The injectable vaccine—which represents the large majority (over 95 percent) of influenza vaccine administered in this country—can be used to immunize healthy individuals and those at high risk of severe complications, including those with chronic illness and those aged 65 years and older. The nasal spray vaccine, in contrast, is currently approved for use only among healthy individuals aged 5–49 years who are not pregnant. Although vaccination is the primary strategy for protecting individuals who are at greatest risk of serious complications and death from influenza, antiviral drugs can also contribute to the treatment and prevention of the disease. In a typical year, manufacturers make influenza vaccine available before the optimal fall vaccination season. For the 2003–04 influenza season, two manufacturers—one with production facilities in the United States (sanofi pasteur) and one with production facilities in the United Kingdom (Chiron)—produced about 83 million doses of injectable vaccine, which represented about 96 percent of the U.S. vaccine supply. A third U.S. manufacturer (MedImmune) produced the nasal spray vaccine. According to CDC, MedImmune produced about 3 million doses of the nasal spray vaccine, or about 4 percent of the overall influenza vaccine supply, for the 2003–04 season. Influenza vaccine production and distribution are largely private-sector activities. Manufacturers sell influenza vaccine to resellers (such as medical supply distributors and pharmacies), to federal agencies and state and local public health departments, or directly to providers (see fig. 2). 
Individuals can obtain an influenza vaccination at a number of places, including physicians’ offices, public health clinics, nursing homes, and nonmedical locations such as workplaces or retail outlets. Millions of individuals receive influenza vaccinations through mass immunization campaigns in these nonmedical settings, where organizations such as visiting nurse agencies under contract administer the vaccine. HHS has limited authority to control vaccine production and distribution directly; influenza vaccine supply and marketing are largely in the hands of the private sector. In the event that the Secretary of HHS determines and declares a public health emergency, the Public Health Service Act authorizes the Secretary to “take such action as may be appropriate” to respond. Within HHS, CDC is one of the agencies that help protect the nation’s health and safety. CDC’s activities include efforts to prevent and control diseases and to respond to public health emergencies. ACIP, after consulting with CDC, makes recommendations on which population groups should be targeted for vaccination. CDC also administers a number of programs to help make vaccines, including influenza vaccine, affordable for low-income and other populations. For example, under CDC’s Vaccines for Children program, vaccines are provided free of charge for certain children 18 years of age or younger, including those who are Medicaid-eligible, uninsured, or underinsured (that is, their insurance does not include vaccinations). CDC also reserves stockpiles of certain vaccines. For the 2004–05 influenza season, CDC contracted with vaccine manufacturers to supply influenza vaccine for a national stockpile for the first time. The agency originally contracted for 4.5 million doses, including 2 million doses from Chiron that, because of Chiron’s production problems, were not available. CDC also maintains stockpiles of antiviral medications that can alleviate influenza symptoms and reduce contagion in those who contract the disease.
Other organizations within HHS that are involved with immunization activities include the National Vaccine Program Office, which is responsible for coordinating and ensuring collaboration among the many federal agencies involved in vaccine and immunization activities, and the Food and Drug Administration (FDA), which in approving and regulating the use of vaccines and drugs, including antiviral medications, is responsible for ensuring that they are safe and effective. In addition to federal agencies, state and local health departments are often the first responders in situations affecting public health. For the 2004–05 influenza season, CDC initially recommended, in May 2004, that about 188 million Americans receive a vaccination—about 85 million at high risk of severe complications and about 103 million in other priority groups, such as people in close contact with high-risk individuals, healthy people aged 50–64 years, and health care workers. CDC also suggested that, depending on the availability of vaccine, vaccination could extend to other individuals, including (1) any person who wished to reduce the likelihood of contracting influenza, (2) individuals who provide essential community services, and (3) students and others in institutional settings. Although Chiron had announced that it was experiencing production problems in August 2004, according to CDC, the manufacturer had assured the agency that the production issues were being resolved. Subsequently, on September 24, 2004, CDC reiterated its recommendation that 188 million individuals in high-risk and other groups be vaccinated as vaccine became available. CDC also recommended that anyone wanting to reduce the risk of contracting influenza be vaccinated. Not everyone in these high-risk and priority groups, however, receives a vaccination each year. Among health care workers, for example, about 40 percent received a vaccination in the 2002–03 and 2003–04 seasons, according to one CDC survey.
Similarly, about 66 percent of individuals aged 65 years and older reported receiving influenza vaccination in the 2002–03 and 2003–04 influenza seasons, according to CDC estimates. After the October 5, 2004, announcement of the sharp reduction in expected influenza vaccine supply, federal, state, and local health officials took steps to help ensure that those at high risk of severe complications from infection had access to influenza vaccine. For example, health officials quickly revised vaccination recommendations so that the remaining supply could be targeted to those in priority groups comprising those at high risk, certain health care workers, and household contacts of children younger than 6 months of age. Other efforts focused on distributing vaccine to priority groups and on keeping providers and the public updated as to vaccine availability. Finally, late in the influenza vaccination period—from mid-December through January—health officials’ actions focused on further augmenting the vaccine supply and, once supply increased, on encouraging vaccination for anyone remaining in the priority groups and for others who had earlier deferred vaccination (see fig. 3). Several responses by public health officials took place within hours or days of the public announcement that a severe shortage of influenza vaccine was imminent. Federal and state health officials redefined priority groups for influenza vaccination. CDC immediately redefined the groups recommended to receive vaccine in 2004–05 for protection against influenza and its complications and issued revised recommendations on October 5, 2004. These revised recommendations focused on priority groups that included high-risk individuals, health care workers involved in direct patient care, and household contacts of children younger than 6 months of age. CDC’s revised recommendations decreased the number of people in groups recommended for vaccination from about 188 million to about 98 million (see table 1). 
At the same time, CDC also asked people not in these priority groups to forgo or defer vaccination. State and local health officials we met with reported having quickly adopted CDC’s revised recommendations. Some health departments, however, found that they did not have enough vaccine to cover everyone in CDC’s priority groups and therefore subdivided CDC’s priority groups. For example, in Maine, all health care workers were initially excluded from the state’s priority groups, although later, Maine health officials recommended vaccination for particular types of health care workers, such as those working in intensive care units and emergency departments, if local vaccine supply allowed. HHS collaborated with manufacturers to temporarily halt further distribution of injectable influenza vaccine and to ramp up production of nasal spray vaccine. At the request of CDC, on October 5, 2004, sanofi pasteur, the sole remaining manufacturer of injectable influenza vaccine for the U.S. market, voluntarily suspended further distribution of the approximately 25 million doses it had not yet shipped; the suspension lasted until the week of October 11, 2004, when CDC completed its assessment of the situation. Distribution was temporarily halted because CDC needed time to devise a plan to better target vaccine distribution to providers serving individuals in the priority groups. HHS officials also worked with MedImmune, the maker of the nasal spray vaccine, to increase its production for the 2004–05 influenza season from about 1 million doses to a total of 3 million doses. Federal officials evaluated foreign sources of influenza vaccine and assessed the federal stockpile of antiviral medications. On October 11, 2004, HHS convened an interagency team, comprising officials from HHS’s Office of the Secretary, CDC, FDA, and others, to devise a plan to import influenza vaccine not licensed for the U.S.
market from foreign manufacturers; this vaccine could be administered in the United States under an investigational new drug protocol. Around the same time, FDA quickly authorized the redistribution of vaccine among hospitals and other health entities to alleviate shortages. HHS also assessed its stockpile of antiviral medications that could be used to prevent or treat influenza and began the process of purchasing more. According to HHS officials, by December 2004 the federal government purchased and stockpiled enough antiviral medicines to treat more than 7 million people. State and local health departments used existing emergency plans and incident command systems. Some state and local health departments used their emergency preparedness plans and incident command systems (the organizational systems set up specifically to handle the coordinated response to emergency situations) during the influenza vaccine shortage. The five state health departments and two of the local health departments we visited used their incident command systems to help manage shortage-related activities, and three of the state health departments reported using their emergency plans. In addition, officials from the Florida Health Care Association, an organization representing long-term-care providers in that state, reported using certain elements in their disaster planning guide, which includes plans for disasters like hurricanes or bioterrorism. Federal and state officials took measures against price gouging. Around the time (October 13, 2004) that one Florida-based distributor was sued by that state for selling influenza vaccine at significantly inflated prices, several states began issuing warnings that all suspected cases of price gouging by vaccine distributors and providers would be reported to the states’ attorneys general for further investigation and possible prosecution. 
In support of states’ efforts to curtail the overpricing of limited influenza vaccine, CDC began collecting reports of price gouging and shared the information with the National Association of Attorneys General and state prosecutors. On October 14, 2004, the Secretary of HHS sent a letter to the attorney general of each state, urging thorough investigation of reports of price gouging, and on October 22, 2004, HHS filed a “friend of the court” brief in support of the Florida lawsuit. Beginning in mid-October, federal, state, and local public health officials acted to distribute the remaining 25 million doses of injectable influenza vaccine across the states and directed the limited amount of available injectable vaccine to those in priority groups. State and local public health departments also took steps to help ensure that vaccine was distributed to those within their jurisdictions who were in priority groups. In October and November, working with representatives from national public health organizations and sanofi pasteur, CDC developed a plan to distribute sanofi pasteur’s unshipped vaccine. The plan consisted of two overlapping phases and was aided by the manufacturer’s voluntary sharing of proprietary information to help identify geographic areas in greatest need of vaccine. Phase I, which began the week of October 11, 2004, consisted of filling orders that were clearly identifiable as public-sector orders and orders, such as those from long-term-care facilities, that had been placed with sanofi pasteur. Orders selected for full or partial filling included those that could be immediately identified as placed by the Department of Veterans Affairs, the Indian Health Service, long-term-care facilities and hospitals, and others (see table 2). Filling these orders distributed approximately 13 million doses of vaccine over a 6–8 week period. 
Phase II, which was announced by CDC on November 9, 2004, consisted of distributing approximately 12 million doses: about 3 million doses for some of the remaining public-sector orders from phase I and about 9 million doses across the states according to a formula based on each state’s percentage of the estimated nationwide unmet need. CDC calculated a state’s unmet need by taking the total estimated number of individuals in priority groups in the state and subtracting the total number of doses that had been delivered before and during phase I. To help state health officials identify the regions within their states needing vaccine from phase II distribution, CDC developed an Internet-based program called the Flu Vaccine Finder on its secure data network. The program allowed state health officials to view, county by county, a list of vaccine orders shipped by sanofi pasteur to various types of customers, such as pediatricians and hospitals. Officials could then allocate vaccine available to their state under phase II to providers within their state that needed, but had not yet received, vaccine (see fig. 4). According to CDC officials, the agency understood that not all of the phase II doses would be ready to ship to states at once, so orders were partially filled and shipped in waves. Furthermore, the formula for determining each state’s allocation was imperfect, according to CDC, resulting in some states’ having more vaccine than needed to cover demand from those in priority groups and other states’ having too little. In response, CDC reallocated vaccine available for ordering by states in December 2004. In addition, some states found it necessary to redistribute vaccine within their own borders, or they attempted to purchase vaccine from, or sell vaccine to, other states to best align supply and demand at local levels. States could begin ordering their vaccine allotments through the secure data network on November 17, 2004, and ordering continued through mid-January.
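The phase II formula described above — each state's share of the roughly 9 million doses proportional to its fraction of the estimated nationwide unmet need, where unmet need is priority-group size minus doses already delivered — can be sketched as follows. The state names and all figures below are hypothetical illustrations, not CDC's actual data.

```python
# Sketch of CDC's phase II proportional-allocation formula:
# unmet need = (estimated people in priority groups) - (doses delivered
# before and during phase I); each state's allocation is its share of
# nationwide unmet need times the doses available. Numbers are hypothetical.

def allocate_phase2(states, total_doses):
    """states: dict of name -> (priority_group_size, doses_delivered)."""
    unmet = {
        name: max(priority - delivered, 0)
        for name, (priority, delivered) in states.items()
    }
    nationwide = sum(unmet.values())
    return {
        name: round(total_doses * unmet[name] / nationwide)
        for name in states
    }

# Hypothetical example: three states sharing 9 million phase II doses.
example = {
    "State A": (5_000_000, 3_000_000),  # unmet need: 2,000,000
    "State B": (2_000_000, 1_500_000),  # unmet need:   500,000
    "State C": (3_000_000, 500_000),    # unmet need: 2,500,000
}
allocation = allocate_phase2(example, 9_000_000)
```

As the report notes, the real formula proved imperfect because estimated need is not the same as actual demand, which is why some states ended up with surpluses and others with shortfalls.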
Public health officials at all levels implemented various strategies to help ensure that their vaccine supplies were targeted to high-risk individuals and others in priority groups. Emergency directives issued. To help support providers in vaccinating only those individuals in CDC’s priority groups, a number of states, such as California and Florida, issued emergency public health directives requiring health care providers to limit influenza vaccination to people in priority groups and to refrain from vaccinating individuals not in CDC’s priority groups. Some of these directives, including those of the District of Columbia and Michigan, explicitly stated that providers failing to comply with these directives could face penalties, such as fines or imprisonment. But some states chose not to issue emergency directives. For example, Minnesota state health officials reported that they had such strong voluntary compliance and cooperation from the state’s provider community that they decided it was not necessary to post a directive mandating compliance. Surveys conducted of providers and long-term-care facilities. During mid-October, working with national professional organizations, CDC conducted a survey of long-term-care facilities to identify those that had placed orders with Chiron. A number of health departments, including six we visited, had also surveyed long-term-care facilities, and at least two, Minnesota and Seattle–King County in Washington State, completed their surveys before CDC began administering its version. In addition, many state health departments, including three we visited, surveyed providers about vaccine availability and the need for covering those in priority groups. 
In an effort to assess the degree of the vaccine supply shortage, for example, Minnesota public health officials developed and administered a survey to identify how much influenza vaccine was available in each of its 92 local public health jurisdictions, not knowing before the shortage which providers had ordered vaccine from Chiron or which ones had ordered from sanofi pasteur. Vaccine transferred among states. Because CDC’s distribution plan was based in part on estimated need for vaccine, some states received more than enough to cover demand from their priority groups, and some states received too little. To redistribute vaccine to locations that needed vaccine to meet demand from priority groups, a state could attempt to sell its available vaccine to another state. According to the Association of State and Territorial Health Officials, Nebraska shipped some vaccine to other states when its own demand was met. Minnesota state health officials also reported offering to sell available vaccine to other states. At the same time, states without enough vaccine, such as Maryland, tried to obtain it from another state. Partnerships established with the private sector. To augment state and local vaccine supply, public health departments looked to the private sector for help. A number of state and local health departments we talked with reported facilitating redistribution or acting as brokers for donations of vaccine that had been purchased by large employers for employee vaccination campaigns before the shortage. According to health officials in Washington, for example, one large employer donated about 700 doses of influenza vaccine to the health department in Seattle–King County, which was then able to supply local nursing homes. Certain states and localities partnered with for-profit and not-for-profit home health organizations, which held mass immunization clinics and set up clinics in providers’ offices to help administer the vaccine quickly.
For example, the Visiting Nurses Association of Southern Maine held a mass immunization clinic on a local college campus. These organizations followed CDC’s recommendations for vaccinating priority groups by screening potential vaccine recipients. Crowding alleviated through appointments and lotteries. In an effort to control crowding, health officials in some localities created vaccination appointments for individuals who were at high risk or in another priority group. When available supplies were insufficient to cover every qualified person who wanted a vaccination, some health departments held lotteries for available vaccine. The local public health department in Portland, Maine, for example, held a lottery for the small amount of vaccine it had received before the shortage plus the several hundred doses donated by an area medical center and the state department of health. To register for the lottery, people had to show they belonged to a priority group by supplying a note from their provider. Throughout the 2004–05 influenza vaccine shortage, federal, state, and local health officials used a variety of communication mechanisms to keep health officials, providers, and the public updated about vaccine availability and about the various strategies for distribution to providers and the public. At the federal level, CDC held frequent press conferences beginning in early October 2004. At these events, the agency updated the public on current efforts and recommendations, and it asked people who did not belong to a priority group to step aside and defer vaccination so that those in the priority groups would have access. CDC also conducted biweekly conference calls with representatives from various national health organizations to update them and obtain their feedback on distribution efforts. According to CDC officials, state and local health officials could generally access the minutes from these discussions the following day on CDC’s Health Alert Network. 
CDC also used this network to send advisories and updates on the influenza vaccine situation, beginning on October 5, 2004, and continuing through the end of January. The majority of the state health officials we met with reported receiving key information about the shortage from this network; the information was then forwarded to local health officials, hospitals, and medical associations that, in turn, passed the information on to providers. State and local health officials we met with also reported using various communication methods to relay national guidance, along with state and local guidance, and information about vaccine availability. These communication methods included mass e-mails and faxes; public education campaigns for influenza prevention; the media, including television, radio, and newspapers; telephone hotlines; and Web sites (see table 3). Late in the influenza vaccination period, from mid-December 2004 through January 2005, federal and state health officials took several actions intended to further augment the vaccine supply and make vaccine more accessible. Four areas were addressed: broadened recommendations for groups to be vaccinated, modifications to the Vaccines for Children program, purchase of foreign-made vaccine, and release of the federal stockpile of influenza vaccine. CDC and states broadened the priority groups for influenza vaccination. On December 17, 2004, CDC announced broadened vaccination recommendations to include those aged 50–64 years and household contacts of high-risk individuals in locations where state and local health officials judged vaccine supply to be adequate. CDC’s broadened recommendations became effective January 3, 2005, allowing extra time for vaccination of individuals in the original priority groups and time for state and local health departments to prepare for increased requests for vaccine.
As of January 3, 2005, however, according to information from the Association of State and Territorial Health Officials, 20 states had already expanded vaccination recommendations: 13 specified the additional groups identified by CDC, and 7 lifted all vaccination restrictions, allowing anyone wanting a vaccination to get one. On January 27, 2005, CDC endorsed states’ efforts to broaden vaccination recommendations to include all people wanting influenza immunization in states and localities where vaccine supply was sufficient to do so. Before that date, according to association officials, 27 states had already expanded recommendations to include everyone, although a few states waited longer to expand recommendations. CDC made vaccine from the Vaccines for Children program more widely available. CDC’s ACIP passed a resolution for CDC’s Vaccines for Children program, effective December 17, 2004, that expanded the groups of children eligible to receive the program’s influenza vaccine to include program-eligible children outside of CDC’s priority groups who were household contacts of people in high-risk groups. Later, on January 27, 2005, CDC authorized limited amounts of influenza vaccine from the Vaccines for Children program and held by the states to be transferred to state health departments for nonprogram use where the demand among program-eligible children had already been met. Public providers that had a reserve of program vaccine after vaccinating their program-eligible children could then use this vaccine for adults and children who were not eligible for the Vaccines for Children program. HHS purchased foreign-manufactured influenza vaccine for the U.S. market. After efforts initiated in early October to develop a plan to obtain foreign-made influenza vaccine that was not licensed for the U.S. 
market and make it available under an investigational new drug protocol, HHS in December 2004 purchased about 1.2 million doses from one manufacturer in Germany and, in January 2005, purchased about 250,000 doses from another manufacturer in Switzerland. CDC could then make this vaccine available to those states and localities wanting additional vaccine to alleviate shortages. According to HHS officials, however, none of the additional doses were used in the 2004–05 influenza season. CDC made stockpiled vaccine available to providers. On January 27, 2005, after the production of 3.1 million late-season doses designated for CDC’s stockpile of influenza vaccine, CDC announced that it would make this vaccine available to sanofi pasteur, which, in turn, could market and sell the vaccine to public and private providers and then replenish CDC’s stockpile. This strategy allowed providers to order influenza vaccine directly from the manufacturer or a distributor, rather than go through state or local health departments. Providers who purchased these stockpiled doses would also be allowed to return unused vaccine for a credit and would have to pay only shipping costs for returned vaccine. Although the actions taken to address the influenza vaccine shortage helped achieve vaccination rates approaching past levels for certain priority groups (see fig. 5), a number of lessons emerged from federal, state, and local responses to the 2004–05 influenza shortage. Some lessons were specific to that season’s shortage, and others have wider ramifications for potential future shortages or a pandemic. The primary lessons can be grouped into three broad, interrelated categories: planning, timely action, and communication. Before October 5, 2004, CDC lacked a contingency plan specifically designed to respond to a scenario involving a severe influenza vaccine shortage at the start of the traditional fall vaccination period; the absence of a plan led to a delay in response.
Faced with the unanticipated shortfall in the amount of influenza vaccine expected to be available for the 2004–05 influenza season, CDC revised recommendations and worked with sanofi pasteur to begin assessing available supply and to create a distribution plan for the remaining vaccine. Developing and implementing this plan took time and led to delays in response and some confusion at the state and local levels, particularly right after Chiron’s October 5, 2004, announcement. Public health officials in all five states we visited remarked that although phase I of CDC’s redistribution plan quickly and effectively distributed some vaccine to public and private providers serving priority groups, the vaccine available in phase II of CDC’s redistribution plan was too much, too late. Phase II ordering began on November 17, 2004, and continued into January 2005, but several weeks could elapse after orders were placed until vaccine was delivered. According to state and local public health officials we interviewed, by the time the vaccine was delivered through a cumbersome distribution process, demand for the vaccine had substantially waned, and public and private providers were left to redistribute the excess. The phase II distribution problem was compounded for state and local health officials because CDC restricted access to its secure data network to two people per state. This narrow restriction left several state and local public health officials, according to those we interviewed, without vital information about the supply or demand for vaccine. Our work showed that four areas of planning are particularly important for enhancing preparedness before a similar situation in the future: (1) defining the responsibilities of federal, state, and local officials; (2) using emergency preparedness plans and emergency health directives; (3) distinguishing between demand and need; and (4) identifying mechanisms for distributing and administering vaccine. 
Better defining responsibilities of federal, state, and local officials can minimize confusion. During the 2004–05 vaccine shortage, CDC worked with national organizations representing states and localities to coordinate roles and responsibilities. Several public health officials we spoke with reported that CDC effectively worked with sanofi pasteur and national organizations representing state and local health officials to coordinate responsibilities shortly after Chiron’s announcement. Despite these efforts, however, problems occurred. For example, to identify national demand for vaccine, federal, state, and local health officials surveyed providers in states and localities to assess existing supply and additional need. CDC worked with national professional associations to survey long-term-care providers throughout the country to determine if seniors had adequate access to vaccine. Maine and other states, however, also surveyed their long-term-care providers to make the identical determination. This duplication of effort expended additional resources, burdened some long-term-care providers in the states, and created confusion. Emergency preparedness plans and emergency health directives help coordinate local response. State and local health officials in several locations we visited reported that using existing emergency plans or incident command centers helped coordinate effective local response to the vaccine shortage. For example, public health officials from Seattle–King County said that using the county’s incident command system played a vital role in coordinating an effective and timely local response and in communicating a clear message to the public and providers. In addition, according to public health officials, emergency public health directives helped ensure access to vaccine by supporting providers in enforcing CDC’s recommendations and in helping to prevent price gouging in those states whose directives addressed price gouging.
Certain officials we spoke with, however, reported that although plans and directives helped, improvements were still needed. Some health officials indicated that as a result of the past influenza season, they were revising state and local preparedness plans or modifying command center protocols to prepare for future emergency situations. For example, in Maine, after experiences during the 2004–05 influenza season, state officials recognized the need to speed completion of their pandemic influenza preparedness plan. In addition, they said the vaccine shortage experience helped identify which officials should attend which meetings during a crisis to ensure the right people have the right information. Distinguishing between demand and need for vaccine can improve distribution. In discussing the adequacy of vaccine supplies, public health officials make a distinction between demand and need for vaccine by a high-risk group. In this context, demand is the number of high-risk individuals who want to receive an influenza vaccination, and need is the total number of high-risk individuals in an area or region, regardless of whether they want to receive a vaccination. Because some individuals in high-risk groups are unlikely to be vaccinated, estimating vaccine amounts on the basis of total need, rather than demand, can overstate the amount that will likely be used in any given location. Differentiating between demand and need would have helped states avoid substantially over- or underordering vaccine from CDC or a manufacturer. California state officials said that differentiating between demand and need earlier in the season could have reduced delays and confusion during the shortage. Certain states and localities we visited had taken time before the season to address contingencies for vaccine supply fluctuations. 
For example, Minnesota state officials used experiences in previous influenza seasons to build a state influenza plan that educated providers and local public health officials about the difference between demand and need. According to state officials, communicating this difference to local providers and health officials helped more accurately identify how much vaccine was in demand throughout the state. The distribution and administration of vaccine can be facilitated. One mechanism used in a majority of the states and localities we visited was building partnerships between public and private sectors. This mechanism was effective in both the distribution and the administration of vaccine. In San Diego County, California, for example, local health officials worked with a coalition of partners in public health, private businesses, and nonprofit groups throughout the county. In addition, several states and localities also partnered with other organizations, including home health organizations, to increase their capacity to administer vaccine to large numbers of people. For example, public health officials, including those in California and Florida, worked with national home health organizations to quickly immunize those in high-risk and other priority groups by holding mass immunization clinics. Other mechanisms we identified, aimed mainly at addressing the challenge of administering a limited amount of vaccine, included scheduling appointments and holding lotteries. In Stearns County, Minnesota, for example, public health officials worked with private providers to implement a system of vaccination by appointment. Rather than standing in long lines for vaccination, individuals with appointments went to a clinic during a given time slot. 
Public health officials in Portland, Maine, emphasized the effectiveness of holding a lottery as a way to equitably administer limited amounts of vaccine to people and as an alternative to having large crowds show up for a limited number of doses. After the 2004–05 influenza season, CDC officials developed lessons learned from their experiences, including lessons on the importance of contingency planning and defining which groups have higher priority in the event of a vaccine shortage. In August 2005, CDC issued interim guidelines to assist state and other immunization programs in planning for and dealing with an influenza vaccine shortage during the 2005–06 season. Also in August 2005, CDC published potential priority groups for vaccination in the event of a shortage. Because the total vaccine supply for the 2005–06 influenza season was not then known, however, CDC did not recommend setting priorities for injectable vaccine at that time. On September 2, 2005, CDC published priority recommendations for use of injectable vaccine through October 24, 2005. During the 2004–05 influenza vaccine shortage, federal, state, and local officials needed to continually adapt to changing vaccine supply and demand, to make decisions, and to take action quickly. The actions they took after the traditional fall vaccination period, however, came too late to boost supply while demand was still high. These actions included making available foreign-manufactured vaccine that was not licensed for the U.S. market, expanding availability of vaccine from the Vaccines for Children program, and releasing vaccine reserved for the federal stockpile. HHS’s decision to purchase influenza vaccine not licensed for the U.S. market and to make it available under an investigational new drug protocol was too late to mitigate the shortage’s effects because of when such vaccines became available and because of cumbersome administrative requirements. 
Soon after Chiron’s October 5, 2004, announcement, HHS started looking into foreign vaccine that was licensed for use in other countries but not in the United States. Nonetheless, by the time HHS purchased this vaccine in December 2004 and January 2005, there was little demand for it. CDC officials acknowledged that one lesson learned from experience in 2004–05 was that use of foreign-licensed vaccine under an investigational new drug protocol during the influenza season requires that vaccine be shipped no later than the beginning of October. Further, recipients of such vaccines may be required to sign a consent form and follow up with a health care worker after vaccination—steps that, according to health officials we interviewed in several states, would be too cumbersome to administer and could dampen public enthusiasm for being vaccinated. Although about 1.5 million doses of this vaccine became available, none were used because demand had fallen, and injectable vaccine licensed for the U.S. market was still available. CDC’s December 2004 and January 2005 implementation of decisions to make vaccine from the Vaccines for Children program more widely available was not timely and lacked flexibility. CDC explored options to use program vaccine to vaccinate three groups of people—children eligible for the Vaccines for Children program but not in a priority group, children not eligible for the program, and adults—but only in geographic areas where the needs of eligible children in high-risk groups had been met. But by the time CDC determined that demand from eligible children had been met and announced that it was taking steps to make more program vaccine available for others, many states’ demand for additional vaccine had dropped. Because vaccine purchased under the Vaccines for Children program became available for nonprogram use so late, some states reported they were unable to vaccinate all their state’s children in CDC’s priority groups.
In other states, vaccine purchased under the program remained unused after all program-eligible children were vaccinated, but completing the process to transfer the unused vaccine delayed some states from administering the remaining vaccine to individuals not eligible for Vaccines for Children. Since CDC expanded program vaccine availability too late, vaccine purchased under the Vaccines for Children program ultimately went unused. As a result, CDC is surveying epidemiologists, state health officials, and immunization managers on lessons learned to connect activities to outcomes, such as releasing program vaccine to increase immunization rates. Further, state health officials we interviewed reported that administrative difficulties in making vaccine available to a broader population hindered its ready use during the shortage. According to state health officials in California and Washington, if broadening Vaccines for Children eligibility had been more flexible and allowed more efficient transfer of vaccine to those not in the program, vaccine could have been made available sooner and more widely to people in priority groups. CDC’s decision to release influenza vaccine produced for its national stockpile was also ineffective because the action came too late. The majority of doses reserved for the stockpile were not delivered until January 2005 because CDC wanted doses produced earlier in the season to be available to fill state orders. By the time the stockpiled doses were released back to the manufacturer for purchase by providers and others in January, national demand had shrunk. Of the 3.1 million doses of injectable vaccine released from the stockpile in January 2005, only approximately 115,000 were ordered. Without exception, state health officials in the five states we visited reported that this vaccine became available too late in the season to be useful. 
Finally, certain states faced barriers when trying to buy available influenza vaccine from other states, preventing timely redistribution. During the 2004–05 shortage, some state health officials reported problems with their ability—both in paying for vaccine and in administering the transfer process—to purchase influenza vaccine. For example, Minnesota tried to sell its available vaccine to other states seeking additional vaccine for their high-risk populations. According to federal and state health officials, however, certain states lacked the funding or flexibility under state law to purchase the vaccine when Minnesota offered it. In response to problems encountered during the 2004–05 shortage, the Association of Immunization Managers proposed in 2005 that federal funds be set aside for emergency purchase of vaccine by public health agencies, eliminating cost as a barrier in acquiring vaccine to distribute to the public. While parts of the lesson learned about communication were positive, other aspects pointed to a need for improvement. Positives can be seen, for example, in the extent of CDC’s communication. During the 2004–05 shortage, CDC communicated regularly through a variety of media as the situation evolved. Officials from most states and localities we talked with reported that CDC played an active role in communicating information despite a changing environment. Several state and local officials we spoke with said that biweekly conference calls were effective in providing updates and coordinating responsibilities. The state health officer from Alabama, for instance, noted the frequency and quality of the communications that CDC put forth during the influenza season. Despite these positives, when examining the 2004–05 influenza season, state and local officials identified areas of communication to improve for future seasons. During our visits to states and localities, we found four particularly important communication issues.
These issues included maintaining consistency of communications to avert confusion, understanding the importance of changing messages under changing circumstances, using diverse media to reach diverse audiences, and educating providers and the public about prevention alternatives. Consistency among federal, state, and local communications is critical for averting confusion. Health officials in Minnesota, for example, reported that some confusion resulted when the state determined that the influenza vaccine supply was sufficient to meet demand and therefore made vaccine available to other groups, such as healthy individuals aged 50–64 years, earlier than recommended by CDC. Similarly, health officials in California reported that in mid-December, local radio stations in the state were running two public service announcements—one from CDC advising those aged 65 years and older to be vaccinated, and one from the California Department of Health Services advising those aged 50 years and older to be vaccinated. They emphasized that these mixed messages created confusion. In addition, some individuals seeking influenza vaccine in other regions could have found themselves in a communication loop that provided no answers. For example, CDC advised people seeking influenza vaccine to contact their local public health department; in some cases, however, individuals calling the local public health department were told to call their primary care provider, and when they called their primary care provider, they were told to call their local public health department. This inconsistency in information from authoritative sources led to confusion and possibly to high-risk individuals’ giving up and not receiving an influenza vaccination. Modifying messages to respond to changing circumstances can prevent unintended consequences. 
Beginning in October, CDC communicated a message asking individuals who were not in a high-risk group or another priority group to forgo or defer vaccination, or to step aside, so that those in priority groups could have access to available vaccine. According to CDC, this message resulted in an estimated 17.5 million individuals who specifically deferred vaccination to save vaccine for those in the priority groups. Public health officials we interviewed, however, lamented the fact that this nationwide effort did not also include a message to individuals who did step aside to check back with their providers or to seek an influenza vaccination later in the season. State and local officials suggested that CDC should have had a message to step aside until a certain estimated date, when more vaccine would be available and demand from individuals in the narrowed CDC priority groups would ease. These officials noted that many people in priority groups, including those aged 65 years and older who should have been vaccinated, stepped aside. These officials also said that they were concerned about other individuals, particularly those aged 50–64 years, who were not vaccinated during the moderate 2004–05 influenza season and, as a result, might think vaccination was not important enough to seek in future seasons. Using diverse media helps reach diverse audiences. During the 2004–05 influenza season, public health officials reported the importance of using a variety of communication methods to help ensure that messages reached as many individuals as possible. For example, officials from the health department in Seattle–King County, Washington, reported that it was important to have a telephone hotline as well as information posted on a Web site, because some seniors calling Seattle–King County’s hotline reported that they did not have access to the Internet.
Further, public health officials in Miami–Dade County in Florida said that bilingual radio advertisements promoting influenza vaccine for those in priority groups helped increase the effectiveness of local efforts to raise vaccination rates. Education is important in alerting providers and the public about prevention alternatives. Educating health care providers and the public about all available influenza vaccines and forms of prevention may increase the number of vaccinated individuals and also reduce the spread of influenza. Experience with the nasal spray vaccine in 2004–05 illustrates the importance of education. Approximately 3 million doses of nasal spray vaccine were ultimately available during the season for vaccinating healthy individuals. According to public health officials we interviewed, however, some individuals were reluctant to use this vaccine because they feared that the vaccine was too new and untested or that the live virus in the nasal spray could be transmitted to others. State health officials in Maine, for example, reported that the state purchased about 1,500 doses of the nasal spray vaccine for their emergency medical service personnel and health care workers, yet only 500 doses were administered. Further, public health officials we interviewed said that education about all available forms of prevention, including the use of antiviral medications and good hygiene practices, can help reduce the spread of influenza. According to CDC officials, as part of preparations for the 2005–06 influenza season, the agency developed a draft communication plan—separate from the interim guidelines issued to states—from lessons learned, which includes messages for responding to the fluctuations in supply and demand anticipated throughout the season. As of August 2005, CDC officials said that this plan will remain in draft form because tactics will be changed and updated as circumstances change.
Aided by a relatively moderate influenza season, efforts to mitigate the sudden and unexpected shortage of influenza vaccine for the 2004–05 season were largely successful, although the season was not without problems. Lacking a preseason plan to address a significant shortfall after the beginning of the traditional fall vaccination period, the federal government reacted to the shortage and its aftereffects as they unfolded throughout the season. This lack of preseason planning created confusion and delays during the optimal fall influenza vaccination window, when state and local public health agencies and health care providers most needed vaccine to protect individuals at high risk of severe complications. Conversely, federal efforts to boost supply late in the season had little effect, because demand fell off sharply in December and January, and vaccine became available too late. In some instances, uncoordinated communication from federal to state and local jurisdictions, and to providers and the general public, contributed to confusion, frustration, and individuals’ failure to seek or receive an influenza vaccination. Drawing from experiences during the 2004–05 shortage, CDC has taken a number of steps, including issuing interim guidelines in August 2005, to assist in responding to possible future shortages. It is too early, however, to assess the effectiveness of these efforts in coordinating actions of federal, state, and local health agencies and others who play a part in the annual influenza vaccination process. In commenting on a draft of this report, HHS noted that the draft summarized in detail the activities undertaken by CDC and its public- and private-sector partners to deal with the influenza vaccine shortage of 2004–05, and the agency concurred with our finding that contingency planning will greatly improve response efforts. The agency also provided details on other actions, such as approval of additional influenza vaccines for the U.S. 
market, that were under way. HHS also agreed that adjustments to vaccination recommendations and vaccine supply ideally should occur earlier in the influenza season, but such adjustments cannot always be implemented in a shortage year. HHS’s written comments appear in appendix I. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of HHS, the Directors of CDC and the National Vaccine Program Office, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix II. In addition to the contact named above, Kim Yamane, Assistant Director; George Bogart; Ellen W. Chu; Nicholas Larson; Jennifer Major; Terry Saiki; and Stan Stenersen made key contributions to this report. Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005. Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005. Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004. Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004. Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004.
Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003. Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003. Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001. Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001. Flu Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000.
Federal officials, for example, quickly revised vaccination recommendations to target available vaccine to high-risk individuals and to other priority groups. Additional actions were aimed to distribute vaccine expeditiously and to communicate with providers and the public as events unfolded and vaccine supplies changed. Beginning in mid-December, health officials took steps to distribute additional vaccine, broadening recommendations on who should be vaccinated. Although these actions helped achieve vaccination rates approaching past levels for certain priority groups, such as those aged 65 years and older, several lessons emerged, including some that could help with future shortages. First, unless planning for problems is already in place, action is delayed. CDC's lack of a contingency plan contributed to delays and uncertainty about how to ensure that high-risk individuals had access to vaccine. Second, when actions occur late in the influenza season, they are likely to have little effect. Third, effective response requires communication that is both clear and consistent. CDC has taken a number of steps, including issuing interim guidelines in August 2005, to respond to possible future shortages. It is too early, however, to assess the effectiveness of these efforts in coordinating actions of federal, state, and local health agencies and others. In commenting on a draft of this report, HHS concurred with GAO's finding that contingency planning would improve response efforts, and the agency indicated that additional preparations were under way. |
I-NET, Inc. is a high technology corporation that provides federal agencies with computer systems and support services. For fiscal year 1992, it was the third largest recipient of 8(a) contract awards, which totaled over $65 million. During its nearly 10-year (Sept. 20, 1984, to June 16, 1994) program participation, I-NET obtained 145 8(a) contracts totaling at least $508 million. At least 126 of the 145 contracts were awarded noncompetitively. TAMSCO is a high technology corporation that provides computer systems and support services to federal agencies and large Department of Defense contractors. For fiscal year 1992, it was the ninth largest recipient of 8(a) contract awards, totaling over $30 million. During its program participation from May 14, 1984, until September 18, 1993, TAMSCO obtained 108 8(a) contracts totaling at least $356 million. At least 82 of the 108 contracts were awarded noncompetitively. In March and April 1995, as a part of our continuing work on the 8(a) program, we testified that the program has continued to experience problems in achieving its objectives. As the value and number of 8(a) contracts continue to grow, the distribution of those contracts remains concentrated among a very small percentage of participating 8(a) firms, while a large percentage get no awards at all. This is a long-standing problem. For example, in fiscal year 1990, 50 firms representing fewer than 2 percent of all program participants obtained about 40 percent, or $1.5 billion, of the total $4 billion awarded. Of additional concern is that, of the approximately 8,300 8(a) contracts awarded in fiscal 1990 and 1991 combined, only 67 contracts were awarded competitively. In fiscal year 1994, the top 50 firms represented 1 percent of the program participants and obtained 25 percent, or $1.1 billion, of the $4.37 billion awarded, while 56 percent of the firms got no awards. In fiscal year 1994, $383 million in contracts were awarded competitively.
The eligibility and participation files for the top 25 8(a) contract award recipients for fiscal year 1992, from which we selected I-NET and TAMSCO, showed that approximately $816 million, or about 22 percent of the total 8(a) contract dollars awarded that year, went to the top 25 firms. These firms had obtained, as of May 1995, a total of $4.9 billion in 8(a) contracts. Of these firms, three were Black-owned; eight were Hispanic-owned; six were Asian-owned; and five were Native American-owned. SBA had initially recommended that 15 of these 25 firms not be accepted into the program because the applicants did not meet eligibility standards for one or more of the following reasons: The ownership or control of the firms resided in individuals other than those who were applying (8 firms). The owners were not economically disadvantaged (2 firms). The firm was acting as a broker/dealer in violation of the Walsh-Healey Act (1 firm). The firms lacked the financial capability to perform on the contracts they wished to bid on (5 firms). SBA could not provide adequate contract support for the firms to succeed (3 firms). These recommendations were overruled, in some cases by high-level SBA officials, despite the fact that some of the firms had not been recommended for acceptance up to three times previously for the same reasons. As of May 1995, 18 of these 25 firms had exited from the program; yet at least 17 are still performing on contracts awarded while they were in the program. According to SBA, the total dollar value of contracts awarded to the firms initially not recommended for participation in the program is at least $2.9 billion. An SBA Office of Inspector General (OIG) audit report (Sept. 1994) also questioned the continued eligibility of large 8(a) firms in the program and identified some of the same causes.
The report cited findings wherein individuals in the program had overcome their economic disadvantage but remained in the program by understating their net worth; SBA officials had miscalculated the net worth; high personal income was also not considered in the evaluation of net worth; and individuals remained in the program because either the firm’s equity, the owner’s personal residence, and/or the spouse’s net worth were not considered factors in determining the owners’ net worth. Consequently, individuals could remove equity from the firms and use it to purchase expensive personal residences exempt from net worth evaluations. According to SBA 8(a) regulations, negative control is the lack of power by a program participant to control a firm’s operations. For the 8(a) program, SBA regulations state that a program applicant must unconditionally own at least 51 percent of the firm and control its operations. Control is further defined as a condition that would not allow a noneligible person to benefit from the program or subjugate the control of the firm’s operations. SBA had concerns about negative control issues at both I-NET and TAMSCO, but it ultimately admitted both firms to the 8(a) program. SBA officials recommended denying I-NET acceptance into the program in four separate instances, but other SBA officials overruled these recommendations. SBA officials had determined that I-NET’s owner and president, Mrs. Kavelle Bajaj, lacked the technical and managerial experience to run a high technology computer firm. They also determined that, rather than Mrs. Bajaj, Mr. Bajaj, a recognized expert in the field, would actually control and run the firm’s operations. A former I-NET Vice President for Marketing and Operations told us that Mrs. Bajaj lacked the technical and managerial skills needed to run a computer company and that he was hired by Mr. Bajaj in January 1985 to help start and run the firm and to “teach” Mrs. Bajaj how to run a business.
For this, Mrs. Bajaj gave the former vice president 24.5 percent of the company. Shortly after this individual left the company in 1988, he was replaced by Mr. Bajaj, who was appointed Executive Vice President. Mr. Bajaj formally became I-NET’s president after I-NET exited from the 8(a) program in 1994. On the résumé he submitted to SBA-OIG during its 1992 audit, Mr. Bajaj stated that he was “responsible for day-to-day operations” of I-NET. Mrs. Bajaj was adamant with us that she unconditionally owned and controlled the firm. However, Mrs. Bajaj provided no explanation when we asked her how she maintained control over I-NET while, at the same time, her husband represented that he had day-to-day responsibility for I-NET operations. Further, a senior SBA official told us that the memorandum prepared by an SBA regional staff member recommending acceptance into the program used “circular reasoning” in overruling the District Office’s objections to this firm. Other SBA officials who relied on the first official’s analysis agreed that it was “double talk” that inadequately addressed the reason to overrule the recommended refusal. One stated that I-NET’s admission to the 8(a) program was “questionable.” Nevertheless, these officials stood by their decision to recommend accepting I-NET. From the outset, SBA questioned the control that TAMSCO’s nondisadvantaged (Caucasian) owner exercised over the disadvantaged (Hispanic) owner due to the structure of the board of directors, the owners’ prior relationship, and their compensation. However, SBA allowed TAMSCO to participate fully in the 8(a) program. SBA identified the ownership and negative control issue at TAMSCO during the application process and twice recommended that the firm’s application be denied. SBA determined that the firm was owned by two persons, with the Hispanic owner having 51 percent and the Caucasian owner, 49 percent.
SBA compared their résumés and other documentation in the 8(a) application and found that the Caucasian owner had previously held supervisory positions over the Hispanic owner and that the two-man board of directors, on which both served, allowed the Caucasian owner to have negative control over the Hispanic owner. SBA officials concluded that the firm should be rejected because the Caucasian owner would improperly benefit from the program. We also found that the personal financial statements and other documentation showed that the Caucasian owner had a higher salary than the Hispanic owner and that the firm was located at the Caucasian owner’s residence. A former official of the firm told us that the two owners were “co-dependent” and functioned as equals. TAMSCO’s president (the Hispanic owner) told us that (1) despite his previous relationship with the Caucasian owner, ownership was structured so that TAMSCO would be eligible for Small and Disadvantaged Business contracts and (2) it was agreed that he would maintain total control over the firm’s operations. The SBA official who overturned the two recommendations for denial had no answers or explanations as to why he had accepted TAMSCO into the 8(a) program over the prior objections of SBA officials concerning negative control. He also denied meeting or discussing the matter with TAMSCO’s owners. However, the TAMSCO owners told us that they had had substantive discussions and meetings with him on the issue of negative control. I-NET provided false and misleading information to SBA regarding its equity ownership in the firm, the owner’s educational credentials, and the owner’s citizenship status. Despite these misrepresentations, SBA did not terminate I-NET from the program or suspend its contracts. I-NET submitted false statements to SBA about its equity ownership. 
Documents, interviews, and a federal court case revealed that I-NET had entered into partnership agreements with two individuals in January 1985 for a total of 49-percent ownership interest (each with a 24.5-percent share) without disclosing these transactions to SBA, as required by SBA regulations. One of the 24.5-percent equity owners also owned another computer services company. At the time, SBA regulations prohibited a business concern in a related field from owning any equity in an 8(a) firm. Although I-NET repurchased this ownership interest within a year of its issuance, Mrs. Bajaj never informed SBA about this transaction. Mrs. Bajaj submitted a false statement about I-NET’s ownership status to SBA in January 1986, when I-NET notified SBA that 49 percent of the company’s stock was unissued. However, 24.5 percent was still outstanding with the one remaining partner. Believing that SBA would approve only a 15-percent transfer of ownership, Mrs. Bajaj attempted to reduce the remaining partner’s interest to 15 percent and privately negotiate away the remaining difference. In 1988, Mrs. Bajaj submitted a second document to SBA, stating that 49 percent of the stock was “unissued,” despite the outstanding 24.5-percent equity ownership by the remaining partner. She told us that she considered the stock unissued until a dispute with this partner over his ownership was resolved. In August 1994, 2 weeks after agreeing to withdraw from the program, I-NET notified SBA that it intended to sell 20 to 25 percent of the firm’s stock through a private placement offered through a large investment company. When SBA officials learned of the impending sale, SBA attorneys recommended against approving it because its terms would have relinquished control of the firm to the outside private investors. The terms of the transactions, according to the SBA attorneys who reviewed the documents, enabled the investors to have negative control over the firm’s operations. 
SBA has not issued a decision, but I-NET completed the sale without a waiver from SBA, thus potentially jeopardizing its current 8(a) program contracts. The SBA Associate Administrator for Minority and Enterprise Development told us that the matter was being handled immediately; but, as of August 14, 1995, no final decision had been rendered. Mrs. Bajaj provided false information about her educational credentials, which SBA relied upon, in part, for admittance to the 8(a) program. She certified on the résumé accompanying her 8(a) application to SBA in January 1983 that she had obtained an AA degree in Computer Science and Technology from Montgomery College in Rockville, Maryland. Transcripts from Montgomery College show that she never earned the stated degree. SBA denied I-NET’s application for the 8(a) program in October 1983 because of lack of technical and managerial experience. Mrs. Bajaj again submitted a résumé with the same false information in a reconsideration appeal application later that month. According to a former I-NET senior executive, Mrs. Bajaj attached a résumé that contained the same false information to contract proposals submitted to agencies. When the SBA-OIG audited I-NET in 1992, the company provided the OIG another résumé claiming that Mrs. Bajaj held the same nonexistent AA degree. Mrs. Bajaj admitted to us not having the degree and stated that she “naively” thought that the credits she had earned to obtain her Bachelor of Science degree in Home Economics from the University of Delhi, India, counted toward an AA degree in computer science and technology. SBA documents show that SBA relied in part on Mrs. Bajaj’s false information about the AA degree at the time when it was certifying I-NET for program participation. In an October 1993 document, the SBA Regional Counsel stated that the “original recommendation for I-NET’s approval was based, at least in part, on false information submitted by the applicant regarding Mrs.
Bajaj’s degree.” Although SBA officials acknowledged this fact in October 1993, I-NET remained in the program for another 9 months and obtained additional contract awards totaling at least $13.5 million. When asked about this document, the Regional Counsel stated that the falsification was not itself sufficient to terminate the firm, despite SBA regulations stating that providing false information to SBA is grounds for termination from the program. Mrs. Bajaj also misrepresented her citizenship on her first application on January 11, 1983. She said that she was a U.S. citizen, but she did not obtain her citizenship until May 13, 1983. (U.S. citizenship is a requirement for acceptance into the 8(a) program.) She told us that she thought she would be a citizen by the time the application was processed. She also said that although SBA had told her that she need not be a citizen at the time of application, she was concerned that her pending citizenship status would have held up her 8(a) application. I-NET was accepted into the program on September 20, 1984. SBA did not recognize that I-NET had provided misleading financial statements concerning its total revenues. Furthermore, I-NET misstated its financial condition as being at risk in efforts to continue 8(a) program contracts. I-NET submitted financial statements to SBA that misrepresented its size by excluding certain revenues from the total sales, which allowed it to meet size standards for contracts in 1991 and 1992. I-NET explained the exclusion of this revenue in footnotes to its audited 1988 through 1990 financial statements, claiming that it was entitled to exclude these revenues because I-NET had earned no income on the revenues. SBA did not recognize or react to the information in the 1988 through 1990 financial statement footnotes until 1992. These exclusions permitted I-NET to obtain at least 11 contracts for which it was not eligible.
However, I-NET included these revenues in its yearly total sales figures in submissions to an outside investment firm when it was seeking private outside investment. Our review of I-NET’s 1989 and 1990 corporate tax returns, submitted to SBA, shows that I-NET’s gross receipts as reported to the Internal Revenue Service were also substantially greater than those reported to SBA. In 1992, SBA found that the excluded revenue should have been counted for 8(a) size purposes. Therefore, in early 1993, SBA considered terminating certain contracts on the grounds that I-NET was not eligible because it had exceeded its size standards. In response, I-NET submitted an Impact Analysis Statement to SBA in April 1993. The statement said, in part, “. . . (t)he banking industry continues to label I-NET and Kavelle in a negative way . . . and maintaining adequate capital and credit are a constant challenge which leaves the company at risk.” However, in reviewing the matter and determining if I-NET met early graduation criteria, SBA found that I-NET had a $25-million line of credit with its bank, had obtained loans and financings exceeding $2 million, and had sales approaching $100 million per year. Based on its review, SBA did not find that I-NET was at risk. When asked about this apparent contradiction, Mrs. Bajaj told us that it was her view that $25 million was not sufficient credit. During this same time period, however, I-NET did not portray itself as a company at risk when it sought outside investors. A written private placement memorandum about I-NET states that as of June 1993, I-NET had a backlog of over $580 million in contracts and projected revenues through 1997 of about $1.3 billion. Subsequent to our interview of Mrs. Bajaj, I-NET provided us a written response to the risk issue. It stated that, at the time the memorandum was written, I-NET “. . . 
had severe cash flow problems and was having difficulty securing credit.” Furthermore, in December 1993, SBA determined that I-NET again had claimed erroneously that it lacked access to credit when it was appealing SBA’s October 1993 proposed early graduation action. In its review, SBA also determined that I-NET appeared to be misleading SBA by using inappropriate time periods to calculate earnings. Although SBA officials responsible for monitoring I-NET’s progress had become aware that I-NET had grown too large for continued program participation, SBA allowed the company to remain enrolled for almost 2 additional years. During this time, I-NET continued to obtain large contract awards. In fact, 6 days prior to I-NET’s initially being recommended for early graduation in September 1992, it was awarded a $134-million contract. The SBA official who approved the contract award was also responsible for initially recommending I-NET’s early graduation. When we interviewed him, he explained that, under SBA regulations, until a firm is officially out of the program, it can still obtain contract awards for which it is eligible. Although he wanted I-NET out of the program, he felt he could not deny contract awards until I-NET had either graduated or been terminated. However, SBA regulations and a 1982 federal court decision, in conjunction with a Comptroller General decision on the same issue, concluded differently. Both the court and the Comptroller General determined that an 8(a) firm that has exceeded size limitations must have its 8(a) contracts suspended. The regulations also state that contracts can be suspended pending a termination action by SBA. When asked about this contradiction, responsible SBA officials responded by stating that SBA lacked the proof required to terminate I-NET, despite regulations regarding actionable offenses for termination, which include providing false information to SBA—something that SBA concedes occurred. 
In January 1993, the SBA-OIG provided a draft audit report to the SBA office responsible for I-NET, recommending that no further contracts be awarded to I-NET because it had exceeded its size standards and had provided incorrect information to SBA for its annual size-standard determinations. However, until I-NET left the program in June 1994, SBA awarded I-NET additional contracts totaling at least $62 million. In 1993, the U.S. Coast Guard directed a noncompetitive IDIQ contract with a maximum value of $14 million to TAMSCO. During the preaward phase of the contract, Coast Guard contracting officials, who told us that it was always their intention to award the contract to TAMSCO, met with TAMSCO representatives and discussed the contract, competition thresholds, and Standard Industrial Classification (SIC) codes. The Coast Guard changed the original SIC code so that TAMSCO would be eligible for the award; used the IDIQ contracting option; and lowered the labor hours to avoid competition. Further, one Coast Guard official’s notes referred to this IDIQ contract to TAMSCO as a “graduation present” from the 8(a) program. Coast Guard officials changed the SIC code assignment and minimum contract value. Following these changes, TAMSCO was awarded a large noncompetitive IDIQ contract 1 day before its 8(a) program term was to expire in September 1993. Had the Coast Guard contracting officer’s originally assigned SIC code been used, TAMSCO would not have been eligible for the contract because the company had exceeded the size standard for the originally assigned SIC code. Based on notes that the Contracting Officer’s Technical Representative (COTR) wrote during meetings between Coast Guard officials and TAMSCO, it appears that the Coast Guard officials and TAMSCO had concerns about the competition thresholds. In essence, we believe that they wished to avoid the $3-million threshold required for competitive 8(a) service contracts.
By lowering the labor hours, the Coast Guard was able to award a noncompetitive IDIQ contract. Our analysis of labor costs determined that Coast Guard officials lowered the total number of labor hours in the contract by 46 percent from what was specified in the contract solicitation. Thus, the minimum contract value dropped below the $3-million competition threshold, from $4.6 million to $2.1 million. We interviewed a Coast Guard officer involved in the contract award who also developed the original minimum contract value. When we asked him about a Coast Guard finding that if fully loaded labor rates had been used in the contract, the minimum value of the contract would have exceeded competitive thresholds, he had no answer. However, he stated that the Coast Guard officials had done everything possible to get TAMSCO the contract, including changing the SIC codes and using the IDIQ contracting option. The COTR also told us that the SIC code was intentionally changed to meet TAMSCO’s eligibility and that the Coast Guard viewed competition of contract awards as a hindrance to furthering the mission. A draft of an internal Coast Guard memorandum, written to justify the contract award to TAMSCO, sheds light on Coast Guard attitudes about the use of competition and 8(a) sole source contracts. The COTR sent the memorandum—in electronic mail (e-mail) format—to another Coast Guard official for comment. The commenting Coast Guard official responded to the COTR’s memorandum—also by e-mail—by interspersing his remarks in all capital letters. (See fig. 1.) According to two former TAMSCO officials involved in the award, the COTR had provided them with a later draft of the internal memorandum to review before he submitted it to higher-level Coast Guard officials.
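Stepping back to the contract-value figures above, the threshold arithmetic can be checked directly. All three dollar figures come from the text; note that the 46 percent cut was in labor hours, so it need not match the percentage drop in dollar value, which also depends on the labor rates applied:

```python
# Sanity check of the minimum-contract-value figures cited in the text.
original_min_musd = 4.6           # minimum value in the contract solicitation
revised_min_musd = 2.1            # minimum value after labor hours were cut
competition_threshold_musd = 3.0  # threshold for competitive 8(a) service contracts

# The dollar-value reduction implied by the two minimums.
value_drop = (original_min_musd - revised_min_musd) / original_min_musd
print(f"Minimum value reduced by {value_drop:.0%}")

# The revised minimum falls below the threshold while the original exceeded it.
print(revised_min_musd < competition_threshold_musd <= original_min_musd)  # True
```

The check confirms what the text describes: the original minimum would have required competition, and the revised minimum fell below the $3-million line.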
One of the TAMSCO officials told us that providing TAMSCO the memorandum to review was inappropriate; the other felt uncomfortable with receiving the document because the Coast Guard was always careful not to release internal documents. According to these two former TAMSCO officials and TAMSCO’s president, while they did not think it improper for TAMSCO to provide information on the 8(a) program and other contracting procedures to the Coast Guard, they agreed that the Coast Guard should have been using its own contracting officials to obtain the information. Notes that the COTR took during Coast Guard/TAMSCO discussions also referred to suggestions that the contract be awarded to TAMSCO as a “graduation present” before the end of TAMSCO’s 8(a) program participation. For example, one note stated, in part, “IDIQ: Grad Pt. -eligible until grad from program Sept 18, ’93.” In other words, TAMSCO could get a sole source IDIQ contract as a graduation present until its graduation date of September 18, 1993. (See fig. 2.) In addition to the Coast Guard contract, TAMSCO obtained at least 22 other 8(a) awards within 2 weeks of its “graduation” from the program totaling at least $63 million. Thirteen of the awards were IDIQ contracts from a number of government agencies, including the Coast Guard award. We began our investigation by reviewing the application, eligibility, and participation files for the top 25 8(a) contract award recipients for fiscal year 1992, as compiled in our 1993 report. These records were located in 10 SBA District Offices nationwide. The files for two firms were unavailable for review. A third file did not contain eligibility documents. We looked for indicators of potential regulatory violations and criminal misconduct. We initially selected four of the firms for further investigation. 
However, the records we compiled for one firm were destroyed in the Oklahoma City bombing tragedy on April 19, 1995, and our investigation of another firm was not complete at the time of this publication. We then narrowed our investigation to two firms—I-NET, Inc. of Bethesda, Maryland, and Technical and Management Services Corporation (TAMSCO) of Calverton, Maryland. We interviewed officials and reviewed documents from the SBA, Office of Inspector General; various SBA district and regional offices; SBA’s Central Office; U.S. Department of Transportation, Office of Inspector General; U.S. Coast Guard; Resolution Trust Corporation, Office of Inspector General; Defense Contract Audit Agency; Department of Justice; and the Federal Bureau of Investigation. We also interviewed current and former employees of the firms, subcontractors, representatives of financial institutions, and others. As requested, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Administrator of SBA and to others upon request. If you have any questions concerning this report, please call me at (202) 512-6722 or Robert H. Hast, Assistant Director for Investigations, New York Regional Office, at (212) 264-0982. Major contributors to this report are listed in appendix III. Section 8(a) of the Small Business Act, as amended, established the Minority Small Business and Capital Ownership Development Program, or 8(a) program, to promote the development of small businesses owned by socially and economically disadvantaged individuals so that they could develop into viable competitors in the commercial marketplace. To be eligible for the program, a small business must be 51 percent unconditionally owned and controlled by one or more socially and economically disadvantaged individuals. 
The company must also meet the small business size standards established by SBA for the firm’s industry as defined in the classification categories prescribed by the Standard Industrial Classification (SIC) Manual. SBA approves applicable SIC codes for participating firms, and a firm may have one or more SIC codes assigned to it by SBA. To be considered a small business and remain eligible for the program, participating firms must not have outgrown all their SBA-approved SIC codes. Size standards for each SIC code are generally defined by the firm’s number of employees or its average annual gross sales. Under the program, SBA acts as a prime contractor, entering into contracts with other federal agencies and then subcontracting work to firms in the 8(a) program. Firms in the program are also eligible for financial, technical, and management assistance from SBA to aid their development. Participating firms can stay in the program for up to 9 years. The Small Business Act, as amended, and federal regulations define “socially disadvantaged” as those persons who have been subjected to racial, ethnic, or cultural bias because of their identities as members of groups, without regard to their individual qualities. Certain racial and ethnic groups such as Black Americans, Hispanic Americans, Subcontinental Asian Americans, and Native Americans are presumed to be socially disadvantaged. However, individuals in groups not cited in the act, who can demonstrate that they are socially disadvantaged, may also be eligible. SBA regulations define “economically disadvantaged” as socially disadvantaged individuals who are unable to compete in the free enterprise system because their opportunities to obtain credit and capital have been more limited than those of others in similar businesses. Further, program applicants must demonstrate a personal net worth that does not exceed certain limits to meet and maintain the economic disadvantage criteria.
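The two mechanical screens described above — 51 percent unconditional disadvantaged ownership, and at least one SBA-approved SIC code whose size standard the firm has not outgrown — can be sketched as a simple check. This is a minimal illustration only: the ownership and size rules come from the text, but the specific SIC codes and limits below are hypothetical placeholders, not SBA's actual size-standard tables, and the sketch deliberately omits the control and social/economic disadvantage determinations that drove the cases in this report.

```python
# Hypothetical size-standard table for illustration only; SBA's real
# standards are published per SIC code and differ from these values.
HYPOTHETICAL_SIZE_STANDARDS = {
    "7379": {"max_avg_annual_receipts_musd": 18.0},  # receipts-based (placeholder)
    "3661": {"max_employees": 750},                  # employee-based (placeholder)
}

def meets_basic_8a_screens(disadvantaged_ownership_pct, sic_code,
                           avg_annual_receipts_musd=None, employees=None):
    """Return True only if the firm passes the two mechanical screens:
    51% unconditional disadvantaged ownership and a size standard it
    has not outgrown for the given (approved) SIC code."""
    if disadvantaged_ownership_pct < 51:
        return False
    standard = HYPOTHETICAL_SIZE_STANDARDS.get(sic_code)
    if standard is None:
        return False  # no SBA-approved SIC code on file
    if "max_avg_annual_receipts_musd" in standard:
        return (avg_annual_receipts_musd is not None and
                avg_annual_receipts_musd <= standard["max_avg_annual_receipts_musd"])
    return employees is not None and employees <= standard["max_employees"]

# A 51/49 split like TAMSCO's passes the ownership screen on paper, which is
# why SBA's concerns turned on *control*, not on the ownership percentage.
print(meets_basic_8a_screens(51, "7379", avg_annual_receipts_musd=12.0))  # True
```

The sketch also illustrates why changing a contract's SIC code matters: a firm that has outgrown the standard for one code may still fit under another, which is the mechanism the Coast Guard used in the TAMSCO award described earlier.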
Each 8(a) firm under SBA’s regulations is subject to a program term of 9 years. However, SBA may also, under its regulations, “graduate” an 8(a) firm prior to the expiration of its 9-year program term if that firm substantially achieves the target objectives and goals set forth in its business plan. To date, according to SBA, no 8(a) firm has graduated.

Colsa Inc.

Barry L. Shillito, Senior Attorney
Leslie Krasner, Attorney Adviser

Pursuant to a congressional request, GAO reviewed the Small Business Administration's (SBA) 8(a) program, focusing on whether: (1) ineligible 8(a) firms have received contracts through their improper participation in the program; (2) 8(a) firms have misrepresented themselves to enter and stay in the program; (3) firms exceeding the size standard have inappropriately received 8(a) awards; (4) SBA has allowed ineligible firms to remain in the program after they exceeded the size limitations; and (5) federal contracting authorities have improperly used indefinite delivery, indefinite quantity (IDIQ) contracts to avoid competition.
GAO found that: (1) the two firms studied were initially recommended for nonacceptance into the 8(a) program because of eligibility questions about who actually controlled the firms; (2) SBA justification for accepting the firms was questionable, since the questions about the firms' ownership were never fully answered; (3) one firm's owner misrepresented her personal qualifications, her equity in the firm, and ownership changes, but SBA took no action when it found out about the misrepresentations; (4) the firm received millions of dollars worth of 8(a) contracts after it had grown too large to participate in the program; (5) although the firm hid its size by excluding items from its financial statements, understating its total revenue, and representing itself as a company at financial risk, it had considerable access to credit; (6) SBA allowed the firm to remain in the program and receive new 8(a) contracts even after it had determined that the firm had grown too large for continued program participation; (7) the Coast Guard awarded a sole-source IDIQ contract to the second firm by changing the contract's classification code to one for which the firm was eligible and altering the contract's original minimum value below the minimum threshold for mandatory 8(a) competitive procurements; and (8) the Coast Guard believed that competitive 8(a) procurements hindered its mission and viewed the contract as a graduation present to the firm. |
DOD Instruction 5100.73, Major DOD Headquarters Activities, establishes a system to identify and manage the number and size of major DOD headquarters activities. The instruction also provides an approved list of major DOD headquarters activities, including the Offices of the Secretary of the Army and Army Staff; the Office of the Secretary of the Navy and Office of the Chief of Naval Operations; Headquarters, Marine Corps; and the Offices of the Secretary of the Air Force and Air Staff. All personnel working within these headquarters organizations are considered to be performing major headquarters activities functions. According to the instruction, other headquarters organizations include portions of the defense agencies, DOD field activities, and the combatant commands, along with their subordinate unified commands and respective service component commands. For example, according to DOD Instruction 5100.73, only personnel performing major headquarters activities functions in the Defense Information Systems Agency and Air Combat Command’s Intelligence Squadron would be considered headquarters, while personnel performing other functions would be excluded. Several DOD organizations have responsibilities related to major DOD headquarters activities, including those summarized below. The Office of the Deputy Chief Management Officer (ODCMO) is responsible for ensuring that DOD components are accurately identifying and accounting for major DOD headquarters activities, according to criteria established in DOD Instruction 5100.73. In addition, the Deputy Chief Management Officer has primary responsibility set forth under department guidance related to improving the efficiency and effectiveness of operations across DOD’s business functions, and is authorized by the Chief Management Officer to act as the Principal Staff Assistant to issue policy and guidance regarding matters relating to the management and improvement of DOD business operations. 
This has included responsibilities related to identifying and monitoring implementation of cost savings opportunities and efficiencies across DOD’s business areas. The Under Secretary of Defense for Personnel and Readiness, according to DOD Instruction 5100.73, is responsible for reviewing and issuing guidance over, and consolidating changes in, manpower authorizations and personnel levels for major DOD headquarters activities, among other things. In addition to these responsibilities, the Under Secretary of Defense for Personnel and Readiness also compiles the annual Defense Manpower Requirements Report, which provides DOD’s manpower requirements, to include manpower assigned to major headquarters activities, as reflected in the President’s budget request for the current fiscal year. The Under Secretary of Defense for Personnel and Readiness is also responsible for developing an annual guide for DOD components to use when compiling their IGCA Inventory submissions in response to statutory and regulatory reporting requirements. In addition, the Under Secretary of Defense for Personnel and Readiness shares responsibility—with the Under Secretary of Defense for Acquisition, Technology and Logistics and the Office of the Under Secretary of Defense (Comptroller)—for issuing guidance for compiling and reviewing the Inventory of Contracted Services. The Under Secretary of Defense for Personnel and Readiness in particular compiles the inventories prepared by the components. The heads of DOD components, including the Secretaries of the military departments, the Chairman of the Joint Chiefs of Staff, and the heads of other DOD components have responsibility, according to this instruction, for maintaining a management information system that identifies the number of personnel and total operating costs of major DOD headquarters activities, and reporting on these data to the Under Secretary of Defense (Comptroller). 
Since 2010, DOD has recognized that it must reduce the cost of doing business, including reducing the rate of growth in personnel costs and finding further efficiencies in overhead and headquarters, in its business practices, and in other support activities. Therefore, the department has pursued headquarters-related reduction efforts in recent years to realize cost savings. See appendix IV for additional details on these efforts. Since 2014, and in part to respond to congressional direction, DOD has undertaken initiatives intended to improve the efficiency of its business processes, which include headquarters organizations, and identify related cost savings, but it is unclear to what extent these initiatives will help the department achieve the savings it has identified. In May 2015, DOD concluded its Core Business Process Review, which was intended to apply lessons learned and information technology approaches from the commercial sector to the department’s six core business processes— management of human resources, healthcare, financial flow, acquisition and procurement, logistics and supply, and real property—in order to save money and resources while improving mission performance. Through this review, ODCMO identified $62 billion to $84 billion in potential cumulative savings opportunities across the six business processes for fiscal years 2016 through 2020. The review identified that these potential savings opportunities could be achieved through civilian personnel attrition and retirements to occur without replacements over the next 5 years, matching labor productivity in comparable industries or sectors, and improving core processes such as rationalizing organizational structures to reduce excessive layers, optimizing contracts, and using information technology to eliminate or reduce manual processes. According to ODCMO officials, DOD ultimately concluded that these potential savings opportunities could not entirely be achieved through these means. 
Nevertheless, ODCMO officials noted that DOD is already engaging in initiatives that, in effect, address the opportunities highlighted by the Core Business Process Review. The four department-led initiatives we reviewed that include headquarters organizations are concurrent and have varied scopes. Two of the initiatives are focused on OSD and its related organizations—one consists of a series of business process and systems reviews, and the other is a review focused on reducing the number of layers in OSD. A third initiative is focused specifically on contracted services requirements for DOD organizations outside the military departments, known as the Fourth Estate. Finally, the fourth initiative—the review of the organization and responsibilities of DOD—is focused on updating or adjusting organizational relationships and authorities across the entire department, with a final report possibly to be issued later in 2016. The four initiatives were not completed, or their results were not available, in time for us to assess their effect, and therefore it is unclear to what extent they will contribute toward the savings identified by the Core Business Process Review. The initiatives are described in more detail below. In August 2014, DOD announced Business Process and Systems Reviews (BPSR), which, according to BPSR implementation guidance, are intended to review business processes and the supporting information technology systems within selected organizations in OSD and associated defense agencies and DOD field activities. The purpose of these BPSRs is to provide senior officials with information to clarify whether their organizations are aimed at departmental outcomes, to identify resources allocated to outcomes and any obstacles to achieving those outcomes, and to identify activities that might be improved or eliminated. As of April 2016, DOD had completed BPSRs for five of nine organizations.
In some cases, organizations have taken steps to implement potential improvement and savings opportunities identified by the BPSRs. For example, as a result of a review of the ODCMO, the Deputy Secretary of Defense approved the implementation of a single service provider for the Pentagon’s information technology operations in May 2015. In other cases, it is unclear whether organizations have begun taking steps to implement the opportunities identified by the BPSRs. For example, the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment identified a potential opportunity to reduce military construction costs by up to 3 percent through revisions to antiterrorism standards for DOD-owned buildings, but noted that this potential opportunity must first be subject to thorough analysis to fully assess its validity and return on investment. The department is currently working to complete BPSRs for four other organizations. According to ODCMO officials, DOD may conduct more BPSRs in the future but currently has no specific plans to do so once these four are completed. In July 2015, DOD announced an effort to reduce layers of management and staff—known as delayering—in the management structure of OSD and associated defense agencies and DOD field activities. According to OSD officials and DOD’s fiscal year 2017 budget request, the department intends to use this review to help respond to certain provisions in the National Defense Authorization Act for Fiscal Year 2016, namely, the 25 percent reduction to the headquarters baseline amount by fiscal year 2020 and the $10 billion in cost savings from headquarters, administrative, and support activities by fiscal year 2019.
For this effort to reduce OSD organizational layers, the ODCMO has directed these organizations, with the support of an ODCMO team, to rationalize organizational layers and supervisory spans of control, as well as to identify redundant and obsolete workload and capture potential cost savings. ODCMO’s guidance to the organizations conducting the delayering reviews recommends, among other things, that the number of organizational layers in OSD should not be more than five, and that the capabilities and functional areas that have been historically assigned to an OSD organization will remain within the same organization, unless a functional assessment allows an opportunity for cross-organizational partnership and shared work activities. According to officials from ODCMO and the Office of the Under Secretary of Defense for Personnel and Readiness, the organizations have identified the civilian positions they intend to eliminate or restructure as part of the initiative. However, the results of the initiative are not yet publicly available. The Deputy Chief Management Officer stated that the department would issue a report, at an unspecified time, that will include the cost savings identified by this OSD Organizational Delayering initiative. According to the department’s budget request for fiscal year 2017, the objective of this OSD delayering review is to achieve $1.5 billion in cost savings from fiscal year 2018 through fiscal year 2021. Also in July 2015, DOD announced that it would seek to improve the outcomes of contracted services through standardized processes and governance structures. This initiative is intended, according to OSD officials, to help the department achieve the 25 percent headquarters reduction and the $10 billion in headquarters-related cost savings, which were directed by the National Defense Authorization Act for Fiscal Year 2016. 
In December 2015, the Deputy Chief Management Officer directed Fourth Estate organizations to convene internal review boards known as Services Requirements Review Boards to review their requirements for contracted services. These boards, which DOD has implemented for OSD, the defense agencies, and DOD field activities, are intended to assess every service contract within these organizations that is worth $10 million or more to determine whether a valid requirement for that contract remains or whether the funds could be better employed elsewhere within the same organization. The results of these reviews are then considered by DOD leadership using a senior review panel, comprising the Deputy Chief Management Officer, the Principal Deputy Under Secretary of Defense for Acquisition, Technology and Logistics, and the Principal Staff Assistant for the organization being reviewed. In March 2016, the Deputy Chief Management Officer reported that the objective of this effort would be to achieve savings of at least 5 percent in spending on such contracts, but did not specify the baseline from which the 5 percent would be measured. According to the department’s budget request for fiscal year 2017, DOD expects to realize savings through this initiative of $1.9 billion in direct appropriations by 2021 within OSD, the defense agencies, and DOD field activities, and additional savings in working capital-funded entities. The Deputy Chief Management Officer also stated that the department would issue a single report that will include the cost savings identified by the Services Requirements Review Board, as well as the OSD Organizational Delayering initiative, but did not specify a time frame for doing so. 
In January 2016, the Deputy Secretary of Defense noted that the Secretary of Defense, as part of his institutional reform agenda, directed the Deputy Chief Management Officer and the Director for Joint Force Development (J7) to lead a review of the organizations and responsibilities of DOD. The objective of this review is to make recommendations for updates or adjustments to organizational relationships and authorities, based on the department’s experiences operating under the Goldwater-Nichols Department of Defense Reorganization Act of 1986. The department intends to use this review, according to ODCMO officials, to address the provision in the National Defense Authorization Act for Fiscal Year 2016 that requires DOD to conduct a comprehensive review of headquarters and administrative and support activities for purposes of consolidating and streamlining headquarters functions. To conduct the review, ODCMO officials stated that the department has developed five working groups, led jointly by OSD and Joint Staff officials, with each working group addressing a different topic: optimization of command and control relationships to meet current and future security challenges; improved coordination and elimination of overlaps between OSD and the Joint Staff; the possible establishment of U.S. Cyber Command as a unified combatant command; potential improvements to the requirements and acquisition decision-making processes; and increased flexibility in law and policy governing joint duty qualifications. In addition, as part of this review of DOD’s organization and responsibilities, the military departments have established their own working groups to assess the structures of their respective secretariats and staffs to identify potential improvements. According to ODCMO officials, most of the working groups planned to complete their reviews and brief the Secretary of Defense by March 2016. The results of these reviews were not available at the time of our review.
However, in a speech in April 2016, the Secretary of Defense provided an overview of some preliminary recommendations that may result from this review, such as clarifying the role of the Chairman of the Joint Chiefs of Staff, changing joint personnel management, and adapting combatant commands to new functions. According to ODCMO officials, the department may issue a report with findings and recommendations on the overall review later in 2016. DOD has taken steps to improve some available data on headquarters organizations, but does not have reliable data for assessing headquarters functions and associated costs. Consistent with a past GAO recommendation, DOD published a new framework describing major headquarters organizations and stated that it has established a new definition of major DOD headquarters activities (although the department has not yet updated its headquarters instruction to reflect this definition). In addition, DOD is working to identify which organizations or portions of organizations meet a new definition of major DOD headquarters activities that was included in the National Defense Authorization Act for Fiscal Year 2016, and intends to revise its headquarters instruction upon completion of this effort. Finally, the department plans to update a key resource database, the Future Years Defense Program (FYDP), to improve visibility of headquarters resources. However, the one department-wide data set that identifies specific DOD headquarters functions contains unreliable data because the department has not aligned these data with its definition of major headquarters activities, nor does it have plans to collect information on the costs associated with functions within headquarters organizations. In 2015, the department began an effort to improve some available headquarters data, which addresses a fundamental problem that our prior reports have cited and DOD has acknowledged as a longstanding challenge.
Specifically, in August 2015, DOD published a framework describing the major headquarters activities and stated that it has established a new definition for its major DOD headquarters, although the department has not yet updated its guiding instruction on headquarters to reflect this new definition. The National Defense Authorization Act for Fiscal Year 2016 was enacted in November 2015 and included a revised definition of major DOD headquarters activities. Since that time, according to ODCMO officials, DOD has been working to determine which organizations or portions of organizations meet the new definition in the act in order to establish a more accurate headquarters baseline. In March 2016, the Deputy Chief Management Officer reported that the department plans to complete this effort by June 2016, thereby institutionalizing an authoritative headquarters baseline for purposes of reporting and tracking. At this time, the department also plans to update its guiding instruction on headquarters with the new definition. According to ODCMO officials, tracking would include revising the headquarters-related coding of program elements in its key resource database—the FYDP—to ensure they are appropriately designated as headquarters according to the new definition, and, where necessary, to break down these program element codes into headquarters and nonheadquarters components to better reflect allocation of headquarters resources. According to DOD officials, they have begun updating the resource coding in the FYDP and plan to complete this effort by late 2016. The re-baselining effort took on increased urgency when, in August 2015, the Deputy Secretary of Defense announced a new 25 percent cost-reduction target for major DOD headquarters activities (the military departments, OSD staff, the Joint Staff, defense agencies, DOD field activities, and combatant commands) in anticipation of a congressional mandate for additional reductions.
In addition, the National Defense Authorization Act for Fiscal Year 2016 allows documented savings achieved pursuant to this 25 percent headquarters reduction to be counted toward another of the act’s requirements, namely, that the Secretary of Defense implement a plan to ensure the department achieves not less than $10 billion in cost savings from the headquarters, administrative, and support activities of the department by fiscal year 2019. According to ODCMO officials, DOD plans to meet this $10 billion savings requirement by identifying existing efficiency initiatives whose savings will be applied toward the savings total. For example, ODCMO officials stated that the ODCMO will apply the savings that were identified through an information technology consolidation initiative, through its OSD Organizational Delayering initiative, as well as through its efforts to streamline contracted services by means of the Services Requirements Review Board. In March 2016, the Deputy Chief Management Officer provided an interim response to Congress stating that the fiscal year 2017 President’s Budget included $7.8 billion in new efficiencies over the next 5 years, but did not provide more specific information on when and from where in the budget these efficiencies would be realized or how the department would apply them to the $10 billion savings required by Congress. ODCMO’s interim response stated that the department will issue a report that provides a breakdown of the $10 billion cost savings by year, but did not provide a time frame for doing so. Part of the reason that DOD must undertake concurrent reviews and studies to achieve efficiencies is that the department does not have reliable data in two main areas. First, available DOD-wide data sources on headquarters functions are not aligned with the department-wide definition of headquarters.
We attempted to conduct an independent review to assess headquarters functions, and we considered several department-wide data sources but found limitations in each. Second, DOD’s data on headquarters functions do not include information on costs associated with functions within headquarters organizations, nor, according to OSD officials, does the department have plans to collect such information. According to federal standards for internal control, an agency must have relevant, reliable, and timely information to run and control its operations. This information is required to make operating decisions, monitor performance, and allocate resources, among other things. The lack of reliable data may hinder DOD’s ability to conduct a comprehensive review for purposes of consolidating and streamlining headquarters functions, among other things, as DOD was directed to do in the National Defense Authorization Act for Fiscal Year 2016. According to OSD officials, although DOD has several sources to organize and categorize its workforce, only one department-wide data set, known as the Inherently Governmental/Commercial Activities (IGCA) Inventory, identifies specific DOD headquarters functions in the form of authorized military and civilian positions. In the IGCA Inventory, each DOD position is assigned one of 306 functions based on the type of work performed, and 38 of these functions are headquarters-related. Examples of such headquarters-related functions include Operation Planning and Control, Military Education and Training, and Systems Acquisition. Navy guidance specifically notes that the IGCA Inventory may be used as a total force shaping tool and a starting point for future manpower reviews or initiatives.
For an example of the type of information that reliable data on headquarters functions could produce, see appendix V, which provides our analysis of the headquarters functions with the highest number of positions for each military service and Fourth Estate component in fiscal year 2014. However, we found that because the data in this data set were not aligned with headquarters definitions, they were not sufficiently reliable to assess these functions. IGCA Inventory guidance calls for components to assign headquarters-related DOD function codes to positions based on a headquarters definition that, while derived from DOD Instruction 5100.73, does not include all elements of the definition in that instruction. As a result, we found that the data on the number and functions of DOD’s military and civilian headquarters positions have varying levels of accuracy. For example, in fiscal year 2014, only 79 percent of authorized positions in OSD were considered headquarters within the IGCA Inventory, even though OSD is considered a headquarters organization in its entirety under both the definition provided in DOD Instruction 5100.73 and the new definition. Officials from all four military services informed us that, from fiscal year 2010 through fiscal year 2014, they discovered some positions that had been incorrectly coded as headquarters and undertook varying efforts to correct them. As a result, we have more confidence in data presented in the later years of the 2010 to 2014 period we reviewed, but data limitations in the earlier years covered by our review precluded us from assessing trends of these functions over time. While service officials told us they had taken steps to improve consistency of the headquarters-related DOD function codes in the IGCA Inventory, DOD does not have plans to update the data set to ensure that the headquarters-related DOD function codes in the IGCA Inventory are also consistent with the new headquarters definition. 
According to OSD officials, they have no plans to do so because the IGCA Inventory is not the department’s authoritative source for headquarters data. However, DOD and service officials have noted that, over time, officials have inconsistently interpreted what should be counted as headquarters according to the instruction, resulting in varying counts of headquarters positions depending on the source of the data. For example, in its Fiscal Year 2015 Defense Manpower Requirements Report, DOD included an estimate for fiscal year 2014 of 108,073 headquarters positions across the department, that is, OSD, the military services, the Joint Staff and combatant command headquarters, and the defense agencies and DOD field activities. In contrast, for these same organizations, DOD reported a total of 74,221 headquarters positions in its IGCA Inventory for fiscal year 2014 and a total of 61,046 headquarters positions in a May 2015 headquarters-related report, known as the Section 904 report. We believe that alignment of data sets containing headquarters-related codes, such as the IGCA Inventory, with the department-wide headquarters definition will provide senior DOD officials with the relevant, reliable, and timely information they need to make operating decisions, monitor performance, and allocate resources. Without alignment of data on department-wide military and civilian positions that have headquarters-related DOD function codes with the authoritative, revised definition of major DOD headquarters activities, the department will not have reliable data to enable senior officials to accurately assess headquarters functions, target specific functional areas for further analysis, or identify potential streamlining opportunities.
ODCMO officials stated that, once they have finalized the headquarters definition, they plan to complete an effort to improve the accuracy of the resource levels attached to headquarters organizations by ensuring that organizations are appropriately designated as headquarters in the FYDP and, as needed, breaking these organizations down into smaller headquarters and nonheadquarters program element codes. However, these actions will not provide reliable information on the costs associated with the various functions within those headquarters organizations. According to ODCMO officials, the department does not have plans to collect such information because it believes that improving the accuracy of the resources associated with headquarters organizations will be sufficient to support any future DOD assessments of headquarters. We believe, however, that detailed information that provides visibility into the costs associated with functions within headquarters organizations would better facilitate identification of opportunities for consolidation or elimination across organizational boundaries. Moreover, the defense committees have previously noted that, to achieve significant savings, the department must focus on consolidating and eliminating organizations and personnel that perform similar functions and missions. Army officials have also noted that being able to track the Army’s manpower by function could be useful to understand cost drivers in the budget, and could provide a starting point to help them determine the best application of structure and manpower. In addition, the National Defense Authorization Act for Fiscal Year 2016 directs the Secretary of Defense to conduct a comprehensive review of DOD headquarters, among other things, for purposes of consolidating and streamlining headquarters functions.
This functional review is to address the extent to which certain groupings of DOD headquarters organizations—such as OSD, the military departments, the defense agencies, and other organizations—have duplicative staff functions and services and could therefore be consolidated, eliminated, or otherwise streamlined. We have previously identified key steps to help analysts and policymakers conduct reviews to identify and evaluate instances of duplication, fragmentation, and overlap. One step in conducting such a review is to identify the potential positive and negative effects of any duplication, fragmentation, or overlap by assessing program implementation, outcomes and impact, and cost-effectiveness. In particular, we found that assessing and comparing the performance and cost-effectiveness of programs can help analysts determine which programs, or aspects of programs, to recommend for actions such as consolidation or elimination. In the absence of reliable data on the costs of functions within headquarters organizations, we obtained data from each of the military services’ manpower databases on all military and civilian headquarters positions for fiscal years 2010 through 2014. However, we could not reliably calculate the estimated costs to DOD of filling those positions due to inconsistencies and incomplete information in the pay grade data we collected from the Army and the Air Force. For example, the Army could not provide data to distinguish whether the 15 percent of its headquarters positions allocated to its reserve components in 2014 were full- or part-time—a factor needed to estimate costs. In the Air Force, we were unable to match civilian pay scales to 16 percent of Air Force civilian headquarters positions in 2014. For the Fourth Estate headquarters positions, DOD was unable to provide pay grade and location information from its Fourth Estate data system in time for our review due to other ongoing, headquarters-related initiatives.
However, according to our analysis of data-reliability questionnaires sent to Fourth Estate organizations, 25 of the 38 Fourth Estate organizations, or 66 percent, reported that their data in the Fourth Estate data system for the period from fiscal year 2010 through fiscal year 2014 were incomplete or inaccurate. Even once the definition of major DOD headquarters activities is published in DOD guidance, without reliable information on the costs associated with functions within headquarters organizations—collected through revisions to the IGCA Inventory or another method—the department will not be able to accurately estimate the resources associated with specific headquarters functions. Such estimates could in turn help senior officials identify streamlining opportunities, make decisions, monitor performance, and allocate resources. As it faces a potentially extended period of fiscal constraints, DOD has concluded that reducing the resources it devotes to headquarters is an area where cost savings can be achieved. The defense committees agree, but have expressed concern about DOD’s ability to identify significant cost savings given the department’s poor visibility into the total resources being devoted across organizations to similar functions and missions. Further, Congress has recently directed the Secretary of Defense to ensure that the department achieves savings in the total funding available for major DOD headquarters activities by fiscal year 2020 that are not less than 25 percent of the baseline amount, and to implement a plan to ensure the department achieves not less than $10 billion in cost savings from its headquarters, administrative, and support activities by fiscal year 2019.
Since 2014, DOD has undertaken concurrent initiatives of varying scope that include improving the efficiency of headquarters organizations and identifying related cost savings, but, because they are not yet completed, it is unclear to what extent these initiatives will help the department to achieve the $62 billion to $84 billion in cost savings opportunities that it has identified. DOD’s limited information on which positions perform which headquarters functions and their associated costs hinders its ability to identify potential cost savings associated with opportunities to consolidate and streamline these headquarters functions. While the department has taken steps to respond to a new headquarters definition and has begun to align its key resource database—the FYDP—to better reflect the new definition, these efforts are not yet completed and the department does not have plans to align these efforts with the existing data on department-wide military and civilian positions that have headquarters-related DOD function codes or to collect information on the costs associated with functions within headquarters organizations. Without such alignment and such information, the department will not be well-positioned to reliably conduct an assessment of its headquarters workforce by function to identify opportunities for streamlining and related cost savings. Conducting such functional analysis could allow DOD officials to raise questions about the number and types of positions with particular headquarters functions and to better understand cost drivers and identify efficiency-related opportunities within the department. 
To further DOD’s efforts to identify opportunities for more efficient use of headquarters-related resources, we recommend that the Secretary of Defense direct the Deputy Chief Management Officer, in coordination with the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, the Secretaries of the military departments, and the heads of the defense agencies and DOD field activities, to take the following two actions: align DOD’s data on department-wide military and civilian positions that have headquarters-related DOD function codes with the revised definition of major DOD headquarters activities in order to provide the department with reliable data to accurately assess headquarters functions and identify opportunities for streamlining or further analysis; and once this definition is published in DOD guidance, collect reliable information on the costs associated with functions within headquarters organizations—through revisions to the IGCA Inventory or another method—in order to provide the department with detailed information for use in estimating resources associated with specific headquarters functions, and in making decisions, monitoring performance, and allocating resources. We provided a draft of this report to DOD for review and comment. In written comments on a draft of this report, DOD concurred with our two recommendations. DOD’s comments are summarized below and reprinted in their entirety in appendix VI. DOD concurred with our recommendations to (1) align DOD’s data on department-wide military and civilian positions that have headquarters-related DOD function codes with the revised definition of major DOD headquarters activities, and (2) once this definition is published in DOD guidance, collect reliable information on the costs associated with functions within headquarters organizations—through revisions to the IGCA Inventory or another method.
In its response, DOD stated that it is currently updating civilian and military manpower and total obligation authority baselines for major DOD headquarters activities to align with the new headquarters-related definition and framework. The department stated that this effort includes updating data architecture for coding major DOD headquarters activities, by program element code, in the Future Years Defense Program, and noted that this data architecture will serve as the authoritative methodology to account for headquarters manpower and resources in the future. Further, DOD stated that, once those efforts are complete and the new framework is codified in an update to DOD Instruction 5100.73, the department will determine how best to align the function code taxonomy, which is the source of data for the IGCA Inventory, with the revised framework and definitions. We agree that determining how to align the data set from the IGCA Inventory with the revised framework and definitions is an important first step and, if implemented, would address the intent of our first recommendation. Finally, DOD stated in its comments that the updated data architecture will enable the department to collect consistent, comprehensive, and authoritative information on the costs associated with major DOD headquarters activities. We also agree that updating the data architecture for coding major DOD headquarters activities will help improve the department’s visibility of headquarters-related resources. As the department works to complete this effort, we believe that it should develop a means of collecting reliable information on the costs associated with functions within headquarters organizations. Doing so would provide the department with detailed information for use in estimating resources associated with specific headquarters functions, and, if implemented, would address the intent of our second recommendation. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Deputy Chief Management Officer, the Chairman of the Joint Chiefs of Staff, the Secretaries of the military departments, and the heads of the defense agencies and DOD field activities. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. We have issued several reports since 2012 on defense headquarters and on the Department of Defense’s (DOD) challenges in accounting for the resources devoted to headquarters. In March 2012, we found that while DOD has taken some steps to examine its headquarters resources for efficiencies, additional opportunities for savings may exist by further consolidating organizations and centralizing functions. We also found that DOD’s data on its headquarters personnel lacked the completeness and reliability necessary for use in making efficiency assessments and decisions. Recommendations: We recommended that the Secretary of Defense direct the Secretaries of the military departments and the heads of the DOD components to continue to examine opportunities to consolidate commands and to centralize administrative and command support services, functions, or programs. 
Additionally, we recommended that the Secretary of Defense revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all headquarters organizations; specify how contractors performing headquarters functions will be identified and included in headquarters reporting; clarify how components are to compile the information needed for headquarters-reporting requirements; and establish time frames for implementing actions to improve tracking and reporting of headquarters resources. DOD concurred with the first recommendation and partially concurred with the second recommendation in this report. Status: DOD officials have stated that, since 2012, several efforts have been made to consolidate or eliminate commands and to centralize administrative and command support services, functions, or programs. For example, Office of the Secretary of Defense (OSD) officials said that DOD has begun efforts to assess which headquarters organizations are not currently included in its guiding instruction on headquarters, and will update the instruction. However, as of June 2016, DOD has not completed its update of the instruction to include all major headquarters activity organizations. OSD officials stated that they would begin updating this instruction upon completion of the effort to assess headquarters organizations. In addition, DOD has not specified how contractors will be identified and included in headquarters reporting, and has not identified a time frame for action. In May 2013, we found that authorized military and civilian positions at the geographic combatant commands—excluding U.S. Central Command—had increased by about 50 percent from fiscal year 2001 through fiscal year 2012, primarily due to the addition of new organizations, such as the establishment of U.S. Northern Command and U.S. Africa Command, and increased mission requirements for the theater special operations commands.
We also found that DOD’s process for sizing its combatant commands had several weaknesses, including the absence of a comprehensive, periodic review of the existing size and structure of these commands and inconsistent use of personnel-management systems to identify and track assigned personnel. Recommendations: We recommended that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to revise the guiding instruction on managing joint personnel requirements—Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program—to require a comprehensive and periodic evaluation of whether the size and structure of the combatant commands meet assigned missions. DOD did not concur with this recommendation, but we continue to believe that institutionalizing a periodic evaluation of all authorized positions would help to systematically align manpower with missions and add rigor to the requirements process. The department concurred with the remaining three recommendations, namely, that the Secretary of Defense: (1) direct the Chairman of the Joint Chiefs of Staff to revise Chairman of the Joint Chiefs of Staff Instruction 1001.01A to require the combatant commands to identify, manage, and track all personnel and to identify specific guidelines and time frames for the combatant commands to consistently input and review personnel data in the system; (2) direct the Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders and Secretaries of the military departments, to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands; and (3) direct the Under Secretary of Defense (Comptroller) to revise volume 2, chapter 1, of DOD’s Financial Management Regulation 7000.14R to require the military departments, in their annual budget documents for operation and maintenance, to identify the authorized military positions and
civilian and contractor full-time equivalents at each combatant command and provide detailed information on funding required by each command for mission and headquarters support, such as civilian pay, contract services, travel, and supplies. Status: With regard to the recommendation to revise the instruction to require the commands to improve visibility over all combatant command personnel, DOD has established a new manpower tracking system, the Fourth Estate Manpower Tracking System, that is to track all personnel data, including temporary personnel, and identify specific guidelines and timelines to input/review personnel data. With regard to the recommendation to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands, as of August 2015, the process outlined by DOD to gather information on authorized and assigned personnel at the service component commands is the same as the one identified during our prior work. With regard to the recommendation to revise DOD’s Financial Management Regulation, in December 2014 DOD indicated that the Office of the Under Secretary of Defense (Comptroller) had reinstituted an existing budgetary document, the President’s Budget 58, Combatant Command Direct Funding, and directed the military services to use this budget exhibit in its guidance on submission of the fiscal years 2016 through 2020 program and budget. The President’s Budget 58 provides the department’s justification and visibility for changes in the level of resources required for each combatant command. While the President’s Budget 58 does not provide detailed information on the number of authorized military or civilian positions and contractor full-time equivalents at each combatant command, it does identify the funding required by each combatant command for mission and headquarters support, which, in general, satisfies the intent of our recommendation.
In June 2014, we found that DOD’s functional combatant commands have shown substantial increases in authorized positions and costs to support headquarters operations since fiscal year 2004, primarily to support recent and emerging missions, including military operations to combat terrorism and the emergence of cyberspace as a warfighting domain. Further, we found that DOD did not have a reliable way to determine the resources devoted to management headquarters as a starting point for DOD’s planned 20 percent reduction to headquarters budgets, and thus we concluded that actual savings would be difficult to track. We recommended that DOD reevaluate the decision to focus reductions on management headquarters to ensure meaningful savings and set a clearly defined and consistently applied baseline starting point for the reductions. Further, we recommended that DOD track the reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. Recommendations: We recommended that the Secretary of Defense reevaluate the decision to focus reductions on management headquarters to ensure the department’s efforts ultimately result in meaningful savings. DOD partially concurred, questioning, in part, the recommendation’s scope. We agreed that the recommendation has implications beyond the functional combatant commands, which was the scope of our review, but the issue we identified is not limited to these commands. We also recommended that the Secretary of Defense (1) set a clearly defined and consistently applied starting point as a baseline for reductions; and (2) track reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. DOD concurred with these two recommendations. Status: To address the two recommendations with which it concurred, DOD said that it planned to use the Future Years Defense Program data to set the baseline going forward. 
DOD stated that it was enhancing data elements within a DOD resource database to better identify management headquarters resources to facilitate tracking and reporting across the department. A December 2014 Resource Management Decision directed DOD components to identify and correct inconsistencies in major headquarters activities in authoritative DOD systems and reflect those changes in the fiscal year 2017 program objective memorandums or submit them into the manpower management system, but this effort has not yet been completed. In January 2015, we found that, over the previous decade, authorized military and civilian positions have increased within the DOD headquarters organizations we reviewed—OSD, the Joint Staff, and the Army, Navy, Marine Corps, and Air Force secretariats and staffs—but the size of these organizations has recently leveled off or begun to decline. In addition, we found that the DOD headquarters organizations we reviewed do not determine their personnel requirements as part of a systematic requirements-determination process, nor do they have procedures in place to ensure that they periodically reassess these requirements as outlined in DOD and other guidance. Current personnel levels for these headquarters organizations are traceable to statutory limits enacted in the 1980s and 1990s to force efficiencies and reduce duplication. However, we found that these limits have been waived since fiscal year 2002 and have little practical utility because of statutory exceptions for certain categories of personnel, and because the limits exclude personnel in supporting organizations that perform headquarters-related functions.
Recommendations: We recommended that the Secretary of Defense direct the following three actions: (1) conduct a systematic determination of personnel requirements for OSD, the Joint Staff, and the military services’ secretariats and staff, which should include analysis of mission, functions, and tasks, and the minimum personnel needed to accomplish those missions, functions, and tasks; (2) submit these personnel requirements, including information on the number of personnel within OSD and the military services’ secretariats and staffs that count against the statutory limits, along with any applicable adjustments to the statutory limits, to Congress, along with any recommendations needed to modify the existing statutory limits; and (3) establish and implement procedures to conduct periodic reassessments of personnel requirements within OSD and the military services’ secretariats and staffs. DOD partially concurred with all of these recommendations. In addition, we raised a matter for congressional consideration, namely, that Congress should consider using the results of DOD’s review of headquarters personnel requirements to reexamine the statutory limits. Such an examination could consider whether supporting organizations that perform headquarters functions should be included in statutory limits and whether the statutes on personnel limitations within the military services’ secretariats and staffs should be amended to include a prohibition on reassigning headquarters-related functions elsewhere. Status: With regard to the recommendation that DOD conduct a systematic determination of personnel requirements for OSD, the Joint Staff, and the military services’ secretariats and staff, the department stated that it will continue to use the processes and prioritization that are part of the Planning, Programming, Budgeting, and Execution process, and will also investigate other methods for aligning personnel to missions and priorities. 
However, DOD did not specify whether any of these actions would include a workforce analysis. With regard to the recommendation related to conducting periodic reassessments of personnel requirements within OSD and the military service secretariats and staffs, DOD said that it supports the intent of the recommendation but that such periodic reassessments require additional resources and personnel, which would drive an increase in the number of personnel performing major DOD headquarters activities. Specifically, DOD stated it intends to examine the establishment of requirements determination processes across the department, to include the contractor workforce, but this will require a phased approach across a longer time frame. In December 2014, the Secretary of Defense directed the Deputy Chief Management Officer to develop and implement a manpower requirements validation process for OSD, the defense agencies, and DOD field activities for military and civilian manpower, but, as of June 2016, this effort has not yet been completed. With regard to the recommendation related to the submission of the personnel requirements to Congress, along with any applicable adjustments and recommended modifications, DOD stated that it has ongoing efforts to refine and improve its reporting capabilities associated with these requirements, noting that the department has to update DOD Instruction 5100.73, Major DOD Headquarters Activities, before it can determine personnel requirements that count against the statutory limits. We previously recommended that the department update this instruction, and, according to DOD officials, they intend to begin updating the instruction in June 2016. In addition, we noted that DOD did not indicate whether the department would submit personnel requirements that count against the statutory limits in the Defense Manpower Requirements Report, as we recommended, once the instruction is finalized.
We continue to believe that submitting these personnel requirements to Congress in this DOD report would provide Congress with key information to determine whether the existing statutory limits on military and civilian personnel are effective in limiting headquarters personnel growth. With regard to the matter for congressional consideration, the Senate Armed Services Committee markup of the National Defense Authorization Act for Fiscal Year 2017 includes a provision that would allow the OSD and the military departments to increase their number of military and civilian personnel by 15 percent in times of national emergency.

In the Inherently Governmental / Commercial Activities (IGCA) Inventory for fiscal years 2010 through 2014, there are 38 functions, each designated by a specific DOD function code, that have a headquarters designation; of these, 35 are labeled “Management Headquarters,” while 3 are labeled “Combatant Headquarters.” For the purposes of this report, we use the term “headquarters,” rather than “management headquarters” or “combatant headquarters,” when referring to the titles of these 38 functions in the body of the report. Table 1 lists the 38 headquarters functions with accompanying descriptions.

House Report 113-446 and Senate Report 113-176 included provisions that we, among other things, identify the Department of Defense’s (DOD) headquarters reduction efforts to date and any trends in personnel and other resources being devoted to selected functional areas within and across related organizations. This report (1) describes the status of DOD’s initiatives since 2014 to improve the efficiency of headquarters organizations and identify related cost savings; and (2) assesses the extent to which DOD has reliable data to assess headquarters functions and their associated costs.
To describe the status of DOD’s initiatives to improve the efficiency of headquarters organizations and identify related cost savings, we identified and reviewed DOD headquarters-related efficiency efforts begun since 2014. We obtained documentary and testimonial evidence from senior officials in the Office of the Deputy Chief Management Officer to determine the scope and status of these headquarters-related efficiency efforts and what actions, if any, DOD has taken as a result of the efforts. To assess the extent to which DOD has reliable data to assess headquarters functions and their associated costs, we took two main steps. First, we identified and reviewed DOD-wide sources of information that would provide data on the department’s workforce in terms of whether the workforce is performing headquarters work and the specific headquarters functions that workforce is performing. We reviewed data from several department-wide sources, specifically the Future Years Defense Program, the Defense Manpower Data Center, the Inherently Governmental / Commercial Activities (IGCA) Inventory, and the Inventory of Contracted Services, and discussed these sources with officials from the Office of the Deputy Chief Management Officer, the Office of the Under Secretary of Defense for Personnel and Readiness, and the Office of Cost Assessment and Program Evaluation. For this report, we analyzed data and information related to the IGCA Inventory because it was the only DOD-wide data set identified that allowed us to determine the military and civilian workforce—in the form of authorized positions—by both headquarters and function. However, we found that the data in the IGCA Inventory were submitted by the various DOD organizations at different points in a given fiscal year.
To ensure that the data would be as close to the end of each fiscal year as possible, we obtained data from each of the military services’ manpower databases used to populate DOD’s IGCA Inventory for fiscal years 2010 through 2014, which was the most recent 5-year period available during our review. For each service database, we identified the subset of military and civilian positions considered headquarters according to IGCA guidance. We then analyzed these headquarters positions for each organization and for each fiscal year by number of military and civilian positions, function, grade, and location. We discussed the data, and the reasons for any patterns or changes we observed in them, with military service representatives. DOD was unable to provide similar data for organizations outside the military departments, known as the Fourth Estate, in time for our review, so we collected data on the Fourth Estate’s military and civilian positions directly from the IGCA Inventory for fiscal years 2012 to 2014. We assessed the data we received against federal standards for internal control, which call for an agency to have relevant, reliable, and timely information in order to run and control its operations. Second, we attempted to calculate the approximate costs of the headquarters positions and associated functions. Because the IGCA Inventory does not contain estimated costs for positions, we used the service databases’ pay grade and location information assigned to the military and civilian positions in an attempt to determine the estimated cost to DOD of filling headquarters positions. Specifically, we applied DOD’s military composite standard pay rates and civilian fringe benefits rates to the pay grades we had collected for each position identified in the service databases.
We were unable to do a similar calculation for the Fourth Estate because Fourth Estate data on positions came from the IGCA Inventory, which does not contain pay grade and location information needed for a cost calculation. We assessed both our own efforts and DOD’s efforts to calculate headquarters-related costs against federal standards for internal control on having relevant, reliable, and timely information, and noted the importance of a key step that we have previously identified for conducting fragmentation, overlap, and duplication reviews. Specifically, one of the steps in conducting such a review is to identify the positive and negative effects of any fragmentation, overlap, and duplication by assessing program implementation, outcomes and impact, and cost-effectiveness. We assessed the reliability of the IGCA-related data sets by reviewing responses to data questionnaires sent to knowledgeable service and Fourth Estate officials, discussing the data with these officials, and conducting our own cross-checks of the data to assess their reasonableness. We found the data to be insufficient for identifying trends in the number and type of headquarters positions and for estimating costs associated with headquarters positions. However, we found the data to be sufficiently reliable for presenting 1 year’s worth of data for purposes of illustrating the types of analyses of department-wide headquarters functions that could be conducted if DOD improved the reliability of these data. Finally, we were unable to obtain data on contracted services personnel, either their positions or associated costs, because DOD does not identify contracted services personnel by the type of headquarters function they perform. We interviewed officials or, where appropriate, obtained documentation from the organizations listed below.
Office of the Secretary of Defense
  Office of the Deputy Chief Management Officer
  Office of Cost Assessment and Program Evaluation
  Office of the Under Secretary of Defense (Comptroller)
  Office of the Under Secretary of Defense for Personnel and Readiness
  Office of the Under Secretary of Defense for Acquisition,
J1, Manpower and Personnel Directorate
Department of the Army
  Office of the Assistant Secretary of the Army for Manpower and
  G1, Office of the Deputy Chief of Staff for Personnel
  G3/5/7, Operations and Plans
Department of the Navy
  Office of the Assistant Secretary of the Navy for Manpower
  N1, Office of the Deputy Chief of Naval Operations, Manpower
  Deputy Commandant for Combat Development and Integration,
Department of the Air Force
  A1, Office of the Deputy Chief of Staff for Personnel
U.S. Special Operations Command
Defense Agencies / DOD Field Activities
  Defense Acquisition University
  Defense Advanced Research Projects Agency
  Defense Contract Audit Agency
  Defense Contract Management Agency
  Defense Finance and Accounting Service
  Defense Human Resource Activity
  Defense Information Systems Agency
  Defense Legal Services Agency
  Defense POW/MIA Accounting Agency
  Defense Security Cooperation Agency
  Defense Technical Information Center
  Defense Technology Security Administration
  Defense Threat Reduction Agency
  Department of Defense Education Activity
  Department of Defense Inspector General
  Office of Economic Adjustment
  Pentagon Force Protection Agency
  Test Resource Management Center

We conducted this performance audit from January 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Department of Defense (DOD) officials identified the following efforts initiated between 2010 and 2014 to realize cost savings related to headquarters. In a May 2010 speech, the Secretary of Defense expressed concerns about the dramatic growth in DOD’s headquarters and support organizations that had occurred since 2001, including increases in spending, staff, and numbers of senior executives and the proliferation of management layers. The Secretary of Defense then directed DOD to undertake a department-wide initiative to assess how the department is staffed, organized, and operated, with the goal of reducing excess overhead costs and reinvesting these savings toward sustaining DOD’s current force structure and modernizing its weapons portfolio. In March 2012, DOD identified additional efficiency initiatives, referred to as More Disciplined Use of Resources initiatives, in its fiscal year 2013 budget request. DOD identified additional More Disciplined Use of Resources initiatives for the fiscal year 2014 budget in April 2013. According to information accompanying its fiscal years 2013 and 2014 budget requests, DOD identified these initiatives by conducting a review of bureaucratic structures, business practices, modernization programs, civilian and military personnel levels, and associated overhead costs. In March 2013, the Secretary of Defense directed the completion of a Strategic Choices and Management Review to examine the potential effect of additional, anticipated budget reductions on the department and to develop options for performing DOD missions. According to the Secretary, a tenet of the review was the need to maximize savings from reducing DOD’s overhead, administrative costs, and other institutional expenses.
In July 2013, the Secretary of Defense set a target for reducing DOD components’ total management headquarters budgets by 20 percent for fiscal years 2014 through 2019, including costs for civilian personnel and contracted services, while striving for a goal of 20 percent reductions to authorized military and civilian personnel. This effort was designed to streamline DOD’s management of its headquarters through efficiencies and elimination of spending on lower-priority activities. In August 2013, the Secretary and Deputy Secretary of Defense directed an organizational review of the Office of the Secretary of Defense, consistent with the Strategic Choices and Management Review, that was intended to assess and recommend specific adjustments to OSD’s organizational structure. The review resulted in several organizational alignments, such as realigning another office to the Office of the Deputy Chief Management Officer structure, and contributed to the 20 percent headquarters reductions that were captured in DOD’s fiscal year 2015 budget request.

This appendix provides our analysis showing the five headquarters functions, in each military service and Fourth Estate component, with the highest number of headquarters positions for fiscal year 2014. Based on our review of the data and discussions with service officials, fiscal year 2014 data are the most reliable data available during the period of our review. The military services are the Army, the Navy, the Marine Corps, and the Air Force. To help meet their respective missions, each military service has both operational and nonoperational headquarters organizations. See table 2 for the percentage of the military services’ headquarters positions by headquarters function for fiscal year 2014. The Fourth Estate is made up of the Department of Defense (DOD) organizations that are separate from the military services.
Our review focused on four organizational components that make up the Fourth Estate: (1) the Office of the Secretary of Defense; (2) the Joint Staff, including the North Atlantic Treaty Organization; (3) the combatant commands; and (4) defense agencies and DOD field activities. See table 3 for the percentage of Fourth Estate headquarters positions by headquarters function for fiscal year 2014.

In addition to the contact named above, Margaret A. Best (Assistant Director), Tracy Barnes, Timothy Carr, Gabrielle A. Carrington, Cynthia Grant, Mae Jones, Bethann E. Ritter Snyder, Benjamin Sclafani, Michael Silver, Amie Lesser, and Melissa Wohlgemuth made key contributions to this report.

Facing budget pressures, DOD is seeking to reduce its headquarters activities by identifying streamlining opportunities. DOD has multiple layers of headquarters activities with complex, overlapping relationships, such as OSD, the Joint Staff, the military service secretariats and staffs, and defense agencies. Committee reports accompanying bills for the National Defense Authorization Act for Fiscal Year 2015 included provisions for GAO to identify DOD's headquarters reduction efforts to date and patterns in functional areas related to DOD's headquarters activities. This report (1) describes the status of DOD's initiatives since 2014 to improve the efficiency of headquarters organizations and identify related cost savings, and (2) assesses the extent to which DOD has reliable data to assess headquarters functions and their associated costs. GAO assessed DOD-wide headquarters-related efficiency efforts, and a DOD-wide data set that identifies positions with headquarters functions.
Since 2014, and in part to respond to congressional direction, the Department of Defense (DOD) has undertaken initiatives intended to improve the efficiency of headquarters organizations and identify related cost savings, but it is unclear to what extent these initiatives will help the department achieve the potential savings it has identified. In a 2015 review of its six business processes, DOD identified $62 billion to $84 billion in potential cumulative savings opportunities for fiscal years 2016 through 2020. According to DOD officials, the department is currently pursuing four headquarters-related initiatives, but these were not completed, or results were not available, in time for GAO to assess their effect. The table below provides a description of these initiatives. (Source: GAO analysis of DOD information. GAO-16-286.)

DOD has taken steps to improve some available data on headquarters organizations, but does not have reliable data for assessing headquarters functions and associated costs. Consistent with a GAO recommendation, DOD has established a framework for major DOD headquarters activities, is working to identify which organizations or portions of organizations meet a new definition of major DOD headquarters activities, and plans to update a key database to improve visibility of headquarters resources. However, the one department-wide data set that identifies military and civilian positions by specific DOD headquarters functions contains unreliable data because DOD has not aligned these data with its revised headquarters definition. Further, DOD does not have plans to collect information on costs associated with functions within headquarters organizations. This may hinder DOD's ability to conduct an in-depth review for purposes of consolidating and streamlining headquarters functions.
Without alignment of headquarters function data with the revised headquarters definition and collection of reliable information on costs associated with headquarters functions, DOD may be unable to accurately assess specific functional areas or identify potential streamlining and cost savings opportunities. To further DOD's efforts to identify headquarters-related efficiency opportunities, GAO recommends that DOD align its data on positions that have headquarters-related DOD function codes with the revised definition of major DOD headquarters activities and collect information on costs associated with functions within headquarters organizations. DOD concurred with the recommendations. |
In 1978, Congress passed the Inspector General Act, creating Inspector General offices in 12 federal agencies. This followed growing reports of serious and widespread breakdowns in agencies’ internal controls. These new OIGs were established as independent and objective offices within their respective agencies to promote economy, efficiency, and effectiveness in government programs and operations and to prevent and detect fraud and abuse. In addition, they were created to keep agency heads and Congress fully informed about problems and deficiencies in program operations, as well as needed corrective action. Over the years, the act has been amended to increase the number of inspectors general. The President, with the advice and consent of the Senate, appoints inspectors general at cabinet-level departments and other large agencies, including HHS. The inspectors general at smaller, independent agencies and other federal entities are appointed by the heads of their organizations and have essentially the same authorities and duties as those appointed by the President. Presently, there are 28 inspectors general appointed by the President and 29 appointed by their agency heads. Inspectors general hold a unique place in the executive branch of government. They report to and are subject to the general supervision of their agency heads, but carry out their duties independently. In addition, they have reporting obligations to both the heads of their agencies and Congress. Those that are presidentially appointed are among the few such appointees that are to be selected “without regard to political affiliation and solely on the basis of integrity and demonstrated ability.” To help maintain their independence and fulfill their mission—which often involves being publicly critical of their own departments—inspectors general must familiarize their departmental colleagues with their special role. 
Because they are charged with independently protecting the integrity of federal programs, inspectors general must be impartial in fact and appearance. Government Auditing Standards, effective in January 2003, call for auditors to “be free, both in fact and appearance from personal, external, and organizational impairments to independence.” These standards also require that auditors “avoid situations that could lead reasonable third parties with knowledge of the relevant facts and circumstances to conclude that the auditor is not capable of exercising objective and impartial judgment . . ..” Given that their independence and impartiality are so critical, inspectors general need to be sensitive to how their actions might be perceived and interpreted by their staffs, the administration, Congress, and the public. About 300 of the approximately 1,600 HHS OIG employees are employed in its Washington, D.C., headquarters. The remainder work in its 8 regional offices and 85 field offices in all 50 states. The OIG consists of five components, or major units, each headed by a deputy inspector general. The office is led by 13 Senior Executive Service level employees, who all work in headquarters, and about 60 GS-15 level employees. About two-thirds of the GS-15 employees are spread across the various components in headquarters with the remaining third located in the OIG’s regional offices. Consistent with the act, the OIG maintains the Office of Audit Services (OAS) and the Office of Investigations (OI). They each represent about 40 percent of the OIG’s budget. OAS is responsible for auditing a variety of HHS health care programs and generally spends about 80 percent of its resources on projects related to the Medicare and Medicaid programs. Its findings can result in program improvements and the return of overpayments to the federal government. In addition, OAS provides audit support to OI. 
OI investigators typically pursue allegations of criminal conduct that they receive from contractors that process Medicare claims, state Medicaid Fraud Control Units, officials involved in administering HHS’s many grant programs, and others. When investigators find evidence of potential wrongdoing, they refer the matter to DOJ for possible prosecution or the OIG may opt to impose other sanctions. The OIG has established three additional components to enable it to fulfill its mission. The Office of Evaluation and Inspections (OEI) conducts short-term management evaluations of HHS programs that generally involve significant expenditures and services to beneficiaries or in which important management issues have surfaced. Its reports are expected to identify opportunities for improvement in departmental programs. While OAS may audit the same federal programs examined by OEI, the scope of OEI studies is typically broader and would more likely involve the use of surveys, interviews, and other qualitative research methods. A relatively small component, OEI represents about 10 percent of the office’s resources. The Office of Counsel to the Inspector General (OCIG) provides legal services to the OIG. Among other things, it renders advisory opinions to health care providers and develops model industry guidance for compliance with relevant laws and regulations. It also has several sanctions at its disposal to penalize those who abuse HHS programs. Finally, the Office of Management and Policy (OMP) is responsible for the administration of the office, which includes overseeing the budget, supporting the office’s information technology needs, and working with the media. It is also responsible for the OIG’s human resource management activities, but obtains significant personnel support from the department’s centralized Program Support Center. OCIG and OMP each represent about 5 percent of the OIG’s budget. 
The OIG plays an instrumental role in identifying and investigating individuals and entities that may have abused HHS programs. It may make referrals to DOJ for possible prosecution under applicable criminal statutes. In addition, health care providers who violate federal laws and regulations may face a variety of civil sanctions. The OIG may make use of the False Claims Act—the federal government’s primary civil remedy for false or fraudulent claims—and refer such matters to DOJ. The act imposes substantial penalties on those who knowingly submit false claims to Medicare and other federal programs. If a provider has filed a false claim that DOJ opts not to pursue through the use of the False Claims Act, the OIG may impose other sanctions, such as civil monetary penalties (CMP), against that health care provider. CMPs are also imposed for other types of improper conduct, such as violations of statutory prohibitions on “kickbacks” in connection with patient referrals. The OIG also can assess CMPs against hospitals for “patient dumping,” that is, failing to provide appropriate treatment to patients presenting a medical emergency. The amount of the CMP imposed is related to each provider’s specific violation. The OIG may also exclude health care providers from participating in Medicare, Medicaid, and other federal health programs if they have, for example, been convicted of a criminal offense related to Medicare—including health care fraud or patient abuse and neglect—or had their license suspended or revoked. OCIG may also opt to negotiate corporate integrity agreements with health care providers. Although the OIG focuses the majority of its attention on health care programs, its activities extend to other areas as well. For example, the OIG has made the detection, investigation, and prosecution of absent parents who fail to pay court-ordered child support a priority. 
The OIG works with other federal, state, and local agencies to expedite the collection of these payments. Parents who repeatedly fail to honor such obligations are subject to criminal prosecution. The OIG’s recent activities with respect to parents who have defaulted on their child support payments resulted in 152 convictions and more than $7 million in court-ordered criminal restitution in fiscal year 2002. We examined the independence that was reflected in the Inspector General’s decision-making during her tenure. In addition, we reviewed personnel changes that she initiated and evaluated her judgment in several instances. We interviewed appropriate staff, including the Inspector General herself, and examined relevant documentation. Current and former OIG headquarters employees frequently expressed concerns about the Inspector General’s independence. These concerns centered on several incidents—some of which were widely reported by the media. Employees also identified other audits and investigations that they felt may have suffered from inappropriate management intervention. We concluded that the following four incidents involved actions on the part of the Inspector General that at least contributed to the perception of a lack of independence. In the spring of 2002, the OIG was scheduled to begin an audit of the Florida Retirement System. The objective was to evaluate whether the state appropriately charged the federal government for the pension expenses of state agency employees who help administer federal programs. The auditors specifically wanted to determine whether funds designated as federal contributions to the retirement system were used to provide for pension expenses, and whether the federal contribution rates were reasonable. The OIG’s first meeting to discuss this audit with Florida pension officials was scheduled for April 16, 2002. 
The day before, the Chief of Staff to the Florida governor placed an urgent call to the Office of the HHS Secretary, requesting that the audit be delayed to accommodate the new pension department director who was going to assume his position in a few weeks. This call was ultimately referred to the Inspector General, who instructed her Deputy for Audit Services to delay the audit for a few days. The Inspector General subsequently ordered a second delay until July. Due to subsequent scheduling problems affecting both OIG and Florida pension staff, the audit team did not begin its work until September 2002. Allegations made by OIG employees and the media suggested that the federal government’s contributions to the Florida retirement system could be excessive and that a report on these contributions might affect the outcome of the Florida governor’s race that November. When asked about the incident, the Inspector General stated that she agreed to temporarily postpone the audit until she could determine the appropriate response to the request and did not have any involvement in subsequent delays. She also insisted that audits are frequently delayed, that her decision to delay the audit was not politically motivated, and that, even if the audit had begun in April, it would not have been completed before the election. She told us that, in hindsight, she could have handled the situation differently by referring the request to the Deputy for Audit Services, but she did not believe she acted inappropriately in these circumstances. We believe that the Inspector General did not appropriately investigate the implications of her decision before agreeing to delay an audit that ultimately resulted in a report containing significant monetary findings. First, Florida pension department officials could have known that a substantial overpayment existed, and that a delay in the OIG’s audit could have benefited the state by changing the time frames used to calculate the amount it owed. 
In fact, the draft report on the Florida pension audit contains a finding that there were excessive federal contributions totaling about $517 million, which the state will be required to return or offset against the amount of future federal contributions to the retirement fund. Second, given that the team was scheduled to begin its work in April 2002 and had estimated that the audit report would be drafted in 6 months, it is conceivable that the report could have been available by election day, if the audit had begun when originally planned. Finally, contrary to the Inspector General’s recollection, we found that she sent an e-mail message to her Deputy for Audit Services in April 2002 instructing him to postpone the audit until July 2002. The Inspector General acknowledged that, although short delays in commencing audits are common, it was admittedly unusual for a request for a delay to be directed to, and resolved at, her level. In February 2000, the OIG alleged that York Hospital—located in York, Pennsylvania—had submitted improper claims for services provided to Medicare beneficiaries. The OIG had notified the hospital that it planned to impose a CMP and was engaged in negotiations with the hospital when the Inspector General assumed office. The OIG attorneys had estimated that York Hospital’s potential liability was $726,000. Soon after taking office, the Inspector General received a letter from three members of Congress encouraging her to settle the case quickly. According to the former Chief Counsel, the Inspector General told him, “I hate this case; get rid of it.” Feeling as though they had to move fast, OIG attorneys lost the benefit of time—which they explained is a key factor in resolving a case in the government’s favor—and quickly settled the matter. The former Chief Counsel also noted that the settlement amount of $270,000 was far less than the attorneys believed the government could have received had negotiations proceeded as they had planned. 
The Inspector General indicated that she in no way directed a settlement or personally involved herself in the York Hospital negotiations. She also stated that if her OCIG staff perceived that they were under pressure to settle the case quickly, they misinterpreted her instructions. She told us that she simply wanted to settle this case in a timely manner. Although the Inspector General said she did not intend to pressure her staff, the former Chief Counsel told us that he and those responsible for negotiating with hospital officials clearly perceived a sense of urgency. He also told us that her staff perceived that timing, rather than maximizing the settlement amount, was her main concern. We believe that her staff acted accordingly, possibly against the government’s financial interest. Two medical societies representing providers of lithotripsy services threatened to sue the Centers for Medicare & Medicaid Services (CMS) over a regulation resulting in the denial of claims submitted for payment to the Medicare program. The CMS regulation implemented statutory restrictions on physician referrals to providers in which the physicians have an ownership interest and included lithotripsy services within the scope of these restrictions. The medical societies maintained that Congress did not intend to include lithotripsy services within the scope of the statute and intended to litigate this matter, if a settlement could not be reached quickly. A partner in the law firm representing the two medical societies, who was also a friend of the Inspector General, contacted her for assistance in expediting this case. The Inspector General directed her former Chief Counsel to contact the law firm and begin negotiating the matter, which was under the jurisdiction of CMS and not the OIG. The former OIG Chief Counsel was hesitant to intervene until the appropriate attorney representing CMS in this matter could be consulted. 
Because CMS’s attorney was unavailable for about a week, the former Chief Counsel took no action during this time. According to the former Chief Counsel, the Inspector General admonished him severely when she discovered that he had not followed her instructions to immediately contact the law firm. The Inspector General asserted that her office had a legitimate role in this matter. Although the issue was being disputed between the medical societies representing the lithotripsy providers and CMS, the Inspector General believed that her OCIG staff, which advised Congress on physician referral matters, was in a unique position to resolve the issue. She pointed out that she did not personally involve herself in the matter, nor instruct her staff about how to resolve the issue. Instead, she stated that her goal was to help resolve a matter in which her attorneys had vast expertise. Despite the OIG’s expertise in this matter, we agree with the former Chief Counsel that it would have been inappropriate for the OIG to intervene by contacting the law firm to initiate discussions, particularly in the absence of CMS’s attorney. If the Inspector General wanted OCIG’s expertise to be offered to CMS, it would have made sense for OCIG to contact CMS’s attorney before proceeding. CMS’s attorney responsible for handling this matter told us that she would have been troubled if the OIG had commenced discussions without her agency’s participation. Given the Inspector General’s personal relationship with the medical societies’ attorney and the OIG’s lack of jurisdiction in the matter, her actions created the impression that she was more interested in helping a friend than offering advice to CMS, which called her independence into question. On February 20, 2001, the OIG sent its draft report on adjusted community rate proposals for Medicare+Choice organizations to CMS for comment. 
This report was of potentially significant interest to congressional committees, which were then considering the adequacy of payments in the Medicare+Choice program. While OIG guidelines generally provide up to 45 days for audited entities to comment on its draft reports, the publication of this report was delayed for 14 months while the OIG waited for comments from CMS. Ultimately CMS agreed with the OIG’s findings in written comments on April 16, 2002. Some employees alleged that the delay in issuing this report reflected a lack of independence on the Inspector General’s part. They suggested that the Inspector General should have taken a more active role in expediting the report’s issuance. They pointed out that the CMS Administrator initially disagreed with the draft report’s findings and hired a consultant to validate the OIG’s results. According to these employees, it took CMS more than a year to replicate the OIG’s work and determine that it agreed with the report’s findings. OIG employees told us that the Inspector General tolerated this situation because she was unwilling to issue a relatively controversial report without the benefit of CMS’s agreement. The delay in issuing this report diminished its usefulness because congressional committees were focused on other concerns by the time the report was finalized. The Inspector General stated that she was only vaguely familiar with this project but was certain that she did not direct her audit team to delay the report’s issuance. Although she recalled that the CMS Administrator initially disagreed with the report’s conclusions, she told us that she did not remember the specific time frames associated with it. Our evidence shows that the Inspector General’s staff tried to enlist her assistance in expediting CMS’s comments to no avail. By permitting CMS to delay the report’s publication, the Inspector General created the appearance among her staff of being unduly influenced by CMS. 
In our view, a time-sensitive report of congressional interest should have, at the very least, garnered more of the Inspector General’s attention. During the Inspector General’s tenure, staff turnover among the OIG senior headquarters staff has been considerable. Between September 2001 and November 2002, at least 20 OIG senior managers retired, resigned, or were reassigned. Ten of these were Senior Executive Service employees, most of whom had over 25 years of government service and had played an important leadership role at the OIG for many years. The others were GS-15 employees who were instrumental in carrying out specific office functions. The Inspector General’s representative characterized these changes as voluntary and beneficial to the overall mission of the office. The Inspector General told us that these changes were made to provide senior managers with new insights into agency operations and to capitalize on the fresh perspectives they could bring to their new jobs. However, we found that the sudden and unexplained nature of many of the Inspector General’s actions resulted in a widespread perception of unfairness among her staff. In addition, the promotion of one of the Inspector General’s close advisors to the position of Director of Public and Congressional Affairs raises a legal concern. We found the circumstances surrounding the departures of eight senior OIG managers to be particularly troubling. Four of these eight managers who left the OIG or were detailed elsewhere were members of the Senior Executive Service. One of the four took an early retirement after the Inspector General proposed that the department assign him to a position outside of his local commuting area with the assumption that he would retire instead. Another retired after most of his responsibilities were reassigned to another official or eliminated. 
A third resigned about 6 weeks after the Inspector General reassigned his job responsibilities and directed that he not report to his office and instead spend his time seeking new employment. Finally, one manager was detailed to a temporary position within HHS and was also instructed not to return to his OIG office. He is currently seeking new employment. These four individuals told us that the Inspector General had not informed them of specific deficiencies in their performance, given them any opportunity to improve their performance, worked with them to find a mutually satisfactory resolution to her concerns, or provided an adequate rationale for her decisions to remove them from their positions. Moreover, three of these managers told us that they were shocked with the urgency she displayed when asking them to leave the OIG, and two perceived that a single event ultimately led to the Inspector General’s decision to remove them. For example, in one instance, a senior manager linked his removal to an incident in which a problem had to be resolved in the Inspector General’s absence. Although he successfully contacted her and proposed a solution, she did not wish to address the matter until her return to the office. He delayed taking action, as she directed. However, according to this official, when the Inspector General returned, she was angry and suggested that he had tried to pressure her into accepting his proposed solution, essentially excluding her from the decision-making process. Describing their departures from the OIG, these four individuals told us that they felt they had no alternative but to leave their positions. Other OIG staff also told us that these four changes—all of which were initiated by the Inspector General—were involuntary. The other four individuals whose departures were particularly troubling were GS-15 level managers from OMP, OCIG, OI, and the Inspector General’s Immediate Office. 
One manager resigned after being reassigned twice within 9 months. According to several OIG employees, the purpose of this manager’s second reassignment was to accommodate the Inspector General’s preference that this manager no longer work in the OIG headquarters building. The Inspector General gave no explanation why she wanted this individual to work in a remote location. A second was reassigned to an interagency task force for an indefinite period after his position was abolished. The Inspector General reportedly no longer wanted him in the OIG headquarters building. The third individual was temporarily reassigned to a position at another HHS agency and subsequently resigned. He told us that his duties were curtailed following a briefing of congressional staff in which he voiced an official OIG opinion that conflicted with that of CMS. The fourth individual retired after being reassigned from the Inspector General’s Immediate Office to another component. Some staff members perceived that the reassignment of this individual resulted, in part, from her requesting—without the Inspector General’s knowledge—a gun safe to properly store a firearm that the Inspector General had recently acquired. Like the reassignments at the senior executive level, the Inspector General initiated these changes. Some of the employees we interviewed were skeptical that these changes were necessary and asserted that they actually damaged the organization’s effectiveness. Specifically, they were concerned with the sheer number of personnel moves made in a relatively brief period of time and that their new component heads lacked experience in the areas that they were going to lead. They also expressed concerns about the Inspector General’s motivations because they felt that the changes generally had not been adequately explained to the employees involved. The abruptness of these changes and the lack of any overall explanation for them heightened employees’ mistrust. 
Although some employees were supportive of the Inspector General’s organizational changes or felt unaffected by her actions, comments made during our interviews and in our employee survey highlighted the frustration many employees—especially at headquarters—felt due to the perception of unfairness associated with these personnel changes. We found that the magnitude and abruptness of the Inspector General’s actions raised fear and anxiety among her staff. We asked the Inspector General about each of the individuals to obtain her rationale in making these personnel decisions. The Inspector General told us that she was concerned about the individuals’ privacy and that she was uncomfortable discussing the circumstances involving these managers with us. Finally, we identified one matter giving rise to a legal concern. We obtained information suggesting that a member of the OIG’s staff may have been preselected for a GS-15 position as the Director of Public and Congressional Affairs. Specifically, as explained below, e-mail communication by one of the Inspector General’s closest advisors implies that a decision had been made to promote this employee to the GS-15 level prior to the initiation of a competitive selection process. Citing the individual’s outstanding performance as a GS-14 in the same office, the Inspector General had directed the employee’s supervisor to promote her to a GS-15 at the earliest opportunity. Shortly thereafter, an advisor to the Inspector General contacted the individual’s supervisor and emphasized that the Inspector General believed that it was important for the individual to have a GS-15 in her current position. The advisor urged him to initiate the promotion process so that the GS-15 would be effective on the date of her eligibility for promotion, or soon thereafter. 
The advisor further explained that the Inspector General had made a commitment when the individual agreed to take the GS-14 position that she would be promoted to a GS-15 one year later. In addition, the OIG included a “selective placement factor” in the GS-15 position description, reportedly to favor the employee. OIG staff told us that, although the GS-15 position was advertised both inside and outside of the agency, there was a widespread perception that the selection had already been made. This perception may account for the fact that there was only one applicant for the position. While the information we obtained raises concern about a possible preselection, we have not conducted the type of formal, factual inquiry that would ultimately be necessary to determine whether the Inspector General’s actions were unlawful. We identified several matters that raised concerns about the adequacy of the Inspector General’s leadership. Some employees questioned the Inspector General’s judgment in regard to her possession of a firearm in the office, as well as law enforcement credentials. Others raised concerns about the manner in which she conducted her business travel. In addition, several employees interpreted some of the Inspector General’s actions as demonstrating a lack of interest in key office operations. In the fall of 2002, the Integrity Committee of the President’s Council on Integrity and Efficiency (PCIE) received an allegation that the Inspector General had improperly requested and obtained a firearm from her Deputy Inspector General for Investigations. Subsequently, the Integrity Committee received a second allegation that the Inspector General had improperly obtained supervisory special agent law enforcement credentials. After consulting with DOJ officials, who declined to pursue these allegations, the Integrity Committee proceeded with its investigation. The PCIE forwarded its report to the Deputy Secretary of HHS on April 4, 2003. 
The PCIE found that the Inspector General had obtained a firearm from an OIG special agent and maintained it in her Washington, D.C. office for a short period of time. An OIG Memorandum of Understanding (MOU) with DOJ and the Federal Bureau of Investigation set forth a process for deputizing OIG special agents to allow them to carry firearms, make arrests, and execute warrants when carrying out their law enforcement functions. However, the PCIE found that the Inspector General had not met the job classification and training requirements outlined in the MOU and had not been deputized. In an interview with PCIE investigators, the Inspector General stated that she believed that inspectors general were statutorily authorized to possess firearms and that she had not reviewed the MOU for deputation of OIG special agents. In regard to the second allegation, the PCIE found that the Deputy Inspector General for Investigations obtained supervisory special agent credentials for the Inspector General because she did not want the Inspector General to have any difficulty gaining access to secured areas in the event of a terrorist incident. The Inspector General told PCIE investigators that other inspectors general did not seem to know how to handle the issue of access to secured areas in the event of a terrorist attack, but she had never asked them if they had law enforcement credentials. She also told investigators that she had the credentials in her possession for a short time, and returned them to her Deputy for Investigations to store in a safe. (Before the PCIE investigated this issue, concerns about the ease with which OIG credentials could be obtained came to our attention. We examined the internal controls for the credentialing system and identified several weaknesses, which are described in appendix II. OIG officials have since told us that they have taken steps to correct these weaknesses.) 
The PCIE report identified several criminal statutes as relevant to the allegations, including provisions of federal and District of Columbia law concerning the possession of firearms, which are applicable to those working in federal buildings. At the conclusion of the investigation, DOJ officials advised the PCIE that it declined to prosecute the Inspector General for any possible violations of criminal statutes regarding the possession of a firearm or law enforcement credentials. In addition, in the letter to the Deputy Secretary of HHS accompanying its report, the PCIE advised that the Inspector General’s resignation mooted the need to take any administrative actions against her. It also expressed deep concern about the actions of some OIG employees who facilitated the Inspector General’s acquisition of these items. Another issue that persistently surfaced during our review concerned perceptions of the propriety of the Inspector General’s business travel. As the head of a large organization with offices nationwide, the Inspector General is entitled—and expected—to periodically visit these offices to provide oversight, guidance, and support to her staff. In addition, the Inspector General may engage in other business-related travel, such as attending conferences and meeting with provider organizations and other external groups. Inspectors general—like other government employees—are not prohibited from planning personal travel in conjunction with their business trips. However, we spoke with current and former inspectors general from other federal agencies, and they told us that they generally refrain from including personal travel with their business trips for fear of raising suspicion about their motivation or integrity. 
While no one alleged that the Inspector General violated travel regulations, some current and former officials questioned her motivation for planning certain trips that included a personal element, such as sightseeing activities—sometimes with two senior OIG managers. To better understand the purpose of the Inspector General’s travel, we examined all of the documentation related to her trips, including travel orders, vouchers, and detailed itineraries prepared by her office. We found that during the first 4 months of the Inspector General’s tenure she took four trips outside of the Washington, D.C., area. None of these trips included a personal element or any companions. However, over the next 12 months, the Inspector General traveled eight more times and included personal activities on half of these trips. In addition, she invited one or two senior managers to accompany her on six of these eight trips. Three of the Inspector General’s trips in particular raised concerns, arising from a perception that this travel was motivated by considerations other than official duties. In some of these cases, large blocks of time could not always be accounted for. For example, the Inspector General took one trip to San Francisco and Phoenix that spanned 8 days and included 2 days of personal time on a weekend. In examining the business portion of this trip, we were only able to determine that the Inspector General made two half-hour speeches and traveled between these cities and Washington, D.C. Further, in some cases, personal activities—sometimes involving the participation of the two senior managers—were included. While we did not validate the managers’ activities on these trips beyond their own assertions, we believe that it is appropriate for the Inspector General to ask managers to accompany her as needed on business-related travel. 
However, including her colleagues in her personal activities during travel contributed to a perception that the business reasons for these trips were pretexts and that the trips were planned solely for nonbusiness purposes. In responding to our inquiries regarding her travel, the Inspector General indicated that all of her trips were made for legitimate business purposes. She also told us that she was not concerned with any perceptions OIG employees may have had about her travel. Finally, in a written response to our inquiry regarding approximately 3 days of unaccounted time during her San Francisco and Phoenix trip, she indicated that she spent her time performing office work and preparing for one of her two speeches. She offered no other elaboration on her business activity. During our study, the Deputy Inspectors General were grappling with a major budgetary shortfall due to aggressive hiring in fiscal year 2002, lower-than-expected attrition throughout the OIG, and uncertain funding levels for fiscal year 2003. Senior OIG officials told us that they were concerned that, without a quick solution, they might ultimately violate the Antideficiency Act. In February 2003, the Deputy Inspectors General were developing various proposals to respond to their forecasted budget shortfall. The deputies had severely limited travel, training, and other human resource activities in their components. In addition, they were reallocating staff positions to accommodate the budget—regardless of where the positions were actually needed. Positions that became vacant through attrition were transferred to the overstaffed components. By gaining the vacant positions, the overstaffed components were able to reduce the number of staff considered to be in excess in their units. Some of the deputies expressed strong resentment about the chaos this situation caused within their components.
For example, a relatively small component that lost a key member of one of its functional teams could not replace that individual, and instead had to continue to meet mission goals with one fewer supervisor. Other component heads explained that the lack of funds to perform routine duties in the field affected morale and could impair long-term productivity. This situation could have been avoided if OIG leadership had developed a human resource hiring and development plan that contained realistic budget projections and hiring goals that all deputies would have to follow. Historically, the Inspector General’s Principal Deputy was responsible for ensuring that component heads worked together to carry out such a plan, but the Principal Deputy position had been vacant for months. As a result, component heads we spoke with felt that they did not have the authority to fill the leadership void that developed in this instance, and relied on the Inspector General to impose whatever fiscal constraints were necessary to establish an equitable budget allocation among the components. While the Inspector General expressed concern about funding issues, she did not take aggressive steps to remedy the situation. Although the deputies ultimately resolved their financial situation, at the time of her resignation, the component heads were still struggling among themselves with these budgetary challenges. The OIG conducts a variety of activities that aim to improve program operations, identify and recover overpayments, and investigate and sanction those who violate statutes and regulations governing HHS programs. The effect of the Inspector General’s recent actions on productivity is difficult to assess in the short term. In addition to the decisions she made and the personnel moves she initiated, a variety of other factors contribute to productivity.
Two factors make it impossible to reach an overall conclusion about OIG productivity for any limited period of time. First, fluctuations in performance are to be expected in any given year because of the multitude of the OIG’s activities. Second, it is difficult to compare performance from one year to the next because the results in one period are heavily dependent on work in the pipeline that was initiated in prior years. For example, it could take 2 or 3 years from the time a project is initiated until a recommendation is made and subsequently implemented; investigating potential criminal activity and prosecuting the individuals involved could take even longer. Many of the OIG’s productivity measures remained comparable to prior years or showed increases, but we found that several other key indicators of performance have declined since the Inspector General took office. We analyzed a wide variety of performance measures to evaluate the OIG’s effectiveness and found that many of these measures indicated that the OIG may be performing well, as table 1 shows. For example, in its semiannual reports covering fiscal year 2002, the OIG identified almost $22 billion in savings attributable to its work. The OIG has consistently reported increases in these savings since fiscal year 1997. In addition, the number of OAS reports published has increased each year since fiscal year 2000. Also, the number of convictions resulting from the OIG’s investigative referrals has steadily increased over the last 6 years. OI officials, who told us that the number of convictions is an important measure of their success, also said that they appear to be on target to achieve even more convictions in fiscal year 2003. At the midpoint of the current fiscal year—March 31, 2003—the OIG reported 320 convictions.
Although it is difficult to measure the “sentinel” effect of some of the OIG’s activities, it has taken steps to encourage lawful and ethical conduct by the health care industry, which we believe should be acknowledged. For example, in recent years the OIG has actively worked with the private sector to develop compliance guidance to prevent the submission of improper claims and to discourage inappropriate conduct by providers. In March 2003, the OIG issued compliance guidance for ambulance suppliers. This was followed by the publication of compliance guidance for pharmaceutical manufacturers in April 2003. Like convictions, the number of providers excluded from the Medicare program is a strong indicator of OI effectiveness. Although the number of exclusions imposed declined in fiscal year 2002, reversing a trend of increases since fiscal year 1999, we were unable to determine whether this decline reflects diminishing productivity. The OIG Chief Counsel explained that, in 2002, the Department of Education became responsible for processing most of the exclusions of health care providers who had defaulted on the repayment of their federally funded student loans. The Chief Counsel told us that in 2001, when the OIG still had this responsibility, it excluded 518 providers who had defaulted on these loans. In 2002—the transition year—the number of such providers excluded by the OIG dropped to 166. Table 2 shows the OIG’s exclusions imposed since fiscal year 1997. We found declines in the use of sanctions available to the OIG. For example, we noted reductions in the number of settlements and recovery amounts that result from the OIG’s False Claims Act referrals to DOJ. Similarly, there were declines in the number of CMPs and CIAs recently imposed. Table 3 shows that both the number of settlements and amount of recoveries declined significantly in fiscal year 2002, compared to fiscal years 2000 and 2001. 
OIG officials told us that the OIG’s False Claims Act cases are strongly tied to DOJ’s efforts to combat health care fraud, which have had to compete for investigative resources with work related to the September 11, 2001, terrorist attacks. In addition, DOJ has reduced the number of its national health care antifraud initiatives in recent years as well as the number of individual cases that it pursues under the auspices of each initiative. OIG officials also attribute this decline to the OIG’s increasing emphasis on program compliance, which the OIG believes has had a sentinel effect on providers. Although the number of False Claims Act settlements and the amount of recoveries have declined, DOJ officials and the Medicaid Fraud Control Unit representatives we spoke to told us that they were pleased with the quality of the support they received from the OIG in pursuing abusive or fraudulent providers. However, several of these officials were concerned that the OIG could not devote more resources to assist them in their investigations. Another important indicator of OIG productivity is the imposition of CMPs. As shown in table 4, the number of these cases has declined markedly since fiscal year 2000. OIG officials offered two explanations for the declining number of CMPs imposed. First, they told us that the increase in convictions may account for the decline in CMPs, which are typically imposed when more stringent penalties cannot be used. Because convictions have recently increased, there would be fewer opportunities to impose CMPs. Second, officials suggested that the office’s previous aggressiveness in pursuing patient dumping cases—which generally made up between 65 and 90 percent of all CMPs imposed each year—has been a strong deterrent. The officials also emphasized that patient dumping cases have proven to be resource-intensive. As a result, the OIG can only afford to pursue the most egregious cases.
CIAs, typically negotiated in conjunction with False Claims Act settlements, are also an indicator of the OIG’s productivity. CIAs consist of “integrity provisions” that are intended to ensure that a provider’s future transactions with Medicare and other federal health care programs are proper and valid. Such provisions include implementing an OIG-approved compliance program, using an independent review organization to review provider billings annually, and meeting other periodic monitoring and reporting requirements. Providers accept the imposition of the CIAs and, in turn, OCIG agrees not to seek additional administrative sanctions. As table 5 shows, the number of active CIAs, as well as the number of newly negotiated CIAs, has declined since 2001. OCIG officials attributed the most recent decline to several factors. First, the number of civil False Claims Act settlements declined between 2001 and 2002, resulting in fewer providers with whom to negotiate CIAs. Second, in fiscal year 2002, OCIG began implementing the Inspector General’s November 20, 2001, “Open Letter to Health Care Providers” regarding CIAs. CIAs had long been a concern of providers because of the costs associated with implementing the specified integrity provisions—such as retaining an independent review organization each year to review a statistically valid sample of billings. The November open letter announced that the OIG’s policies and practices regarding CIAs were being modified in response to those concerns. The letter noted, in part, that the OIG would no longer seek to negotiate CIAs with every provider settling a False Claims Act case with the government. In some situations, corporate compliance matters would be negotiated separately, after settlement of the False Claims Act case. The letter also indicated that the OIG would consider increasing its reliance on providers’ internal audit capabilities. For example, some providers might not be required to retain an independent review organization.
Similarly, not all billing reviews would be subject to statistically valid random sampling. Instead, these providers would be able to self-certify compliance based on the error rate indicated by reviewing an initial sample of their billings. Further, the new approach to CIAs could also be applied to previously negotiated CIAs. As a result, in fiscal year 2002, OCIG renegotiated 94 existing CIAs associated with False Claims Act settlements. The revised CIAs contained “certification agreements,” permitting providers to self-certify their compliance with the specific provisions contained in their agreements, instead of retaining an external review organization for this verification. We also found that there has been a considerable drop in the testimonies, outreach, and education activities performed by OIG employees. Prior to the current Inspector General’s tenure, the OIG frequently provided assistance to congressional staff developing legislative proposals related to HHS programs, offered informal advice about program oversight, and testified at congressional hearings. In addition, OIG employees routinely presented the results of their work at conferences, meetings, and other educational forums. However, as shown in table 6, the number of testimonies, speeches, and other presentations by OIG employees declined significantly during the last fiscal year—especially among OCIG employees. We spoke with several congressional staff working for committees with jurisdiction over HHS programs who told us that they were not satisfied with the level of support they were currently receiving from the OIG. While formal requests for assistance were fulfilled, congressional staff indicated that OIG employees no longer discussed issues with them informally, as they had in the past.
In our interviews, primarily at headquarters, several OIG employees recognized that they were no longer providing what congressional staff members considered to be a valuable service and what they considered to be a meaningful part of their work. OIG officials emphasized that their responsiveness to Congress is still an extremely high priority. They explained that the Inspector General instituted a more centralized approach to providing assistance to congressional staff and other external groups than her predecessors had, in an attempt to ensure the quality and appropriateness of the assistance provided. In response to the declining number of testimonies, OIG senior officials told us that they are very willing to appear at congressional hearings when they have relevant material to present. However, they explained that the Inspector General does not consider the number of testimonies to be a relevant performance measure. In regard to speeches and other presentations, the decline was partly due to a policy change in the spring of 2002 that moved approval authority for these activities from the individual component heads to the Director of Public and Congressional Affairs. A lack of travel funds for collateral activities in the first half of the fiscal year also limited OIG staff participation in discretionary events. According to this Director, because she could not approve all of the requests, she considered the nature and size of the audience, in addition to the cost of the trip, in deciding whether approval would be granted. A number of employees of OEI told us that they have been frustrated with the cancellation of projects since the Inspector General took office. According to these individuals, many projects were well under way at the time of their termination.
Although OEI managers could not tell us how many projects have been canceled during the current Inspector General’s tenure, they could tell us how many of the OEI projects begun in fiscal years 2000, 2001, and 2002 were subsequently canceled. As table 7 shows, 27 reports, or about 26 percent of reports started in 2002, were canceled by the end of February 2003. According to OEI management, although some projects have been canceled, the work performed on these projects has been used by OEI teams involved in related OEI projects. We followed up on several projects that recently had been canceled to better understand management’s rationale for doing so. Staff members brought these projects to our attention during the course of our work. In one instance, a project was canceled 7 months after the team had conducted the exit conference with the agency. More than 4,000 staff hours had been expended on this project, which was staffed by three full-time employees, one part-time employee, and a paid intern. The Deputy Inspector General ultimately told the team that the report lacked sufficient evidence and would not be presented to the Inspector General for signature. Although the team subsequently prepared two memoranda as substitutes for the report, no product was ever issued—despite interest from the provider community and the relevant agency. We have learned that OEI projects continue to be canceled. For example, in March 2003 the Inspector General took the unusual step of recalling a draft report, which had been sent to the relevant agency for comment in February 2003. Both the Deputy Inspector General for OEI and the Inspector General approved this draft. Also in March 2003, a related project, which had begun in fiscal year 2002, was canceled as the OEI team prepared for an exit conference with the agency it had evaluated. OEI management decided to combine the results of both projects into a single report.
Although the OEI staff involved with these projects contend that they briefed management several times over the course of these assignments, the Deputy Inspector General for OEI explained that he made this decision once he realized there were inconsistencies between the two projects that needed to be reconciled. As of late April 2003, no report had been published. In conversations with the Inspector General and the Deputy for OEI, we learned that they had been particularly concerned with the appropriateness of criteria used by OEI staff in evaluations. They told us that they were uncomfortable with the policy-oriented work that OEI had done and were taking actions in the pipeline of OEI reports to address what they viewed as shortcomings in the accuracy and sufficiency of evidence in OEI products. The Deputy for OEI also explained that they were providing training to all OEI staff on evidence standards with the hope of improving the quality of future projects. OEI managers and staff that we spoke to expressed surprise and frustration at these concerns and pointed out that in the past, OEI had been recognized and praised by Congress, the public, and the press for its high-quality evaluation work. Based on our survey and extensive interviews, we found in the aggregate that employee views about the organization, management, and their personal job satisfaction remained positive and relatively unchanged between 2002 and 2003. However, we identified several groups of employees whose morale was of concern, namely, employees working at headquarters, those at the highest levels of management, and staff working in two OIG components. Our analysis of open-ended survey comments also revealed areas of dissatisfaction that were not fully captured by other items on our survey. Our survey and interviews found, in the aggregate, a high level of satisfaction among OIG employees. 
Overall, positive responses to survey items in both 2002 and 2003 averaged over 80 percent, and no item responses changed more than 5 percentage points between the 2 years. Positive responses were especially prevalent both years for statements such as “All things considered, my component is a good place to work” (89 percent and 87 percent, respectively) and “I believe that my work is important to the success of the component” (94 percent and 93 percent, respectively). Similarly, our interviews revealed an overall high level of job satisfaction, typified by comments such as “I believe my work makes a difference.” Staff repeatedly cited their close relationships with their immediate work groups and their involvement in important issues as reasons for their job satisfaction. We also identified some examples of improvement. For instance, in both the survey and interviews, OI employees indicated there had been an increase in communication with upper management in their component over the last year. We found that positive responses to most survey items were lower for headquarters employees than for field staff. For example, we found that there was a marked difference in positive responses—10 percentage points—to the statement that “Everyone is treated with respect.” We also found a 14 percentage point difference in positive responses to the statement, “I have confidence and trust in my organization.” This pattern of more positive responses from the field was consistent with statements made during our interviews. Whereas many headquarters staff expressed concern about the Inspector General’s actions, most field employees told us that they felt insulated from, and largely unaffected by, the personnel and other changes that occurred in headquarters. In addition, our survey indicated that senior management staff—specifically members of the Senior Executive Service and GS-15 employees—were considerably more concerned than all other employees about OIG leadership.
While 88 percent of employees at the GS-14 and lower levels agreed with the statement, “As an organization, the OIG has clear goals,” only 67 percent of the senior management staff—those at the GS-15 level and members of the Senior Executive Service—responded positively to that statement. Further, about 70 percent of the employees at the GS-14 level and lower levels indicated that they had confidence and trust in the organization. On the other hand, only 56 percent of senior managers agreed with that statement. In our interviews, some senior management staff were extremely clear about, and supportive of, the Inspector General’s goals, but others expressed confusion about the Inspector General’s priorities for their components. Many in senior management were disquieted by the decisions that resulted in some of their colleagues retiring, resigning, or being reassigned during 2002. These managers explained that they were uncomfortable because they did not fully understand the motivations behind the Inspector General’s actions. Our survey revealed a substantial deterioration in OEI employees’ views of the organization, management, and their personal job satisfaction. For example, a statement focusing on whether “upper management clearly communicates the goals of my component,” elicited an almost 50 percentage point drop in positive responses between January 2002 and February 2003 (compared to a 1 percentage point decrease in the aggregate). Similarly, there was a 34 percentage point drop in positive responses to the statement about being “fully informed about major issues affecting my job” (compared to a 5 percentage point drop overall). Finally, about 62 percent of OEI employees indicated a lack of trust and confidence in their organization (compared to 30 percent overall). The decline in the overall climate in OEI can be linked to a number of changes that profoundly affected the staff in that component. 
OEI staff told us that they were negatively affected by the abrupt departure of the Deputy Inspector General, decreased communications from headquarters management, changes and delays in the report review process, canceled projects, and a narrowing of the scope of their work. In addition, OEI staff explained that they have been disappointed by a decrease in the number of their assignments that result in what are considered to be “high-profile” products—those signed by the Inspector General, those issued as standard blue-cover reports, and those placed on the OIG’s Web site. Our employee survey also identified a distinct decline in positive responses to survey items among OCIG employees—almost all of whom work in headquarters. Of particular concern were answers to survey statements addressing the adequacy of communication and job satisfaction. For example, compared with 2002 survey results, there was a 22 percentage point drop in positive responses to the statement about being kept fully informed about major job issues. OCIG employees also reported a 16 percentage point drop in positive responses to the item “I am satisfied with my job” and a 12 percentage point drop in their opinion that “everyone is treated with respect,” compared with last year’s survey. Our results also showed that 54 percent of OCIG employees lack trust and confidence in their organization. The decline in the views of OCIG staff can, in part, be attributed to changes implemented by the Inspector General and the atmosphere of anxiety and distrust that her actions created. OCIG employees expressed concern about the circumstances under which the former Chief Counsel and other senior managers left the OIG. In addition, we were told that the curtailment of education and outreach activities and contact with congressional committee staff had an adverse effect on OCIG employee morale.
Finally, we analyzed the written comments that some employees opted to write in the comment box provided on our survey. In total, 578 of the 1,451 survey respondents (40 percent) elected to write comments, which allowed them to express opinions about issues that were not covered in detail in our other survey items. Our analysis of these comments showed that the majority were negative in tone (75 percent). Overall, the most frequently mentioned categories were: morale (82 percent negative), recent changes in headquarters management (61 percent negative), sufficiency of training or equipment (85 percent negative), and quality of headquarters management (80 percent negative). The demographic characteristics of those who wrote comments were generally similar to the overall sample of respondents, although those planning to leave the OIG in the next 5 years and OEI staff were more likely to provide comments than other survey respondents. We met with officials from the OIG and the Office of the HHS Secretary and briefed them on our findings. We also provided them with a copy of our draft report. In written comments on a draft of this report, the Inspector General disagreed with some of our findings and characterizations of certain events. The Office of the Secretary did not provide comments. In reference to our discussion about the OIG’s productivity, the Inspector General stated that the OIG had achieved substantial accomplishments under her leadership and direction and cited the savings attributable to its work in fiscal year 2002. In addition, she highlighted some of the OIG’s nonmonetary achievements during her tenure. As we noted in our draft report, many of the OIG’s productivity measures have remained steady or improved, including those cited in the Inspector General’s letter. 
However, we also pointed out that making a conclusive determination regarding productivity in the short term is extremely difficult because current savings are often the result of efforts started in prior years. Our draft also identified declines in other important areas, such as settlements and recoveries. In addressing our findings related to employee morale, the Inspector General pointed out that our survey of OIG employees showed that employee morale remained positive and relatively unchanged during her tenure. However, our survey also identified several groups of employees whose morale was of concern. For example, senior managers were considerably more disturbed than all other employees about OIG leadership. Further, headquarters employees expressed less satisfaction with the organization and leadership than their counterparts in the field. While the majority of OIG staff are located in field offices and generally were more satisfied with their work environment than headquarters employees, they also felt less affected by the changes instituted by the Inspector General than their colleagues in headquarters. A striking exception to field office employee satisfaction, as discussed in our draft, was staff in OEI, whose dissatisfaction increased substantially compared to last year. The Inspector General also took issue with our discussion of the circumstances surrounding the delay in beginning the Florida pension audit. We included this example of her decision-making in our draft because we believe that it demonstrated a lack of awareness and appreciation of the need for the Inspector General to closely safeguard her independence. We believe it is imperative that an inspector general perform due diligence when responding to external requests—particularly where independence could be questioned. 
We continue to believe that the Inspector General’s decision to intervene at the request of senior officials in the Florida governor’s office and her subsequent instructions to her staff to delay the audit created a perception that her independence was compromised. The Inspector General did not address the issue of her independence in her comments. Instead, she disagreed with our suggestion that the OIG’s report could have been available prior to the November 2002 election, if the audit had begun 7 months earlier, in April 2002, as initially planned. While we cannot be certain that the final report would have been issued by the election, we believe that it is likely that the findings would have been made public—particularly since the actual findings of the audit were reported by the media in March 2003, 6 months after the work commenced. Regarding the York Hospital matter, the Inspector General stated that she discussed her concerns about the proposed settlement with her staff and that she believed that seeking a larger settlement was not fair or justifiable. However, during the course of our work, the Inspector General told us that she did not direct a settlement or involve herself in negotiations with the hospital. In any case, we believe that the Inspector General’s actions in response to a letter from several members of Congress contributed to the perception that she was not independent. The Inspector General stated in her comments that she discussed this matter with her attorneys and determined the OIG’s case was weak. However, the former Chief Counsel and other OCIG attorneys told us that when she instructed them to “get rid of” the case, she did not address the specific facts or sufficiency of the evidence collected in this matter. 
Further, the former OIG Chief Counsel did not share the Inspector General’s belief that this was a weak case, and told us that he believed the government could have obtained a higher settlement, absent any pressure to close the case quickly. Concerning the OIG’s delayed report on the adjusted community rate proposals, the Inspector General pointed out that the report was already delayed 7 months by the time she took office. While we acknowledge this fact, in our view, the already lengthy delay should have prompted her to take more aggressive action to either obtain CMS’s comments or publish the OIG’s report without them. Although the Inspector General stated that she relied on the advice of her senior staff in delaying the issuance of this report, our evidence indicates that some of her senior managers were very concerned that she took little action to expedite CMS’s comments. The Inspector General indicated that she spoke to the CMS administrator regarding this matter, but she did not indicate when this discussion occurred or how CMS responded. However, the Inspector General did not indicate—nor did we find any evidence to suggest—that she took more rigorous steps to obtain CMS’s comments, such as imposing a deadline for the publication of the report, regardless of the status of the comments. The Inspector General also stressed that the delay in publishing the OIG’s report had nothing to do with her independence. However, the fact that CMS strenuously objected to the OIG’s findings, and that CMS was allowed to delay its comments for over a year, in our view, at least contributed to the perception that the Inspector General was not independent. In addition, the Inspector General disputed our statement that this report was a time-sensitive one of congressional interest. We disagree. During the summer and fall of 2001, Medicare+Choice legislative proposals were developed in both the House and Senate.
Also, congressional hearings were held on the status of the Medicare+Choice program, which included the issue of adjusted community rate proposals. Regarding our assessment of personnel changes in the OIG, the Inspector General stated that her actions were appropriate and that the nature of the Senior Executive Service encourages rotations among staff. While we do not dispute the Inspector General’s authority to reassign staff to meet office needs, the manner in which she made these changes clearly created an atmosphere of anxiety in the OIG. The Inspector General stated that she explained the rationale for her decisions “over and over again.” However, our discussions with staff members revealed that they did not understand why many of the changes had been made. Moreover, most of the eight senior managers whose departures we found particularly troubling told us that the Inspector General never explained to them why she wanted them to leave their positions. The Inspector General also commented that our employee survey suggested that there were no widespread negative perceptions among staff concerning her personnel decisions. We disagree with this observation because our survey did not contain a question related to her personnel changes. Instead, our survey focused on employees’ satisfaction within their immediate work groups—most of which are in the field, where the consequences of the Inspector General’s changes were least felt. The Inspector General noted that most of the individuals who left the OIG following her changes were in new positions that were “at least equal to or better than” the ones they occupied at the OIG and that she always promoted from within the organization. We do not think that the current employment situations of these former staff members are relevant to the Inspector General’s personnel decisions, nor is her practice of promoting other employees from within the organization. In our draft report, we also discussed the OIG’s budgetary difficulties.
In her comments, the Inspector General described her efforts to respond to this situation, which primarily consisted of directing one of her senior managers—who was in an acting deputy position—to develop strategies for resolving the OIG’s financial problems and to work with other senior OIG managers to develop a spending plan. While we would fully expect that the Inspector General would want to call on her management team to confront the agency’s budgetary problems, our concern was that she personally played only a minor role in resolving this matter, particularly in the absence of a Principal Deputy. Given the Inspector General’s limited personal involvement, the OIG’s senior management team lacked a leader with sufficient authority to mediate any disagreements among them and to take aggressive steps to identify appropriate solutions to the organization’s fiscal challenges. Finally, the Inspector General’s comments pointed out that OI had taken steps to correct the deficiencies we noted in its credentialing system. We acknowledged that corrective action had been initiated, and this was reflected in our draft report. We have reprinted the Inspector General’s letter in appendix III. We are sending copies of this report to the Secretary of HHS, the HHS Acting Principal Deputy Inspector General, the former Inspector General, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7114. Additional GAO contacts and other staff members who made key contributions to this report are listed in appendix IV.
To conduct our review, we focused on three key areas—the leadership exhibited by the current Inspector General, Janet Rehnquist, the productivity of the Office of Inspector General (OIG) in recent years, and employee morale. To do our work, we became familiar with the organization and structure of the OIG and many of its policies and procedures related to its budgeting, work planning, and report processing activities. We also examined its personnel practices and controls over certain OIG operations. As part of our efforts, we interviewed over 200 current and former OIG employees—including the Inspector General—and conducted a Web-based survey of all employees to obtain their views about their work environment. We also interviewed two current inspectors general and one former inspector general from other federal agencies to better understand their unique role and the principles they embraced to manage their offices. Our review included the examination of more than 8,000 pages of documents, including material related to the OIG’s general policies and procedures, human resource management, productivity measures, and reporting standards. Many of these documents were given to us by OIG managers and other employees. In addition, we requested—and were given access to—the e-mail accounts of eight senior OIG managers. This enabled us to retrieve selected messages that these individuals sent or received for approximately a 6-month period on a wide variety of topics affecting the management of the office. We also obtained documentation from other organizations, including the President’s Council on Integrity and Efficiency (PCIE), which recently issued a report on some of the Inspector General’s actions. To obtain the views of OIG employees, we conducted a series of semistructured interviews. These interviews relied on open-ended questions regarding the Inspector General’s leadership, productivity, morale, and other OIG operations.
We interviewed three categories of employees—those who were selected randomly, those who volunteered for interviews, and those we selected because of their knowledge or position within the OIG. The randomly selected staff were chosen for interviews from five of the OIG’s eight regional offices as well as employees in OIG headquarters. This provided us with a broad geographic representation of OIG employees. Our regional interviews were conducted in Atlanta, Boston, Chicago, Dallas, and San Francisco. In order to afford confidentiality to interviewees, we conducted our regional interviews in GAO offices in those cities or in other non-OIG space. Some regional interviews were also conducted by telephone. Headquarters staff were given the option of being interviewed in either the OIG headquarters or GAO headquarters building. At each of the five regional offices we visited, we interviewed approximately 20 randomly selected employees who ranged from the GS-7 through the GS-15 levels. One hundred and six randomly selected regional staff members were interviewed in total. Interviewees were selected using a stratified, random sampling technique. Employees from the Office of Audit Services (OAS), the Office of Investigations (OI), and the Office of Evaluation and Inspections (OEI) were included in our random interviews at each regional location. We also interviewed 32 randomly selected staff from the OIG’s headquarters in Washington, D.C. and in nearby field offices, including those in Baltimore, Columbia, and Rockville, Maryland. To supplement our random interviews and to enhance identification of issues of concern to all OIG employees, regardless of their location, we invited all employees, through an OIG officewide e-mail, to contact us if they wished to participate in an interview. We received 28 requests for interviews and conducted many of these by telephone. We generally used the same set of questions that were posed during the random interviews. 
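The stratified random selection of interviewees described above can be illustrated with a short sketch. The roster, the component headcounts, and the per-stratum sample size below are invented for illustration only; they are not actual OIG staffing figures. The strata correspond to the three OIG components named in the report (OAS, OI, and OEI).

```python
import random

# Hypothetical roster of one regional office: (employee_id, component) pairs.
# Headcounts per component are illustrative, not actual OIG data.
roster = [("emp%03d" % i, office)
          for i, office in enumerate(["OAS"] * 40 + ["OI"] * 35 + ["OEI"] * 25)]

def stratified_sample(roster, per_stratum, seed=0):
    """Draw a simple random sample of fixed size from each stratum."""
    rng = random.Random(seed)
    by_office = {}
    for emp, office in roster:
        by_office.setdefault(office, []).append(emp)
    return {office: rng.sample(emps, per_stratum)
            for office, emps in by_office.items()}

# Seven interviewees per component yields roughly 20 per office location,
# in line with the counts the report describes.
sample = stratified_sample(roster, per_stratum=7)
```

This two-stage approach (group by stratum, then draw a simple random sample within each) guarantees that every component is represented at each location, which a single office-wide random draw would not.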
In both the random interviews and in discussions with those employees who requested to be interviewed, we asked individuals to bring to our attention any topic that they felt was noteworthy but which our questions did not address. Some interviewees provided us with supporting documentation that they felt was relevant. In some instances, interviewees were reluctant to provide us with documentary evidence and were also concerned about confidentiality. In these situations, we attempted to corroborate the information they shared with us through other means, without jeopardizing their confidentiality. As our work progressed, we identified a number of individuals who we believed would be able to supply us with important information in areas we had identified as potential concerns, including the independence of the Inspector General, turnover among senior OIG personnel, and changes in productivity and morale. In total, we interviewed 44 such individuals, many of whom were current or former OIG employees with first-hand knowledge about issues central to our review. To determine the extent to which policies and procedures were in place to ensure that all OIG employees maintained a high degree of independence, we reviewed existing OIG policies, procedures, and protocols. We also reviewed guidance issued to the Inspector General community by the PCIE and the Government Auditing Standards pertaining to independence. We also discussed the OIG’s protocols for responding to requests for information or assistance from external entities with selected current and former senior-level OIG officials. In addition, we obtained information regarding specific instances concerning the Inspector General’s independence from interviews with current and former OIG officials as well as the Inspector General.
To evaluate recent personnel changes among OIG officials, we examined detailed personnel information for 24 current or former OIG employees who had resigned, retired, been reassigned, or been promoted during the Inspector General’s tenure. We reviewed the official personnel files for these individuals and collected relevant information including their history of government service; time employed by the OIG; and any awards, bonuses, and letters of commendation that they had received. We also reviewed the performance appraisals these individuals had received for the prior 3 years. Finally, we reviewed documentation specifically concerning the promotion of an OIG staff member to the position of Director of Public and Congressional Affairs. Among other things, we examined relevant position descriptions, job announcements, and e-mail communications. We also interviewed OIG officials regarding this and other personnel decisions made during the Inspector General’s tenure. To understand the purpose, frequency, and duration of the Inspector General’s travel, we examined the itineraries, travel orders, and travel vouchers for all of the trips she had taken from August 2001 through November 2002. For trips for which the itineraries lacked sufficient information about the Inspector General’s business activities, we requested additional information and discussed these trips with the Inspector General. We also identified all OIG employees who accompanied her when she traveled. We obtained similar travel records for two senior staff members who accompanied the Inspector General on several occasions and discussed their roles during these trips with them. To determine whether the OIG has experienced any changes in productivity since the current Inspector General took office in August 2001, we reviewed OIG publications, such as its semiannual reports, to determine how savings, recommendations, and other performance indicators changed since fiscal year 2000.
From OAS and OEI, we collected data about the number of projects initiated, reports published, and reports canceled in fiscal year 2002. We compared these data to the number of reports that were initiated, published, and canceled in fiscal years 2000 and 2001—before the current Inspector General’s tenure. To measure productivity in OI and OCIG, we reviewed data on investigations, prosecutions, convictions, and exclusions from fiscal year 1997 through fiscal year 2002. We also examined relevant monetary accomplishments including the number and amounts of fines and penalties assessed, civil settlements and judgments, cost savings claimed, and recoveries and court-ordered restitutions. Our review included an examination of OCIG files pertaining to eight civil monetary penalty cases. We also judgmentally selected 18 corporate integrity agreements instituted since fiscal year 2000 to determine the extent to which new policies outlined in the Inspector General’s November 20, 2001, open letter to providers had been implemented. In addition, we discussed the OIG’s productivity with some of its partners in the law enforcement community to determine whether there have been recent changes in the level of OI’s or OCIG’s support. Specifically, we spoke to officials from the Department of Justice and seven of its U.S. Attorneys’ Offices. We also discussed this matter with officials from Medicaid Fraud Control Units in California, Florida, Illinois, and New York and a representative from the National Association of Medicaid Fraud Control Units. Finally, we assessed the OIG’s productivity in terms of its outreach and education activities. To do this, we collected information regarding the number of speeches, presentations, and testimonies given by various OIG employees. We also discussed this matter with OIG employees and professional staff members at several congressional committees with jurisdiction over Medicare and other federal health programs.
To elicit broad-based views of OIG employees on morale and other issues, we conducted a Web-based survey. We solicited OIG employee participation by e-mail, using an e-mail list provided by the OIG. We first sent a notification e-mail alerting employees to the upcoming survey, which also allowed us to identify inaccurate e-mail addresses. We verified with the OIG that the individuals whose e-mails were returned as “not deliverable” were no longer active OIG employees. We then sent an activation e-mail to each employee, containing a unique user name, password, and instructions for accessing the survey on the GAO Web site. We sent three follow-up reminder e-mails to nonrespondents. Employees were given 1 month to complete the survey. Of the 1,621 employees on our list, 1,451 completed the survey for a response rate of 90 percent. The survey contained 29 items asking employees for their views on the organization, management, and their personal job satisfaction. The four possible responses were: strongly agree, somewhat agree, somewhat disagree, and strongly disagree. The first 26 items on the survey were identical to those from an employee survey conducted by the OIG in January 2002, which we used as a basis for comparing our survey results. We included three additional items: “Overall, the OIG is improving as a place to work and make a difference,” “I have confidence and trust in my organization,” and “In the last 15 months, morale in my work group has improved.” We also included seven demographic items and provided an open-ended comment box. We included a final item for the respondent to mark the survey as “Completed,” which, if checked, indicated that the respondent gave us permission to include his or her responses in our analyses. In total, 578 of the 1,451 survey respondents (40 percent) elected to write open-ended comments. We coded 573 of the comments for tone (positive, negative, neutral) and content.
To code content, we used 36 categories related to morale, productivity, management, personnel issues, independence, propriety, and other topics. The comments of three respondents were not coded because they did not fit into any of our coding categories. The comments of two additional respondents were not coded because they did not mark their surveys as “Completed.” The unit of analysis was the comment—not the respondent. For example, if one respondent made several comments that fell into different categories, each comment was coded separately. In response to allegations that certain employees, including the Inspector General, possessed improper credentials, we evaluated the security of the OIG’s credentialing system. OIG employees are issued credentials that display their photographs, signatures, job titles, and, in the case of OI investigators, their status as law enforcement officers. Because adequate internal controls are key to preventing mismanagement and operational problems, our evaluation centered on the controls governing this computer-based system, physically located in the OIG headquarters building. In addition, recent advances in information technology have heightened the importance of ensuring that controls over electronically stored information are frequently reviewed and updated to minimize the threat of improper use. Changes in information technology led to revisions in Standards for Internal Control in the Federal Government, which became effective at the beginning of fiscal year 2000, to reflect new guidance for modern computer systems. Our work revealed serious weaknesses in the internal controls governing the OIG’s credentialing system. The physical security of the computer system used to produce credentials was inadequate. The system was housed in a public file room with unrestricted access. Because the room also contained a copier machine, many individuals routinely entered the area.
The system’s backup tapes were located in an unlocked drawer in the credentialing system desk. We also found the stock paper containing the agency’s insignia, used in the production of all credentials, stored unlocked in a cabinet in the same room. In addition, we found deficiencies in the system itself, making it even more vulnerable to misuse. For example, we found that neither the computer’s screen saver nor the credentialing software programs on the computer were password protected, and the employee photo and signature files were not adequately protected. The system also did not have the capability to create a history log or audit trail to identify past users. Given the system’s unsecured location, we determined that the system itself was easily susceptible to unauthorized access through the use of several techniques, such as a device that could identify recent keystrokes to capture the names of recent users and their passwords. When we visited the credentialing room, we found it empty, the computer on, and the screensaver active. By touching the computer’s mouse we were able to cancel the screensaver and observed an open record on display. We found that we could access, copy, modify, and delete sensitive files including employee photos, digital signatures, and personnel information with little likelihood of detection or system recovery. It would also have been possible to create a false, unauthorized set of credentials. OIG officials have since told us that they have taken steps to correct these weaknesses. Major contributors to this report were Enchelle D. Bolden, Helen Desaulniers, Curtis Groves, Shirin Hormozi, Behn Kelly, Terry Richardson, Christi Turner, and Anne Welch.

Janet Rehnquist became the Inspector General of the Department of Health and Human Services (HHS) in August 2001.
GAO was asked to conduct a review of the Inspector General's organization and assess her leadership, independence, and judgment in carrying out the mission of the Office of Inspector General (OIG). GAO examined indicators of the OIG's productivity and compared them to the organization's past performance. GAO also determined whether employee morale has been sustained by surveying all OIG employees and comparing the results to those obtained through an identical survey administered in 2002. On March 4, 2003, the Inspector General resigned her office effective June 1, 2003. However, in this report we refer to Ms. Rehnquist as the Inspector General. The credibility of inspectors general is largely premised on their ability to act objectively and impartially--both in substance and in perception. Some of the HHS Inspector General's actions--including her decision to delay a politically sensitive audit--created the perception that she lacked appropriate independence in certain situations. The Inspector General exhibited serious lapses in judgment that further troubled many OIG staff. For example, she inappropriately obtained a firearm that she briefly possessed at her workplace and OIG credentials that identified her as a law enforcement officer. The Inspector General also initiated a variety of personnel changes in a manner that resulted in the resignation or retirement of a significant portion of senior management, disillusioned a number of higher level OIG officials and other employees, and fostered an atmosphere of anxiety and distrust. Ultimately, the collective effect of these actions compromised her ability to serve as an effective leader of HHS's Office of Inspector General. Examining productivity trends is difficult because the work of the OIG often involves multiyear efforts and the results recorded for a single year are heavily dependent on work initiated in prior years. 
Similarly, savings achieved in any one year can be attributable to the culmination of efforts made over several years. Given these constraints, GAO noted that productivity at the OIG over the last 3 years increased in some areas and declined in others. Overall savings attributable to the OIG's efforts--as reported in its semiannual reports to the Congress--increased from $15.6 billion in fiscal year 2000 to $21.8 billion in fiscal year 2002. The number of individuals convicted for violating HHS program statutes and regulations--another key indicator of the OIG's performance--also increased. On the other hand, declines were noted in the number of settlements with providers who submitted false claims to the government and the OIG's education and outreach activities. GAO's survey results showed that employees' overall views of the organization, management, and their personal job satisfaction generally remained positive and relatively unchanged between 2002 and 2003. However, field office staff and those in lower level positions were considerably more positive in their views of the organization than their counterparts in headquarters and at the highest levels of management. Two units in particular--the OIG's Office of Counsel and the Office of Evaluation and Inspections--also had marked declines in morale. Both reported significantly lower levels of trust and confidence in the organization and less job satisfaction, compared to 1 year earlier. The Inspector General generally disagreed with some of our findings. In our response, we address why these findings raise concerns about the management of the OIG. We also provided our draft report to the Office of the HHS Secretary, but did not receive comments.
First authorized in 1971, the program currently known as HCOP was last reauthorized in 1998. The Secretary of Health and Human Services is authorized to make HCOP grants “for the purpose of assisting individuals from disadvantaged backgrounds . . . to undertake education to enter a health profession.” A wide range of entities are eligible to receive HCOP grants, including, for example, schools of medicine, dentistry, and pharmacy; schools with graduate programs in behavioral and mental health; programs to train physician assistants; and other public or private nonprofit health or educational entities. HCOP grant funds may be used for a variety of activities, such as recruiting individuals from disadvantaged backgrounds interested in health careers; facilitating their entry into health professions schools; providing counseling, mentoring, and other support activities designed to assist them to complete this education; providing information on financial aid; and providing experience in primary health care settings. The 1998 reauthorization of HCOP emphasized the importance of outreach activities by adding a funding preference for HCOP applications for projects that “involve a comprehensive approach by several public or private nonprofit health or educational entities to establish, enhance and expand educational programs that will result in the development of a competitive applicant pool of individuals from disadvantaged backgrounds who desire to pursue health professions careers.” Applications qualifying for this funding preference have an advantage because they must be considered for funding ahead of applications that do not. Projects supported by HCOP grants focus on individuals from disadvantaged backgrounds, and Congress has recognized that such individuals may be members of minority groups. 
The Public Health Service Act directs the Secretary of Health and Human Services to ensure, “to the extent practicable,” that “services and activities are adequately allocated among the various racial and ethnic populations who are from disadvantaged backgrounds.” Section 739 of the Public Health Service Act does not specify any particular populations or methods that HRSA must use to ensure this allocation, leaving these decisions to the agency’s discretion. According to HRSA officials, in the 1990s, the agency allocated additional points to the scores of applications from historically black colleges and universities, Hispanic-serving institutions, and tribal colleges and universities to improve their chances of receiving an HCOP grant. HRSA reported that for 1997 this practice resulted in its awarding eight more HCOP grants to historically black colleges and universities than it had awarded for the previous year. For fiscal years 2002 through 2005, HRSA followed a standard process to award HCOP grant funds, distributing the program’s available funds on a noncompetitive basis to continue funding existing grant projects, then awarding the remaining funds on a competitive basis. For competitive HCOP grants, HRSA published criteria and relied on the assessment of independent reviewers. Grants were awarded in accordance with the applications’ rank order as determined by the independent reviewers. The amount of HCOP funds HRSA distributed each year on a noncompetitive basis to continue funding existing grant projects determined the amount that remained available for competitive grants and, consequently, the number of competitive grants HRSA awarded. For fiscal years 2002 through 2005, the amounts HRSA made available for HCOP grants from its annual appropriations remained relatively stable, with an average of about $34 million a year over the 4 fiscal years.
Before making competitive awards, HRSA distributed funds each year on a noncompetitive basis to support existing HCOP grant projects in their second or subsequent years. These noncompetitive continuation awards were subject to HRSA officials’ approval after the agency reviewed each grantee’s annual progress report. Once the noncompetitive continuation awards were made, HRSA awarded the remaining HCOP funds on a competitive basis, including new grants to entities that did not have an HCOP grant for a particular project and competitive continuation grants to entities that applied for continued funding after the end of their authorized HCOP grant period. As shown in figure 1, the amounts distributed on a noncompetitive basis to continue funding existing grant projects varied, from a low of $18 million for fiscal year 2005 to a high of $30 million for fiscal year 2003, and the remaining funds awarded as competitive grants ranged from a low of $4 million for fiscal year 2003 to a high of $15 million for fiscal year 2005. For each of fiscal years 2002 through 2005, HRSA published a notification of upcoming grant opportunities, including those for HCOP grants. This notification provided an overview of the HCOP program, including the entities eligible to receive HCOP grants and a description of the funding preference for projects with a comprehensive approach. For detailed review criteria, the annual notification referred prospective HCOP applicants to the HCOP program guidance available on request or, for fiscal year 2005, through HRSA’s Web site. The review criteria HRSA published in its HCOP program guidance addressed different aspects of a successful HCOP project. Each criterion carried a specified number of potential points, for a maximum total score of 100. For some criteria, the point values differed according to whether the application was for a new grant or a competitive continuation grant. 
This difference reflected the fact that applications for competitive continuation grants were required to include a summary of the grantee’s management of its previous HCOP grant project and of progress toward meeting its objectives. For all applications for competitive grants—both new and competitive continuations—HRSA assigned the greatest number of potential points to the criterion that addressed plans to implement the HCOP activities authorized in the Public Health Service Act. Table 1 summarizes the criteria used by reviewers to assess HCOP applications for fiscal year 2005. The HCOP program guidance also included information on how to apply for, and receive, the funding preference for projects involving a comprehensive approach. To receive the funding preference, applicants were required to meet all four of the following statutory requirements:

- Demonstrate a commitment to a comprehensive approach through formal signed agreements that specify common objectives and establish partnerships with institutions of higher education, school districts, and other community-based entities.
- Enter into formal signed agreements reflecting the coordination of educational activities and support services and the consolidation of resources within a specific area.
- Design activities that establish a competitive health professions applicant pool of individuals from disadvantaged backgrounds by focusing on both academic and social preparation for health careers.
- Describe educational activities that focus on developing a culturally competent health care workforce to serve needy populations in the geographic area.

HRSA’s HCOP program guidance for fiscal years 2002 through 2005 specified that, to receive the funding preference, copies of formal agreements between applicants and community-based partners must be included with the application.
For fiscal years 2002 through 2005, HRSA’s standard process for awarding competitive HCOP grants relied on independent reviewers to assess applications against the agency’s published review criteria. HRSA officials generally limited their own review of applications for competitive HCOP grants to screening for applicant eligibility and compliance with technical requirements such as format and length. After determining which applications met basic eligibility requirements, HRSA officials forwarded all eligible HCOP applications to the agency’s Division of Independent Review to arrange for assessment and scoring. To assess HCOP applications, the division selected reviewers with health-related educational, counseling, academic, or project management experience who were not employed by HRSA and who were free from conflicts of interest, including employment or consulting arrangements with any entity that was applying for an HCOP grant for that fiscal year. The division sent each reviewer about eight applications to read in advance, then convened multiple panels in which reviewers met to discuss the merits of those applications. The reviewers were instructed to apply the published HCOP review criteria and reach consensus within each panel on their funding recommendations. The reviewers did not recommend for approval those applications they determined were not responsive to the review criteria. For each application recommended for approval, the reviewers assigned a score and determined whether the application qualified for the funding preference. The reviewers also had the opportunity to comment on applications’ proposed budgets and to recommend adjustments for reasonableness.
After the independent reviewers completed their assessments, HRSA officials used a statistical method to standardize the results from all HCOP review panels for a given year into a single ranked list, placing all applications receiving the funding preference first as a group, from highest to lowest score, followed by applications without the funding preference, from highest to lowest score. HRSA officials used this rank-order list as their basis for recommending which applications should receive grants for a given fiscal year and the amount of each award. The HRSA officials' recommendations were included in memorandums to the HRSA Administrator, who made the final award decisions for fiscal years 2002 through 2005. Figure 2 provides an overview of the process for awarding competitive HCOP grants. When awarding HCOP grants, HRSA had the discretion to consider additional factors, such as geographic diversity, targeted health professions, and the allocation of HCOP-funded services and activities among minority populations who are disadvantaged. According to a HRSA official responsible for administering the HCOP program, the agency could have used this discretion to depart from the rank-order list resulting from the independent review process but did not do so for fiscal years 2002 through 2005. This official said that 80 percent of HCOP program participants in fiscal year 2004 came from disadvantaged minority groups, regardless of the entity that received the HCOP grants, and that HRSA had concluded that no divergence from the rank-order list was required since the allocation of HCOP-funded activities among minority populations was consistent with the Public Health Service Act. For fiscal year 2004, however, HRSA reduced all competitive HCOP grant budgets by 10 percent—an action that enabled the agency to fund five additional grants, including three at historically black colleges and universities that would not otherwise have been funded.
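The rank-and-award procedure described above can be expressed as a short illustrative routine. This is a sketch of the ordering logic only: the field names, dollar figures, and the stopping rule at the first unfundable application are our assumptions, and HRSA's actual statistical standardization of panel scores is not reproduced here.

```python
# Illustrative sketch of HRSA's rank-order award procedure, as described
# in the text: preference applications first as a group, each group
# ordered by score from highest to lowest, with grants awarded down the
# list until available funds are exhausted. Field names are hypothetical.

def rank_applications(applications):
    """Order applications: funding-preference group first, then by
    standardized score, highest to lowest within each group."""
    return sorted(applications,
                  key=lambda a: (not a["preference"], -a["score"]))

def award_grants(applications, available_funds):
    """Walk the ranked list, funding each application in turn; stop at
    the first application the remaining funds cannot fully cover
    (an assumed reading of 'until the available funds were exhausted')."""
    awards = []
    remaining = available_funds
    for app in rank_applications(applications):
        if app["budget"] > remaining:
            break
        awards.append(app["name"])
        remaining -= app["budget"]
    return awards, remaining
```

Note how the sort key makes the funding preference dominate the score: a preference application scoring 80 outranks a non-preference application scoring 95, which matches the report's observation that all funded applications held the preference yet some preference applications scoring in the 80s went unfunded.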
For fiscal years 2002 through 2005, HRSA reviewed a total of 439 applications for competitive HCOP grants and awarded 99 HCOP grants. The number of competitive HCOP grants awarded depended on the availability of funds each year, and HRSA was unable to fund many high-scoring applications that received the funding preference. Over the 4 fiscal years, minority-serving institutions submitted 25 percent of the applications for competitive HCOP grants and received 30 percent of the awards. Both the number of applications and the number of competitive grants awarded varied from year to year (see table 2). Overall, for fiscal years 2002 through 2005, applications for new HCOP grants outnumbered applications for competitive continuations by nearly three to one, but applications for new grants received about the same number of awards as applications for competitive continuation grants. The number of competitive grants awarded in a given year depended more on the availability of funds for competitive HCOP grants than on the applications' scores. Each year, the score of the lowest-scoring application receiving a grant differed little from the score of the next application on the list, which did not receive a grant. While all applications that received grants for fiscal years 2002 through 2005 qualified for the funding preference for comprehensive projects, the preference did not guarantee that an application would be funded. In some years, applications that received the funding preference and scored in the 80s (out of 100 possible points) were not funded. As shown in figure 3, the majority of applications that were approved for funding by the independent reviewers received the funding preference, but not all were funded. For fiscal years 2002 through 2005, minority-serving institutions submitted a total of 25 percent of all applications for competitive HCOP grants and received about 30 percent of awards.
Although minority-serving institutions received awards in greater proportion than their representation among all applications for HCOP grants over the 4 fiscal years, the proportions varied from year to year. For fiscal years 2002, 2004, and 2005, minority-serving institutions were represented among grantees in the same, or in greater, proportion than they were among applications, submitting 25–28 percent of applications and receiving 25–35 percent of grants. Fiscal year 2003 stands out because of the smaller number of competitive grants awarded; that year, 10 competitive HCOP grants were awarded, 1 of which was awarded to a minority-serving institution (see table 3). The smaller number of competitive grants was mainly due to the relatively high number of noncompetitive continuation grants that received funding for that fiscal year. Among minority-serving institutions, historically black colleges and universities submitted the most applications and received the most awards, followed by Hispanic-serving institutions (see table 4). Some entities submitted more than one application over the 4 fiscal years of our review, and a given entity may have received more than one grant. For example, an entity may have applied for an HCOP grant for fiscal year 2002 and failed to receive a grant, then tried again in subsequent years. A new fiscal year 2002 grantee would have had to apply for a competitive continuation grant for fiscal year 2005 after the end of its 3-year project period. It is also possible for the same entity to have had more than one HCOP grant at the same time, provided that each grant had a distinct purpose and budget. In written comments on a draft of this report (see app. III), HRSA stated that the report met the goals of describing the award process and outlining the number and characteristics of HCOP applicants and grantees. 
HRSA suggested that, due to the small number of grantees, the summary of findings on our Highlights page present the numbers, rather than percentages, of minority institutions that were awarded grants for fiscal years 2002 through 2005. For the summary, we believe it is appropriate to use percentages to convey that applications from minority-serving institutions generally received grants in greater proportion than all applications. As noted in the draft report, the percentages we present are for the 4-year period of fiscal years 2002 through 2005. HRSA provided two other comments suggesting revisions to clarify our discussion, which we generally incorporated. In addition, HRSA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Administrator of HRSA and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (312) 220-7600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We determined whether Health Careers Opportunity Program (HCOP) applicants and grantees were minority-serving institutions by using statutory definitions, lists of institutions that fall under these statutory definitions, and data from the Department of Education. The term "minority-serving institution" refers to an accredited institution of higher education eligible for federal support under title III or title V of the Higher Education Act of 1965; this support is administered by the Department of Education.
These institutions include historically black colleges and universities, American Indian tribally controlled (or tribal) colleges and universities, Hispanic-serving institutions, Native Hawaiian–serving institutions, and Alaska Native–serving institutions. For our review, we defined historically black colleges and universities and tribal colleges and universities as institutions that met certain statutory definitions for institutions eligible to receive federal support under title III of the Higher Education Act of 1965. To identify an HCOP applicant or grantee as a historically black college or university, we compared a list of historically black colleges and universities published by the White House Initiative on Historically Black Colleges and Universities with the data we obtained from the Health Resources and Services Administration (HRSA) on HCOP grant applicants and recipients. To identify HCOP applicants and grantees that were designated as a tribal college or university, we compared a list published by the White House Initiative on Tribal Colleges and Universities with the data we obtained from HRSA on HCOP grant applicants and recipients. Hispanic-serving institutions, Native Hawaiian–serving institutions, and Alaska Native–serving institutions are eligible for federal funding under title III or title V of the Higher Education Act of 1965. Unlike historically black colleges and universities and tribal colleges and universities, however, eligibility of these institutions for funding is based on the percentage of enrolled minority students. As a result, the number of institutions that qualify as Hispanic-serving institutions, Native Hawaiian–serving institutions, and Alaska Native–serving institutions can vary from year to year.
For our review, we defined Hispanic-serving institutions as those that received grants through the Developing Hispanic-Serving Institutions Program under title V of the Higher Education Act of 1965 for fiscal years 2002 through 2005. That is, we determined an institution's status as a Hispanic-serving institution for a particular fiscal year on the basis of whether the institution had a title V grant that year. To identify HCOP applicants and grantees that were Hispanic-serving institutions at the time of our review, we obtained lists of title V grantees for the Developing Hispanic-Serving Institutions Program from the Department of Education's Web site for fiscal years 1999 through 2005. We cross-checked the title V grantee lists with the membership of the Hispanic Association of Colleges and Universities and with lists of schools with significant Hispanic enrollment from the Department of Education's Office of Civil Rights. We compared these lists with the data we obtained from HRSA on HCOP grant applicants and recipients. In addition, we counted all HCOP applicants and grantees located in Puerto Rico as Hispanic-serving institutions. Because not all institutions that could be eligible for grants under title V of the Higher Education Act of 1965 apply for or receive title V grants, our counts of Hispanic-serving institutions at a given time are likely to be conservative. Likewise, we defined Native Hawaiian–serving institutions and Alaska Native–serving institutions as those that were eligible to receive grants under title III of the Higher Education Act of 1965 and that received such grants for fiscal years 2002 through 2005. As noted above, the exact number of entities designated as minority-serving institutions may vary from year to year.
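The classification method described above amounts to list membership checks, with the Hispanic-serving designation varying by fiscal year. The sketch below illustrates that logic; the institution names and list contents are hypothetical stand-ins for the published lists (White House Initiatives, Department of Education title V grantee rosters), not real data.

```python
# Illustrative sketch of the minority-serving-institution classification
# described in the text. HBCU and TRIBAL stand in for the White House
# Initiative lists; TITLE_V_BY_YEAR stands in for the Department of
# Education's per-year title V grantee lists. All entries are hypothetical.

HBCU = {"Alpha University"}
TRIBAL = {"Beta College"}
TITLE_V_BY_YEAR = {2002: {"Gamma College"}, 2003: set()}

def classify(applicant, fiscal_year, located_in_puerto_rico=False):
    """Return the minority-serving designation for an applicant in a
    given fiscal year, or None if no designation applies."""
    if applicant in HBCU:
        return "historically black college or university"
    if applicant in TRIBAL:
        return "tribal college or university"
    # Hispanic-serving status depends on holding a title V grant in that
    # fiscal year; all applicants located in Puerto Rico are counted as
    # Hispanic-serving, per the methodology in the text.
    if located_in_puerto_rico or applicant in TITLE_V_BY_YEAR.get(fiscal_year, set()):
        return "Hispanic-serving institution"
    return None
```

Because the Hispanic-serving check keys on actual title V grant receipt rather than statutory eligibility, this approach undercounts Hispanic-serving institutions, as the report itself notes.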
While we were able to classify the HCOP applicants and grantees for fiscal years 2002 through 2005, table 5 summarizes the different minority-serving institution designations and provides approximate counts for fiscal year 2005, the most recent year for which total counts were available. Table 6 shows, by location, applications and awards for competitive HCOP grants for fiscal years 2002 through 2005. The numbers represent applications, rather than individual applicant entities. An entity may have applied for a competitive HCOP grant more than once, and a single entity may have had more than one HCOP grant for separate and distinct HCOP projects. The locations are those of the grant applicants, although partnerships may cross state lines and result in HCOP-funded activities and services in more than one state. In addition to the contact named above, Kim Yamane, Assistant Director; Matt Byer; Ellen W. Chu; Karlin Richardson; Suzanne Rubins; and Hemi Tewarson made key contributions to this report.

Health Professions Education Programs: Action Still Needed to Measure Impact. GAO-06-55. Washington, D.C.: February 28, 2006.

Low-Income and Minority-Serving Institutions: Department of Education Could Improve Its Monitoring and Assistance. GAO-04-961. Washington, D.C.: September 21, 2004.

Health Professions Education: Clarifying the Role of Title VII and VIII Programs Could Improve Accountability. GAO/T-HEHS-97-117. Washington, D.C.: April 25, 1997.

To support the education and training of health professionals, the Health Resources and Services Administration (HRSA), in the Department of Health and Human Services (HHS), administers health professions education programs authorized under title VII of the Public Health Service Act. One of these programs, the Health Careers Opportunity Program (HCOP), provides grants to health professions schools and other entities to help students from disadvantaged backgrounds prepare for health professions education and training.
Funding preference is given to grant applications that demonstrate a comprehensive approach involving other educational or health-related partners. Congressional committees have encouraged HRSA to give priority to applications from schools with a historic mission of educating minority students for health professions. In 2004, the appropriations conference committee asked GAO to review HRSA's process for awarding grants. This report addresses, for fiscal years 2002 through 2005, (1) HRSA's process for awarding HCOP grants and (2) the number and characteristics of HCOP applicants and grantees. GAO reviewed data from HRSA, interviewed HRSA officials, and reviewed relevant federal laws and agency documents from HHS and the Department of Education. HRSA followed a standard process to award HCOP grants, distributing funds on a noncompetitive basis to continue funding existing HCOP grants within their approved project periods, and then awarding the remaining funds on a competitive basis. For each of fiscal years 2002-05, HRSA competitively awarded between $4 million and $15 million from the approximately $34 million annually available for HCOP. To award competitive grants, HRSA used independent reviewers who assessed applications against published criteria, scored applications that met minimum criteria, and determined if they qualified for the funding preference. HRSA ranked the applications from highest to lowest score--putting those with the funding preference first--and awarded grants in decreasing rank order until the available funds were exhausted. Although HRSA had discretion to award grants out of rank order, the agency did not do so for fiscal years 2002-05. For fiscal years 2002-05, HRSA awarded a total of 99 competitive HCOP grants from 439 grant applications reviewed. 
Overall, minority-serving institutions submitted about 25 percent of the applications reviewed and received about 30 percent of the competitive grants; historically black colleges and universities were the most numerous grantees among minority-serving institutions, followed by Hispanic-serving institutions. HRSA commented that a draft of this report met the goals of describing the award process and outlining the number and characteristics of HCOP applicants and grantees.
The U.S. Army School of the Americas, located at Fort Benning, Georgia, is a military educational institution that has trained over 57,000 officers, cadets, noncommissioned officers (NCO), and civilians from Latin America and the United States over the past 50 years. According to the State Department, the training provided by the School is intended to be a long-term investment in a positive relationship with Latin America. Today’s School is derived from several predecessor institutions, beginning with a 1946 Army school established primarily to provide technical instruction to U.S. personnel, with limited training for Latin Americans. In 1987, under Public Law 100-180 (10 U.S.C. 4415), Congress formally authorized the Secretary of the Army to operate the School with the purpose of providing military education and training to military personnel of Central American, South American, and Caribbean countries. Appendix I provides a chronology of the School’s history. The School is funded from two sources: (1) the Army’s operations and maintenance account, which covers overhead costs such as civilians’ pay, guest instructor programs, supplies and equipment, certain travel expenses, and contracts, and (2) reimbursements from U.S. security assistance provided to Latin American countries, which cover costs associated with presenting the courses, including instructional supplies and materials; required travel for courses; and support for the School’s library and publications. In fiscal year 1995, the School received $2.6 million from the Army’s operations and maintenance account. In addition, the School’s courses generated $1.2 million from foreign militaries using U.S. security assistance grant funds. The School retains about 35 percent of this amount to defray its costs for course offerings. Fort Benning uses another 37 percent to defray costs associated with infrastructure maintenance, and the remainder is transferred to Department of the Army headquarters. 
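The fiscal year 1995 reimbursement split described above can be checked with a short worked example. The function and the rounding are ours for illustration; the report gives only approximate percentage shares.

```python
# Worked example of the reimbursement split described in the text:
# the School retains about 35 percent of course-generated funds, Fort
# Benning uses about 37 percent for infrastructure maintenance, and the
# remainder (about 28 percent) goes to Department of the Army
# headquarters. Applied here to the fiscal year 1995 figure of $1.2 million.

def split_reimbursements(total):
    school_share = round(total * 0.35)    # retained by the School
    benning_share = round(total * 0.37)   # Fort Benning infrastructure
    army_hq_share = total - school_share - benning_share  # remainder to Army HQ
    return school_share, benning_share, army_hq_share
```

For the $1.2 million in fiscal year 1995 reimbursements, this yields roughly $420,000 for the School, $444,000 for Fort Benning, and $336,000 transferred to Army headquarters.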
The last decade has seen remarkable change in Latin America as countries throughout the region have embraced political and economic freedom. Today, all Latin American nations, except Cuba, have democratically elected leaders, increasingly open economies, and increased political freedoms. It is within this changing political, military, and economic environment that the School of the Americas has been operating. The end of the Cold War and the spread of democratic government around the world have accelerated dramatic change in Latin America. Over the past 15 years, the region has seen a significant shift away from dictatorships and military regimes. Today, virtually all Latin American countries have representative governments, although the democratic institutions in many of these countries are in their embryonic stage. Reflecting the fragile nature of democracy in some countries, the 1991 Santiago Resolution of the Organization of American States called for the preservation and strengthening of democratic systems and was reinforced at the 1995 Defense Ministerial of the Americas in Williamsburg, Virginia. However, it remains unclear whether the democratic gains of the 1980s can be sustained. In some countries, civilian institutions are relatively weak and fragmented and are vulnerable to economic and social instability. Corruption within the governments, including military and law enforcement agencies, also threatens the continued stability of democratic governments. The move toward democratically elected governments has caused the role of the militaries in Latin America to undergo significant change. The militaries were frequently political, and largely autonomous, actors in regional affairs and often played a dominant role in their societies. In recent years, however, the militaries appear to have become less prone to political intervention. The concern exists, however, that this inclination is not permanent, and that democratization is not irreversible.
The recent coup attempt in Paraguay, while rebuffed, demonstrates the fragile nature of democracy in Latin America. Further, human rights violations continue to be a concern in the region. The 1995 State Department report on human rights states that even though progress has been made, widespread abuses of human rights continue in some Latin American countries. For example, although progress was made in negotiations between the Guatemalan government and guerrillas and human rights activists were elected to the country’s congress, serious human rights abuses continued to occur in Guatemala in 1995. In Mexico, serious problems also remain, such as extrajudicial killings by the police and illegal arrests. Colombia is another country in the region that continues to face major human rights problems associated with its military, including killings, torture, and disappearances. The State Department has expressed concerns about human rights violators’ impunity from prosecution. The State Department’s recent report on Colombia noted that the military has usually failed to prosecute human rights abuse cases involving military personnel. Several sources, including the Organization of American States, have expressed concern about Colombia’s human rights record. In response, during its 1996 session, the United Nations Commission on Human Rights, of which Colombia is a member, authorized the High Commissioner on Human Rights to establish an office in Colombia—an unusual step. The office is expected to monitor and assess the human rights situation in Colombia, including Colombia’s progress in correcting its human rights abuses; provide assistance to Colombia to correct those abuses; and report its findings at next year’s convention. Economically, the region is shifting from protectionist and statist economic models to free markets and export-oriented growth. 
Leaders throughout the region recognize the need to achieve macroeconomic stability, and many countries are enduring painful economic adjustments. In some cases, economic reforms have further exacerbated the concentration of income and wealth and thus widened the already large disparity between the rich and the poor. Although the region’s total gross domestic product increased between 1991 and 1993, an estimated 45 percent of the people are living in poverty. The end of the Cold War presented the United States with a new foreign policy opportunity in Latin America. The United States no longer needs to bolster the militaries to stop communism and has begun focusing more efforts on promoting economic and political freedom. At the December 1994 Summit of the Americas hosted by the United States, 34 democratically elected leaders from Latin America gathered to commit their governments to open new markets, create a free trade area throughout the hemisphere, strengthen the movement to democracy, and improve the quality of life for all people of the region. The United States is working through multilateral institutions to further the goals of the Summit of the Americas. In recent testimony, for example, the State Department described how the Inter-American Development Bank is working for sustainable development and promoting specific Summit mandates in the fields of health and education. Consistent with the changing political and economic environment, the United States is approaching security issues in the region in terms of mutual cooperation. Today, the U.S. policy reflects the retreat of the Communist threat and the political transformation in the Latin American region. It emphasizes support for democratically elected governments, defense cooperation, confidence-building measures, and the mitigation of transnational threats such as narcotrafficking and international terrorism. 
The United States considers educating and training foreign militaries and civilians a critical part of its national security strategy to pursue the specific goal of promoting democracy in the Latin American and Caribbean region. Senior Army officials told us that international military training programs expose students to U.S. military doctrine and practices and include instruction for foreign military members and civilians on developing defense resource management systems, regard for democratic values and civilian control of the military, respect for human rights and the rule of law, and counterdrug operations. In the U.S. Security Strategy for the Americas, DOD identifies the School of the Americas and two other military training institutions as regional assets through which the United States can engage its counterparts in the region. Although the School of the Americas is one option among the many Army schools and installations offering courses to foreign military students, it is the predominant training choice for Latin Americans. While the number of students at the School has decreased over the past few years because of reduced U.S. funding for international military training, School officials expect an increase this year due to increases in training funding for 1996. Students at the School come primarily from their countries' military or police forces, with a significant proportion from military or police academies. Although some countries have sent more students to the School than others, the predominant countries represented at the School typically reflect U.S. interests in the region at a particular time. Of the 5,895 foreign students that came to the United States to attend U.S. Army training courses in fiscal year 1995, 842 (14 percent) were from Spanish-speaking Latin American and Caribbean countries. Of the 842, 745 (88 percent) attended the School of the Americas.
The 97 Latin Americans that did not receive their Army training at the School attended courses at 24 other Army installations. Some of these students took courses not offered at the School, while others took similar courses but received their instruction in English. The 745 students who attended the School in 1995 represented a reduction in enrollment. Between 1984 and 1993, an average of 1,371 students attended the School each year, with attendance ranging from 996 in 1985 to 1,763 in 1992. According to School officials, a reduction in funding for international military training contributed to the decrease in the number of students. The International Military Education and Training (IMET) program funds allocated to the Latin American region were reduced from the 1993 level of $11.3 million to about $5.1 million in 1994 and about $4.8 million in 1995. This reduced allocation reflects the reduction in total IMET funding for those years—from $42.5 million in 1993 to about $22.3 million in 1994 and about $26.4 million in 1995. However, officials at the School project an increase in the number of students for 1996 since IMET funding for the Latin American region for 1996 was increased to $9.1 million. School officials said that the effect of the reduction in international training funds was further compounded by increases in the cost of the courses. Inflation particularly affected certain cost components, such as ammunition, flight support, course-related travel, and the publication of training materials. According to School officials, the cost of some courses has doubled over the past 5 or 6 years, in large measure because of increases in the cost of these components. As a result, foreign militaries could not afford to send higher numbers of students to the courses. According to School officials, because the curriculum is taught in Spanish, Latin American and Caribbean military forces can select students based on their military training needs without considering their English language skills.
This allows the countries to save funds that might have to be spent for preparatory English language courses. Candidates are identified by foreign military officials and approved by U.S. officials at the U.S. embassies in Latin America. Instructions issued by the Secretary of State in January 1994 require U.S. officials to review records of prospective students for all U.S. schools to identify any past actions or affiliations considered undesirable, such as criminal activity, human rights abuses, or corruption. According to School officials, all prospective foreign students are subject to the same screening and selection criteria and procedures, whether they will attend the School or other U.S. military training institutions. Virtually all of the students selected for the School of the Americas have been members of their countries' military or police forces, with civilians accounting for less than 1 percent of students. Officials at the School said that even though courses intended for civilian participation are offered, increasing civilian attendance is difficult for two reasons. First, government departments in many countries tend to be understaffed, and it is difficult for key civilian officials to leave their positions for several weeks to attend courses in the United States. Second, some foreign militaries and defense ministries prefer to spend available military training funds on members of the armed forces rather than civilians, despite encouragement from U.S. officials to select some civilians for relevant courses. Between 1990 and 1995, about 41 percent of the students were cadets from Latin American military or police academies. Cadet-level courses are not new; they have been offered at the School as far back as the 1950s. According to School officials, instructing cadets is consistent with the mission of the School, as these students represent the next generation of military officers.
Also, some countries have identified their military or police cadets as a top training priority. Since 1991, Chile has sent cadets to the School for an 8-day course specifically developed for them. According to School officials, Chile used a large proportion of its IMET funds for this one course in 1995. Students from 22 Latin American and Caribbean countries have attended courses at the School of the Americas since its inception. However, about half of those students have come from five countries—Colombia (17 percent), El Salvador (12 percent), Nicaragua (8 percent), Peru (7 percent), and Panama (6 percent). The countries that send more students to the School are generally the same countries receiving a higher level of U.S. military assistance, which can be used for training. For example, when the United States was providing large amounts of foreign assistance, including training, to El Salvador’s military to counter the insurgent threat in the 1980s, about one-third of the students at the School came from El Salvador. Between 1991 and 1995, most of the students at the School came from Colombia, Honduras, and Chile. The curriculum of the School has changed from its early days, when automotive and radio repair, artillery mechanics, and cooking were taught along with infantry, artillery, and military police courses. By the 1970s, the curriculum included courses on counterinsurgency operations to train Latin American armed forces in their efforts to confront insurgencies in the region. The current curriculum encompasses a variety of courses that enhance combat and combat support skills, encourage the development of appropriate civil-military relations, and strengthen defense resource management techniques. Since 1990, the School has added nine new courses that reflect current U.S. interests in the region. 
Two of the new courses—democratic sustainment and civil-military operations—along with the existing resource management and command and general staff officer courses, meet DOD's criteria for the Expanded IMET program. Other new courses were developed to meet unique or urgent needs in the region. For example, at the request of the Organization of American States, the School developed a countermine course to train students to recognize, detect, and neutralize minefields and to be able to train demining teams in their countries. Since 1993, 25 students from nine countries have taken the course, and DOD officials told us that this training is currently being used in demining operations in Central America. According to DOD, the new Peace Operations course was developed in response to the expanding presence of peacekeeping operations around the world and to present U.S. doctrine and policy for peacekeeping to the Latin American forces. In 1995, 21 students, including 5 civilians, from nine countries attended the course. Other new training includes the executive and field grade logistics, border observation, and computer literacy courses as well as cadet-level intelligence and counterdrug courses. In 1996, the School at Fort Benning is offering 32 courses, 23 of which are targeted toward noncommissioned and junior to mid-level officers. The remaining nine courses are targeted toward cadets—eight for military cadets and one for police cadets. While none of the courses are intended solely for civilians, 10 courses include civilians in the targeted audience. The Helicopter School Battalion at Fort Rucker, Alabama, is offering 20 courses in helicopter flight operations and maintenance. Table 1 provides a brief description of the courses offered in 1996 and the number of students that attended these courses in 1995. The School of the Americas' curriculum is based on U.S. military doctrine and practices and uses materials from courses presented to U.S. military personnel.
School officials told us that the curriculum is like that of other U.S. military institutions, except that it is presented in Spanish. For example, the military intelligence officer course at the School uses doctrine and materials developed by the U.S. Army Intelligence Center and School at Fort Huachuca, Arizona, and the executive logistics course uses material from the Defense Logistics Command and the U.S. Army Logistics Management College at Fort Lee, Virginia. Further, U.S. military students who attend the command and general staff officer course at the School receive the same professional military education credit as the U.S. military personnel who attend the course at Fort Leavenworth. Officials at the School pointed out that because all international training courses are based on U.S. doctrine, foreign students from other regions receive training in subjects similar to those taught to students at the School. For example, in 1995, the U.S. Army Ranger course was provided to 43 foreign students from 17 countries, which exposed those students to training and exercises similar to those of the 17 students who attended the School’s commando course. Similarly, the Army’s infantry officer basic course was taught to 44 students from 21 countries, and similar training was provided to 17 students from Latin America at the School. (See app. II.) Instructional staff at the School can customize segments of the courses to incorporate case studies and practical exercises relevant to Latin America. For example, officials at the School said that civic action exercises conducted in Central America by U.S. and Latin American armed forces are discussed in the civil affairs segment of the command and general staff officer course. The course also includes 24 hours of instruction on the historical perspective of the roles of the family, church, government, and military in Latin America—instruction not included in the U.S. course.
School officials emphasized that, reflecting the history of the region, the School provides instruction on human rights principles to all students. This human rights instruction is not presented at any other Army school. All of the School’s courses, except the computer literacy course, include a mandatory 4-hour block of instruction on human rights issues in military operations, including law of land warfare, military law and ethics, civilian control of the military, and democratization. This instruction is expanded in some courses. For example, the command and general staff officer course devotes 3 days of instruction to the subject and uses the My Lai massacre in Vietnam as a case study. School officials told us that they consider this case study an excellent illustration of issues related to professional military behavior, command and control, and changes in U.S. military attitudes and acceptance of the principles of human rights. They said that incidents in which Latin American militaries have been involved, such as the El Mozote massacre of hundreds of peasants in El Salvador in 1981, are also discussed. Courses at the School are taught by U.S. and Latin American military members as well as some civilian instructors. The School requires that instructors possess the appropriate skills and military background in such areas as logistics, infantry, engineering, or helicopter operations. All instructors must also pass a special human rights instructor program before teaching any course. Instructors from Latin America are involved with all of the courses in the curriculum and work with U.S. instructors to develop and prepare instructional materials and teach segments of the courses. The School identifies the requirements for each foreign instructor position, including rank; branch qualifications, such as combat arms or airborne; and other prerequisites, such as graduation from a command and general staff officer college.
The School sends these requirements to the U.S. embassies in Latin America to solicit nominations of foreign military members who meet the requirements. As with the process used to nominate students, the foreign militaries identify prospective instructors, who are subject to approval by U.S. officials at the embassies. Officials at the School said that the Latin American instructors have become increasingly important over the past several years. These instructors provide additional opportunities for the students and other instructors at the School to establish valuable military-to-military contacts. Salaries of the Latin American instructors are paid by their home country. While the School’s staff levels fluctuate throughout the year, as of October 1995, a total of 239 staff were assigned to the School at Fort Benning, including 50 U.S. instructors and 33 instructors from Latin America. In 1995, TRADOC contracted for a study to analyze and develop recommendations concerning the future need for the School of the Americas and what purposes the School should serve. The study examined the issue of whether providing Spanish-language instruction to Latin Americans is still a valid requirement of the School. In addition, the study examined the appropriateness of organizationally placing the School under TRADOC, given the School’s role as a foreign policy tool and its different focus compared with other TRADOC installations. The report, issued in October 1995, concluded that the School is strategically important to the United States and supports short- and long-term U.S. economic, political, and military interests in Latin America. The report acknowledged that Spanish-language instruction was an important factor allowing the School to contribute effectively to implementing U.S. foreign policy in Latin America and said that the Army should reaffirm Spanish as the language of instruction.
However, it noted that concerns about the continued need for the School in the post-Cold War period have surfaced, driven in part by adverse publicity over human rights violations associated with past students of the School. The study recommended that responsibility for the School be transferred from TRADOC to the U.S. Southern Command because the School’s role as a foreign policy tool makes it significantly different from other TRADOC installations. The study also acknowledged that negative publicity about the School would probably continue and that a new name for the School might be an appropriate way to break with the past. It suggested that the Department of the Army provide additional opportunities for lower- and mid-grade civil servants from Latin America and make this an important thrust of the School. It also suggested that the Departments of the Army and State study the desirability of establishing a Western Hemisphere Center for research, study, and instruction. This center would incorporate the School and other Spanish-language military training schools and would be affiliated with the Inter-American Defense College. DOD officials told us they agree in principle with many of the recommendations in the study and are considering how best to implement some of them. For example, TRADOC has acted on the recommendation to establish a board of visitors, which met for the first time in May 1996, and the Office of the Secretary of Defense is considering establishing a security studies center for the region. For some of the recommendations with which DOD has agreed in principle, (1) the conditions prompting the recommendation have changed, (2) DOD is not the cognizant authority for action, or (3) organizational or legal hurdles impede action. DOD officials have recognized that the dearth of civilian experts in military and security affairs is a serious barrier to further democratization of Latin American defense establishments.
In response, DOD is pursuing plans to open an Inter-American Center for Defense Studies in fiscal year 1998 to attract a new generation of civilians to careers in ministries of defense and foreign affairs as well as parliamentary committee staff. The Center intends to provide practical courses for promising civilians with university degrees, although military officers may attend. The curriculum would include courses on the development of threat assessments, strategic plans, budgets and acquisition plans, civil-military relations, and methods of legislative oversight. The Center would have features similar to the already established DOD centers for the study of regional security issues at the Marshall Center in Garmisch, Germany, and at the Asia-Pacific Center in Honolulu, Hawaii. The Center is not intended as a replacement or substitute for the School of the Americas. DOD officials contend that the School will continue to provide important training and links to Latin American militaries, which remain influential forces even as their roles in their societies evolve from dominance to integration. DOD concurred with our findings. Where appropriate, we have incorporated technical changes provided by DOD. DOD’s comments are presented in appendix III. We developed information on the political, military, and economic characteristics in Latin America by talking to Latin American experts from both inside and outside the federal government, reviewing literature on the region, and using findings from other GAO reports. 
We discussed issues related to the School and the political, social, and economic environment with representatives from the Organization of American States, the Washington Office on Latin America, Demilitarization for Democracy Project, Latin American Working Group, North-South Center (affiliated with the University of Miami), Latin American Program of the Woodrow Wilson Center, Institute for National Strategic Studies, the Latin American Center of Stanford University, and area experts at the University of California, Irvine; American University; and the University of Colorado. To obtain information on the operations of the School of the Americas, we met with officials at the School, including instructional and administrative staff. These officials provided us with documentation on the history and current operations of the School, including attendance, curriculum, and budget information. We performed a detailed review of course contents in order to understand instructional objectives. We also compared the School’s curriculum and attendance with attendance at Army security assistance-funded training by students from all other countries in the world. To develop the data on students attending the School, the courses they took, and the countries they came from, we relied on documentation provided by School officials. To develop similar data on the courses and students at other Army schools in fiscal year 1995, we relied on automated data prepared by TRADOC in Hampton, Virginia. We did not independently verify the accuracy of the data provided to us. We conducted our review from November 1995 to June 1996 in accordance with generally accepted government auditing standards. We plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies of this report to the Secretaries of Defense and State and appropriate congressional committees. We will also send copies to other interested parties upon request.
Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV.

The U.S. Army established the Latin America Center-Ground Division in the Panama Canal Zone to provide instruction to U.S. Army personnel in garrison technical skills such as food preparation, maintenance, and other support functions, with limited training for Latin Americans. The U.S. Army renamed the institution the U.S. Army Caribbean School-Spanish Instruction and identified a secondary mission of instructing Latin American military personnel. Increased Latin American interest in U.S. military training led to the elimination of English-language instruction to focus on instructing Latin American personnel. The institution became the U.S. Army School of the Americas, with Spanish declared the official language of the School. The School relocated to its current location at Fort Benning, Georgia, due to a conflict between U.S. and Panamanian officials regarding the operation and command of the School. The Army reassigned operational control of the School from the U.S. Southern Command to the U.S. Army Training and Doctrine Command. Under Public Law 100-180, Congress formally authorized the Secretary of the Army to operate the School with the purpose of providing military education and training to military personnel of Central American, South American, and Caribbean countries. A Helicopter School Battalion at the U.S. Army Aviation Center, Fort Rucker, Alabama, was activated as part of the School to provide Spanish-language instruction for helicopter pilots and technicians.

Nancy T. Toolan
Muriel J. Forster
Kevin C. Handley
F. James Shafer
Nancy Ragsdale
Pursuant to a congressional request, GAO provided information on the U.S. Army School of the Americas and a Department of Defense (DOD) initiative to strengthen civilian institutions involved in defense and security activities in Latin American countries. GAO found that: (1) the Latin American environment in which the School operates is undergoing radical political and economic change; (2) in addition, the role of the military in many of these societies is beginning to evolve from one of political dominance to a more professional model subordinate to the civilian authority; (3) although the School trains the majority of Latin American students that come to the United States for Army training, primarily because the curriculum is taught in Spanish, it provides a small percent of the training that the Army provides to foreign students from around the world; (4) virtually all of the 745 students attending the School in 1995 represented their countries’ military or police forces, with few civilians attending the School; (5) many of the courses at the School provide instruction in military and combat skills; however, since 1990, the curriculum has been broadened to include courses addressing post-Cold War needs of the region; (6) the courses offered at the School are based on
U.S. military doctrine, and foreign students from other regions receive basically the same courses at other Army training locations, with the exception of the School’s emphasis on human rights; (7) courses are taught by U.S. and Latin American military personnel and some civilian instructors; (8) a recent study contracted for by the Army to determine whether the School should be retained and why concluded that the School should continue but recommended a number of changes; and (9) in response to the emerging post-Cold War need to strengthen civilian institutions in Latin America, DOD is considering establishing a separate institution to focus on civilian-military relations and the development of greater civilian expertise in the region’s defense establishments.
Offshoring generally refers to a company’s purchases from abroad (imports) of goods or services that were previously produced domestically. A company may offshore services either by purchasing services from another company based overseas or by obtaining services in-house through an affiliate located overseas. For example, a U.S.-based company might stop producing parts of its accounting and payroll services in-house and instead outsource them to a foreign-based company. A U.S.-based multinational company might also offshore by moving parts of its accounting and payroll services from its domestic operations to its foreign affiliate, thus keeping the services in-house. Importing services that had previously been acquired domestically or relocating services to foreign affiliates both can result in the displacement of U.S. service production and employment, though, as we discuss later, both will likely have other economic effects, such as on consumer prices and productivity. However, other business activities that do not directly result in the displacement of U.S. workers are sometimes included in broader definitions of offshoring. Such activities may result in foregone job creation domestically but would not result in job losses. For example, a U.S.-based company might expand its accounting and payroll services through a foreign company or affiliate, but do so without affecting its U.S. workforce. Broader definitions of offshoring also sometimes include the movement of production offshore through U.S. companies’ investment in overseas affiliates. Offshoring defined in this way could but would not necessarily result in the displacement of U.S. service production or employment. For example, a U.S.-based company investing in its overseas affiliate to produce accounting and payroll services to sell to other companies abroad might do so without affecting its production and employment levels in the U.S.
Types of services associated with offshoring tend to be those that are capable of being performed at a distance and whose product can be delivered through relatively new forms of advanced telecommunications. Examples of these business functions include software programming and design, call center operations, accounting and payroll operations, medical records transcription, paralegal services, and software research and testing. More than three-quarters of U.S. private-sector employees are in service-providing industries; however, not all services jobs are likely to be at risk from offshoring. Many services jobs, such as child care providers and hairdressers, require face-to-face contact with customers. Other jobs, such as transportation workers, construction workers, and auto mechanics, require hands-on contact with physical equipment. In addition, some work, such as marketing and creative design, may be done more efficiently and productively in close proximity to customers and other workers. While government data on trade and foreign direct investment offer limited insight into the extent of offshoring, the data provide some evidence that services imports are growing. Trade data from the Department of Commerce’s Bureau of Economic Analysis (BEA) show that imports of services associated with offshoring are growing. For example, U.S. imports of business, professional, and technical services grew from $20.8 billion in 1997 to $40.7 billion in 2004—an increase of about 10 percent per year. It is important to note that these import data show that U.S. entities have been purchasing these services offshore, but the data do not indicate whether these entities had previously been purchasing these services from domestic U.S. sources. The U.S., Canada, and the United Kingdom are among the world’s leading exporters of services. According to World Trade Organization data, the U.S. was the world’s largest exporter of commercial services in 2004.
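The roughly 10 percent annual growth figure cited above can be reproduced as a compound annual growth rate from the two BEA dollar figures in the text. The short sketch below is illustrative only and is not part of BEA's methodology:

```python
# Re-derive the annual growth rate implied by the BEA figures cited above:
# $20.8 billion in 1997 growing to $40.7 billion in 2004.

def compound_annual_growth_rate(start_value: float, end_value: float, years: int) -> float:
    """Average annual rate that grows start_value into end_value over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

rate = compound_annual_growth_rate(20.8, 40.7, 2004 - 1997)
print(f"{rate:.1%}")  # about 10 percent per year
```

The function name is our own; the calculation is the standard compound-growth formula applied to the published totals.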
BEA data show that in 2004 Canada and the United Kingdom accounted for 42 percent of U.S. imports of unaffiliated business, professional, and technical (BPT) services, that is, BPT services traded between firms that are separate entities from each other. The U.S. currently exports more services than it imports and therefore maintains a trade surplus in services overall. In 2004, this surplus was nearly $48 billion, according to BEA data. However, since 1997, the trade surplus in services has generally been shrinking. At the same time, the overall trade deficit has generally been expanding (see fig. 1), though imported services comprise a small share (about 17 percent) of total U.S. imports of goods and services. BEA data on direct investment abroad capture U.S. multinational companies’ establishment of affiliates abroad, including establishment of affiliates to produce services. The data suggest that most services produced abroad by U.S. majority-owned foreign affiliates are sold to foreign markets rather than to the U.S. In addition, the data show that U.S. direct investment abroad tends to be concentrated in other developed countries, rather than in developing countries frequently associated with services offshoring. For example, according to BEA data, 61 percent of U.S. direct investment abroad in 2004 took place in the European Union, Canada, and Japan. In the same year, U.S. direct investment in developing countries that are frequently cited as suppliers of offshore services (e.g., India, the Philippines, Malaysia, and China) was relatively small—about 1 percent or less of total U.S. direct investments in each case. In addition, BEA data from 2003 show that over nine-tenths of services sold by U.S. majority-owned nonbank foreign affiliates are sold to foreign markets rather than to the U.S. BEA data also show that the U.S. receives large amounts of direct investment by other countries. In 2004, the U.S.
received nearly $96 billion in foreign direct investment. The countries that are the largest recipients of U.S. foreign direct investment abroad are also the largest foreign direct investors in the U.S., with the European Union, Japan, and Canada accounting for 82 percent of foreign direct investment in the U.S. Foreign firms investing in the U.S. employ U.S. workers. U.S. affiliates of foreign multinational corporations employed 5.3 million U.S. workers in 2003, accounting for 5 percent of total U.S. employment in private industries. Firms were offshoring long before the recent trend in services offshoring. In previous decades, U.S. manufacturing companies were motivated to offshore because of the low costs and availability of skilled labor, production and supply networks in some developing countries, and reductions in the cost of transporting goods. At the same time, U.S. companies divided their production processes into discrete pieces, which allowed them to offshore some of the components. As a result, some businesses offshored total production, and others offshored parts of the production process. Firms generally retained higher-end, higher-skilled services functions in the U.S., such as management, finance, marketing, and research and development. Offshoring has recently expanded into services due to three key factors. First, technological advances, such as advances in telecommunications and the emergence of the Internet, have enabled workers in different locations in the world to communicate and be connected electronically and have also facilitated the digitization and standardization of activities needed to complete business processes. These changes in turn have allowed business processes to be divided into smaller components, some of which could be done in different locations.
For example, standardized software has made it possible for firms to outsource financial or human resources activities to a separate overseas company that performs them for many clients, rather than handling the functions internally. Thus, in many cases, the offshoring of services constitutes an outgrowth of outsourcing business functions. Second, countries such as India, China, Russia, and much of Eastern Europe have increasingly opened their borders to the global economy. Third, other countries have highly educated populations with the technical skills for performing services and technology-related work. According to several business studies, a primary reason that organizations engage in offshoring is to reduce costs. The cost savings from offshoring are primarily the result of differences between the U.S. and developing countries in the unit cost of labor, the worker compensation (wages and benefits) that must be paid to produce one unit of goods or services. Unit labor costs are lower for certain services in developing countries primarily because workers’ wages in those countries are lower than in the U.S. However, unit labor costs also depend upon the productivity levels of workers. Although labor costs in a developing country may be lower than in the U.S., it may still be possible for the unit cost of labor to be lower in the U.S. than in the other country if U.S. workers’ productivity is much higher, meaning that a U.S. worker can produce many more or higher quality products within a certain time frame than a worker in the other country. Differences in unit labor costs can also result from differences in costs of employee benefits, such as health care and pension benefits. In addition, cost savings can be affected by currency exchange rates, countries’ tax policies, and government-provided incentives such as tax rebates. Aside from cost savings, firms may have other incentives to offshore.
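The unit-labor-cost comparison described above can be made concrete with a small numeric sketch. The wage and productivity figures below are hypothetical, chosen only to illustrate that a lower wage does not by itself imply a lower unit labor cost when productivity differs:

```python
# Hypothetical illustration: unit labor cost = hourly compensation / units produced per hour.
# A lower wage abroad does not guarantee a lower unit labor cost if productivity is lower too.

def unit_labor_cost(hourly_compensation: float, units_per_hour: float) -> float:
    """Labor cost to produce one unit of output."""
    return hourly_compensation / units_per_hour

# Illustrative (made-up) figures:
us_ulc = unit_labor_cost(hourly_compensation=30.0, units_per_hour=20)    # $1.50 per unit
abroad_ulc = unit_labor_cost(hourly_compensation=6.0, units_per_hour=3)  # $2.00 per unit

# Despite a wage one-fifth as high, the offshore unit labor cost in this sketch
# is higher, because output per hour is much lower.
print(us_ulc, abroad_ulc)
```

With different hypothetical productivity numbers the comparison can of course flip, which is the text's point: the relevant quantity is compensation per unit of output, not the wage alone.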
Access to a workforce in different time zones across the globe may enable companies to conduct work around the clock and consequently meet worldwide customer needs. Establishing a presence in foreign countries can provide companies access to overseas markets. In addition, offshoring non-core services can enable companies to focus their resources on their core functions. By outsourcing non-core functions to overseas firms that specialize in them, businesses may also experience improvements in the quality of these functions. Although firms may have many incentives to offshore, they may also face disincentives to offshore. Offshoring has several costs associated with it, including costs to start up an offshore operation and to manage and train an offshore workforce. In addition, some experts have noted that wages of workers in developing countries are rising more rapidly than U.S. wages, therefore shrinking the cost savings of offshoring over time. Furthermore, offshoring carries potential risks, such as possible political instability in overseas locations, less reliable civil infrastructure, exchange rate volatility, less developed legal and regulatory systems, and risks to intellectual property. In the last few decades, the Congress has enacted various pieces of legislation related to trade and increasing global interdependence, primarily due to concerns about their effects on the manufacturing sector. (See fig. 2.) This legislation sought to expand U.S. exports; establish fair trading practices; assist workers, firms, and communities adversely affected by trade; and improve U.S. competitiveness through support for education and research and development. For example, trade acts of 1962, 1974, and 1979 sought to expand U.S. exports by establishing mechanisms for negotiating and entering into trade agreements. The trade acts also established remedies for industries hurt by import competition through unfair trade practices. 
The Trade Act of 1974, as amended, established a trade adjustment assistance program to provide financial assistance and retraining to workers involved in the manufacturing of articles who lost their jobs due to foreign competition. In addition, the act established a program that enabled manufacturing firms and communities hurt by trade to receive technical assistance and financial support to develop new strategies to improve their competitiveness. Congress enacted other legislation to enhance the competitiveness of the U.S. economy by improving education and supporting research and development. This legislation included, among others, the Stevenson-Wydler Technology Innovation Act of 1980, which authorized the creation of various technology centers. With regard to services specifically, the Trade and Tariff Act of 1984 required the Commerce Department to establish a program on international trade in services and to issue a report every 2 years. In addition, the Omnibus Trade and Competitiveness Act of 1988 directs the Secretary of Commerce to conduct a benchmark survey of services transactions. Aside from these laws, other legislation enacted by the Congress may address some concerns raised by trade and globalization. For example, under the Workforce Investment Act of 1998 (WIA), the Department of Labor oversees an employment and training system operated by states and localities to assist displaced workers in obtaining new jobs, which could include workers who become displaced due to trade-related reasons. WIA funds may also be used to provide training for employed workers to upgrade their skills. Traditional economic theory predicts that expansion of international trade, including offshoring, will have beneficial effects on the U.S. economy, but a number of concerns have also been raised about the potential economic and social impacts of offshoring.
We have identified four areas of concern about the potential impacts of offshoring: the average U.S. standard of living, including average wages; employment and job displacement among American workers; the distribution of income; and national security and consumer privacy. Economists and other policy analysts, both in the literature and in interviews with us, have expressed a range of views about the likely impacts of offshoring on each of these areas. This diversity of views reflects several factors: the fact that services offshoring is a relatively recent development in international trade whose impact is not yet fully known; the limitations of currently available data about the extent of offshoring and its impacts; and different theoretical expectations about the likely impact of expanded trade in services on the U.S. economy. The issues identified in this section may not be exhaustive; others may raise concerns about offshoring that are not discussed in this report. Figure 3 summarizes experts’ different views about the four areas of potential impact for the U.S. that we identify. Traditional economic theory on international trade predicts that offshoring is likely to be beneficial for the average U.S. standard of living in the long run; however, some economists have argued that offshoring could harm U.S. living standards. Economists who contend that offshoring will increase average U.S. living standards expect that it will do so through raising productivity (and thereby increasing national income), increasing average wages for American workers, and providing consumers with lower prices and access to a broader range of goods and services. In addition, they expect that U.S. companies will respond to the challenges of international competition by developing new areas of specialization in the global economy. Economists who argue that offshoring may lower U.S.
average living standards focus on the possibility that offshoring may contribute to a decline in the strength of some U.S. industries and may threaten U.S. leadership in innovation and technological development. Some economists also focus on the possibility that offshoring may lead to downward pressure on U.S. wages even if it has positive effects on the U.S. economy overall. Underlying these disagreements are different predictions about what areas will emerge as new sources of comparative advantage in the global economy, as well as different assessments about whether offshoring is contributing to downward pressure on U.S. wages. Effects on Productivity: Offshoring of services represents an expansion of trade into sectors of the economy that in the past were relatively untraded; as such, many of the economists we interviewed, as well as those who have published on offshoring, expect offshoring to increase productivity in these sectors. Offshoring is expected to lead to productivity increases through several mechanisms. First, increased competition could lead to pressures for greater efficiency, causing the least productive firms to exit the market so that firms that remain in the market are increasingly focused on managing for greatest productivity. Second, offshoring—like domestic outsourcing—could enable U.S. firms to specialize in the core functions in which they add the greatest value, while moving lower-value job functions out of the country. As U.S. firms reallocate resources toward higher-value activities, moving lower-value activities overseas, the U.S. economy overall could see productivity gains. Third, offshoring could enhance productivity by promoting reductions in the costs of technology and other inputs that improve the efficiency of business processes.
For example, some economists have argued that offshoring of IT services will reduce the cost of these services, making IT-enabled products and services more affordable and leading to increased diffusion of productivity-enhancing technology throughout many industries. For instance, the lower cost of offshored health-record transcription services might encourage more health care providers to keep digitized medical records, improving the efficiency and productivity of the health care industry. Because the acceleration in services offshoring is a relatively recent phenomenon, empirical evidence about its effects on the productivity of the U.S. economy remains preliminary. However, the effects of offshoring in manufacturing have been observed over many years and can shed some light on the potential impact of services offshoring on U.S. productivity. A number of research studies suggest that offshore outsourcing contributed to productivity improvements in U.S. manufacturing. Catherine Mann, among others, has argued that offshoring in the production of computer hardware—along with domestic innovation—kept prices of new hardware low and thereby played a role in the deepening of IT investment throughout the U.S. during the 1980s and 1990s. Since the mid-1990s, the U.S. has experienced a period of unusually rapid productivity growth, which many attribute to accelerating investment in IT and the rapid diffusion of new applications and uses that occurred in the 1980s and 1990s. New Areas of Comparative Advantage: Traditional economic theory also predicts that increased trade—including offshoring—will increase economic growth, and therefore average living standards in the long run, by driving the economy to develop new innovative and high-value areas of comparative advantage—that is, to specialize in the creation of high-value goods and services that are produced most efficiently in the U.S. 
Although increased competition due to offshoring and other trade may lead to contraction of production and employment within some U.S. industries, trade is also expected to reallocate the resources of the U.S. economy to sectors that are comparatively more efficient, such that U.S. companies are expected to eventually develop new areas of comparative advantage in the global economy that will lead to continued economic growth. Some economists contend that advantages that the U.S. has over less developed countries, such as a relatively high-skilled workforce, abundance of capital, and well-developed financial markets and investment opportunities, will enable the U.S. economy to specialize in higher-value work. In particular, they expect that offshoring will contribute to the reduction or elimination of certain lower-skilled occupations in the U.S., but lead to the creation of new jobs in occupations that require higher levels of skill, shifting U.S. production and the distribution of employment to fields with higher returns. Some empirical studies suggest that the U.S. economy has historically developed new high-value areas of comparative advantage as trade has increased. The process of the U.S. developing higher-value areas of comparative advantage as lower-value work is moved offshore has been observed over many years in some manufacturing industries. For example, in the semiconductor industry, assembly work that was originally conducted in the U.S. began to be moved offshore in the 1960s. Although this offshoring did lead to job losses in the U.S., economists Clair Brown and Greg Linden assert in their research that this movement also kept the U.S. semiconductor industry competitive and permitted the U.S. industry to specialize in higher-value work within the industry. According to Brown and Linden, as chip assembly moved offshore, U.S. firms specialized in higher-value fabrication work, and when fabrication work began to move offshore, U.S.
firms specialized in design. Offshoring of services has not been occurring long enough to observe the relationship between offshoring and the emergence of new areas of specialization; however, economists J. Bradford Jensen and Lori Kletzer have argued that recent data demonstrate that workers in industries and occupations that are more likely to be affected by international trade tend to have higher wages and higher skills than workers in “non-tradable” service sector jobs, which is consistent with the hypothesis that offshoring and globalization are leading the U.S. economy to specialize in higher-value work. Historical trends also suggest that openness to trade has increased the economy’s aggregate output in the past. The U.S. economy has grown as trade has expanded, and internationally, there is some evidence that countries that are more open to trade typically experience faster growth than those that are more closed. Effects on Wages: Some economists also argue that offshoring could increase average living standards by contributing to growth in average real wages for U.S. workers, corresponding to offshoring’s effects on productivity. Economic theory predicts that average real wages should typically rise with average productivity rates, as workers are compensated for producing more per hour of work. Wages are expected to move with productivity growth in the long run if the share of national income that accrues to workers versus the share that accrues to firms’ profits and other income remains fairly constant. Historically, wage growth in the U.S. has broadly tracked productivity growth, although changes in wages and productivity may have diverged for periods of time (see fig. 4).
During the post-World War II period, the share of national income accounted for by total compensation—wages and benefits—rose throughout the 1950s, 1960s, and 1970s, and has been fairly constant since 1980, averaging about 66 percent of national income, with the remainder accruing to corporate profits, proprietors’ income, rental income, and net interest. Since 1970, an increasing share of total labor compensation has gone to benefits rather than wages and salaries. In recent years—since the end of the 2001 recession—wages have not moved up with productivity growth, and total labor compensation as a share of national income has declined somewhat, from 66 percent in 2001 to 64 percent in 2004. During this time, wages and salaries as a share of national income declined from 55 percent in 2001 to 52 percent in 2004. Some economists have argued that this divergence of compensation growth from productivity growth is problematic and runs counter to assertions that increased productivity gains from offshoring will necessarily raise average living standards; however, other economists consider this fluctuation to fall within recent norms. Effects on Prices and Availability of Consumer Goods and Services: Traditional economic theory also predicts that offshoring will improve average U.S. living standards by lowering consumer prices and providing consumers access to a wider range of goods and services than would otherwise be available. Many economists expect that competition will lead companies to pass the cost savings from offshoring on to consumers in the form of lower prices. However, economic theory also predicts that the extent to which cost savings are passed on to consumers depends on how competitive the market is for particular goods and services.
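The income shares cited above imply a shift within compensation toward benefits, which can be made explicit with a minimal arithmetic sketch (shares are percent of national income, taken from the figures in this section; the split into benefits is derived, not reported directly):

```python
# Shares of national income, in percent, as cited in this section.
total_compensation = {2001: 66, 2004: 64}  # wages and salaries plus benefits
wages_and_salaries = {2001: 55, 2004: 52}

for year in (2001, 2004):
    benefits = total_compensation[year] - wages_and_salaries[year]
    print(year, benefits)
# The implied benefits share rises from 11 to 12 percent even as the
# compensation and wage shares both fall, consistent with the long-run
# shift of compensation toward benefits noted above.
```

The derived benefits share is simply the gap between the two reported series; it illustrates why a declining wage share need not mean a proportionally declining compensation share.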
While firms in highly competitive markets are likely to pass most of the cost savings from offshoring through to the purchasers of the service, in less competitive markets, economic theory predicts that firms may retain some or all of the cost savings. Although the most commonly cited economic trade theories predict that offshoring will likely have positive effects on the average U.S. living standard, some trade models generate scenarios under which the U.S. could lose either its absolute or relative position in the global economy, and some economists have argued that services offshoring is better described by these latter types of economic models. Models in which the U.S. could face losses from increased trade such as offshoring reflect the possibility that, as trading partners become more productive in creating goods and services in which the U.S. specializes, the economic position of the U.S. could be undermined. For example, Ralph Gomory and William Baumol have described scenarios in which a trading partner experiences productivity improvements in an important U.S. export industry, resulting in declines in U.S. national income because U.S. firms lose their position as the most competitive producers in the industry. The impact on the U.S. workforce, in this model, is particularly detrimental if the industry in which the U.S. is challenged is highly profitable and pays high wages, such as an industry in which the U.S. has long held technological superiority or an industry that is difficult to enter. Other economists have developed different models in which productivity changes abroad lead to losses in the absolute or relative position of the U.S. in the global economy.
The negative results of increased trade in these models are not specific to offshoring—they could result from other forms of trade too, but they are sometimes cited when concerns about offshoring are raised because services offshoring raises the specter of the movement of high-value work from the U.S. to foreign trading partners. Some have raised concerns that offshoring poses risks to U.S. leadership in innovation, particularly in high-value areas such as technology fields and research and development, raising the possibility that the global economic position of the U.S. could be eroded over time. Economists and other offshoring observers have suggested a range of mechanisms through which offshoring could have a negative impact on U.S. innovation. Some argue that innovation results from solving technical problems during manufacturing, design, and research and development. To the extent that this work is conducted overseas, offshoring could promote faster technological diffusion to foreign firms, which may over time lead to foreign competitors coming to dominate an industry in which the U.S. was once the technological leader. Some contend that offshoring portions of the research and development infrastructure could threaten U.S. technological leadership by disrupting important innovation networks in the U.S., such as the IT cluster in Silicon Valley in California, or the biotechnology cluster in Cambridge, Massachusetts, and promoting the emergence of such networks abroad. In addition, some express concern that offshoring routine or entry-level work in some technical industries could hurt the U.S.’s ability to maintain an innovative workforce by closing off career prospects for some U.S. workers and discouraging U.S. students from entering those fields. Another concern raised by some economists is that offshoring could reduce average living standards for American workers by slowing the growth of average wages. 
These economists raise the concern that even if offshoring promotes economic growth and productivity, it could decrease labor’s share of national income by subjecting American workers to direct competition with foreign workers, leading to slower growth or even a decline in average wages. As we previously noted, recent statistics show a dip in the share of national income accruing to total worker compensation in recent years, and some economists believe that offshoring may be contributing to this trend. Finally, some question whether firms will use the cost savings from offshoring in ways that lead immediately to the productivity improvements and consumer price reductions predicted by trade theory. Under certain market conditions, an individual firm could retain supernormal profits (profits above the usual level for its particular industry and product) for a period of time, distributing these gains to shareholders or its remaining employees, rather than passing on cost savings to consumers in the form of price reductions or investing the cost savings in productivity-enhancing reorganization or new technology. Although economic theory predicts that under many market conditions competitive forces will constrain the ability of firms to earn supernormal profits on an ongoing basis, the assumption that individual firms face perfectly competitive market conditions may not necessarily be accurate. Thus, some offshoring experts stress the importance of examining firm-level decisions to determine whether, how, and how quickly offshoring leads to price reductions and the reorganization of firms and industries toward specialization in higher-productivity activity. Underlying the debate about the effects of offshoring on the average U.S. standard of living are different perspectives on the following questions: What new areas of comparative advantage will the U.S. economy develop to compensate for declines, if any, in areas threatened by offshoring?
How will offshoring affect average U.S. wages? Will the possible benefits of productivity gains offset the possible downward pressure exerted by increased exposure of U.S. workers to global competition? Many economists agree that offshoring is not likely to affect aggregate U.S. employment in the long run, but acknowledge that in the short run, workers will lose their jobs when employers relocate production abroad. At the same time, some economists have commented that offshoring may cause structural changes in the labor market because increased trade alters the mix of goods and services produced in the U.S. These structural changes could generate permanent changes in the types of work conducted by the U.S. labor force and could also possibly have longer-term effects on the U.S. unemployment rate. There is disagreement about the expected direction of any structural changes in the labor market due to offshoring, the expected magnitude of job displacement due to offshoring, and the implications of this displacement for those workers who are directly affected by it. Underlying these disagreements are different estimates about the projected extent of job losses due to offshoring, which types of jobs will be offshored, which areas of the economy will generate growth in job opportunities, and the re-employment experiences of workers whose jobs are offshored. Economic theory predicts that expansions in trade, including offshoring, typically should not affect the overall employment level (net employment) in the U.S. in the long run. Some economists argue that the U.S. labor market is generally expected to adjust quickly to changes in economic conditions because new jobs will be created as jobs are lost, and as a result, those who lose their jobs due to economic changes such as offshoring are expected to readily find new work.
Given a flexible labor market, these economists theorize that the primary determinant of fluctuations in the employment rate is aggregate demand in the overall economy, observed in the business cycle. Historically, the U.S. economy has rarely experienced unemployment rates higher than 10 percent of the labor force, with the exception of unique periods such as the Great Depression. According to Bureau of Labor Statistics (BLS) data, since 1947, the civilian employment rate has increased gradually from around 59 percent in the 1940s and 1950s, to an average of 66 percent over the past 20 years. During this period, the unemployment rate has generally fluctuated between about 4 percent and 8 percent, averaging 5.6 percent per year, even though the U.S. labor force has grown by, on average, 1.4 million people per year. Furthermore, the U.S. employment rate has not been correlated with trade or imports. While traded goods and services have increased from about 4 percent of the gross domestic product (GDP) to about 14 percent of GDP over the past 60 years, employment rates have steadily increased. Even shocks to the percentage of the economy that is open to trade, such as the passage of major trade agreements, have not been correlated with significant changes in employment rates. Some have argued that while balanced trade may not affect employment levels, large and continued trade deficits put American jobs at risk. Historically, however, although employment in certain sectors of the economy is sensitive to trade balances, there has been no evidence of a correlation between trade deficits and overall employment. Although there is dispute over the number of jobs likely to be lost due to offshoring in years to come, even the larger estimates generally represent a small enough fraction of the total number of jobs destroyed and created in the U.S. that many believe the U.S. labor market is likely to be able to absorb the change.
For example, some private sector studies estimate that between 100,000 and 500,000 information technology jobs will be displaced over the next few years, and potentially several million jobs across all occupations could shift outside the U.S. over the next decade. Several economists have pointed out that even the larger job loss estimates represent a relatively small percentage of the total number of jobs destroyed and created annually in the U.S. According to BLS statistics, since the end of the last recession in the fourth quarter of 2001, the U.S. has shed an average of 7.64 million jobs per quarter, while creating an average of 7.77 million jobs per quarter. Viewed in this context, some note that estimates of the number of jobs that could be lost due to offshoring do not appear to be as large a shock to the economy. Moreover, some maintain that job losses due to offshoring should also be viewed in the context of the two-way flow of trade. Jobs are created as a result of U.S. firms exporting goods and services to other countries and foreign firms locating their production in the U.S. through direct foreign investment. Although some economists argue that trade, including offshoring, is unlikely to affect long-term employment rates, others have noted that increases in offshoring and globalization could lead to changes in the structure of employment, changing the number of jobs available in different occupations and industries and potentially increasing unemployment. Structural changes to employment involve the permanent reallocation of workers and resources throughout the economy. Offshoring could contribute to structural changes in employment by changing employers’ demand for different skill-sets and occupations within certain industries. For example, offshoring could lead to substantial reductions in low-skilled IT-based services work while generating increases in high-skilled work such as IT systems management.
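The scale comparison made above—offshoring estimates set against routine labor-market churn—can be sketched with the BLS figures cited in this section. The 3-million-jobs-per-decade pace used below is a hypothetical value chosen to fall within the range of the decade-long projections cited, not a reported statistic:

```python
# BLS churn figures cited above: average gross jobs destroyed and created
# per quarter since the end of the 2001 recession.
destroyed_per_quarter = 7.64e6
created_per_quarter = 7.77e6

gross_destroyed_per_year = destroyed_per_quarter * 4  # 30.56 million per year

# Hypothetical offshoring pace: 3 million jobs shifted over a decade.
offshored_per_year = 3e6 / 10

share = offshored_per_year / gross_destroyed_per_year
print(f"{share:.1%}")  # roughly 1 percent of annual gross job destruction
```

Even under this assumed pace, offshoring amounts to a small fraction of the gross job destruction the economy already absorbs each year, which is the point several economists make above.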
It may take a long time for the economy to replace jobs lost to structural changes with new jobs because workers must switch industries, locations, or skills in order to find re-employment and because employers must create new jobs. Although the workforce should eventually adjust to the structural changes in the economy, a significant structural change could potentially lead to an increase in unemployment in the meantime. Regardless of the impact of offshoring on aggregate employment and the unemployment rate, many economists acknowledge that offshoring and increased trade could produce structural changes that could generate permanent shifts within the U.S. labor market. Some economists believe these structural changes will lead to the U.S. workforce gaining better jobs overall, as U.S. businesses respond to offshoring and globalization by creating jobs in new areas of specialization that capitalize on the relatively highly skilled workforce and abundance of capital of the U.S. economy. For example, some note that while the U.S. has lost significant numbers of computer programming jobs, potentially due to offshoring, the U.S. economy at the same time has experienced an increase in the number of more sophisticated computer-related occupations, such as computer software engineers. Other economists suggest that structural changes could lead to lower-quality jobs if the U.S. develops comparative advantage in areas that primarily produce low-skilled jobs. Research has been done on the extent to which job gains and losses in recent years have resulted from structural changes in the economy; however, this research does not indicate whether the structural changes were due to offshoring. For example, in their study of the recent U.S. labor market, Erica Groshen and Simon Potter found evidence of structural change following the end of the 2001 recession, although they did not investigate whether offshoring was a cause of the structural change. 
Although many economists believe that aggregate employment will not be significantly affected by offshoring, there is widespread recognition that offshoring may nevertheless displace at least some workers from their jobs, leading to adjustment costs incurred by these workers and their families as they seek re-employment. In other words, although net job loss due to offshoring may be minimal, with losses in some industries and occupations offset by employment growth in other areas, gross job losses due to offshoring could be significant. Limited data make it difficult to draw conclusions about the current extent of job loss due to offshoring. The data limitations have led to conflicting claims, with some arguing that offshoring is a minor phenomenon and others arguing that it is being underestimated. For example, some cite data from the Mass Layoff Statistics (MLS) program produced by the Bureau of Labor Statistics, which showed that about 16,000 manufacturing and services job separations—less than 3 percent of the nonseasonal mass layoffs that took place in 2004—resulted from “movement of work” to locations outside the U.S. However, the MLS undercounts total job separations due to offshoring because it is designed to capture only mass layoffs, not total layoffs. In contrast, others cite privately collected data suggesting that the extent of offshoring is much greater. For example, some have cited data collected by Kate Bronfenbrenner and Stephanie Luce, who attempted to measure the extent of offshoring with data collection from media reports and other sources. Extrapolating from a three-month period, they estimate that as many as 406,000 manufacturing and services jobs were shifted from the U.S. to other countries in 2004. Although there is considerable uncertainty about the number of jobs that have been lost due to offshoring, a number of economists expect that offshoring is likely to expand in the future, both in absolute numbers and in types of work.
For example, Cynthia Kroll has estimated that nearly 15 million people, or 12 percent of the employed labor force, are in white-collar occupations at risk from offshoring, though she notes that not all jobs in these occupations are likely to be offshored. Private sector studies have also attempted to create forecasts of the effects of offshoring on employment in “at-risk” occupations; some of these studies project that between 100,000 and 500,000 IT jobs will be displaced within the next few years, and potentially several million jobs across all occupations will shift outside the U.S. over the next decade. However, these studies face challenges in estimating the effects of offshoring because they are often based on federal statistics that currently provide limited information on the level and effects of offshoring. Some economists have expressed concerns about the potential size of the dislocation costs for workers who lose their jobs due to offshoring, based in part on the experiences of manufacturing workers whose jobs were lost due to trade; others argue that the costs of displacement might not be as large for services workers as they have been for manufacturing workers. Dislocation costs that workers could potentially experience include lost income during their period of unemployment and a lifetime of reduced wages if they cannot find a job that pays as much as the job they lost. Dislocation costs could be higher if job losses are concentrated in geographic areas because it may be difficult for the regional economy to absorb so many job seekers quickly and the local real estate market could be affected. Research on workers dislocated from jobs in manufacturing industries that faced import competition suggests that workers who lose their jobs due to trade-related employment changes tend to be less likely to find reemployment and to face larger income declines after job displacement than workers displaced from industries that are less trade-sensitive.
However, some have raised questions about whether these results are applicable to trade-impacted services workers, who tend to have more desirable labor market characteristics than manufacturing workers. Research by J. Bradford Jensen and Lori Kletzer suggests that in recent years, services workers displaced from “tradable jobs”—jobs in industries and occupations likely to be affected by trade—had labor market advantages, such as more education and higher predisplacement earnings, over those displaced from “non-tradable” service sector jobs and from manufacturing jobs. Re-employment rates were slightly higher for displaced service sector workers in tradable jobs, compared to those in non-tradable jobs, and were significantly higher than the reemployment rates for displaced manufacturing workers. Earnings losses were significant for displaced services workers in tradable jobs, however. Of those re-employed, 55 percent experienced a decrease in earnings, with the average re-employed worker experiencing a 30 percent decline. These large losses reflect the fact that displaced services workers in tradable jobs tended to have had relatively high wages prior to displacement. Underlying the debate about the effects of offshoring on employment and job displacement are different perspectives on the following questions: Will offshoring contribute to structural changes in U.S. employment, and how will these changes affect aggregate employment levels and the type of occupations available to U.S. workers? How many workers will be displaced due to offshoring? What are the reemployment experiences of workers dislocated due to offshoring? Some economists have expressed concern that offshoring could accelerate income inequality in the U.S.; however, others argue that changes in the income distribution are driven primarily by factors unrelated to offshoring, and still others point out that offshoring could potentially decrease income inequality.
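The displacement-earnings figures cited above can be made concrete with a small sketch; the weekly earnings value is hypothetical, chosen only to illustrate what a 30 percent average decline means for a re-employed worker:

```python
# Hypothetical predisplacement weekly earnings, for illustration only.
predisplacement_weekly = 1200.0

# Average earnings decline after reemployment, as cited above.
avg_decline = 0.30

post_weekly = predisplacement_weekly * (1 - avg_decline)
print(post_weekly)  # 840.0 -- the average re-employed worker keeps 70 percent
```

A 30 percent average decline implies post-displacement earnings of 70 percent of the prior level, which is why the losses loom large for tradable-services workers despite their relatively high pre-displacement wages.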
Those who think offshoring might accelerate income inequality believe it could do so by lowering wages in some lower-wage and middle-class jobs, while potentially increasing pay in a smaller number of highly compensated positions. Those who disagree argue that offshoring is unlikely to have significant effects on wages and the U.S. income distribution because changes in demand for different skills are driven more by technological developments than by the changing international division of labor. Those who argue that offshoring could reduce income inequality note that this could occur if offshoring generates wage pressure on high-wage jobs, such as engineering, without significantly affecting the wages of low-wage jobs. Offshoring could also reduce income inequality if it reduces the cost of services that are consumed primarily by lower- and middle-income Americans. Underlying these disagreements are debates about whether, in the long run, offshoring will change the demand for U.S. workers with different skill levels, which sectors of the income distribution are most likely to be affected by this changing demand, and whether offshoring leads to reductions in the cost of services that primarily benefit lower- and middle-income Americans. Because offshoring is expected to have effects on the structure of employment within the national economy, it is expected to affect the distribution of income in the U.S.; however, experts hold differing views about the direction of these effects. Some contend that offshoring will increase income inequality and note several possible ways that it could do so. First, offshoring could increase income inequality if it primarily led to job losses or wage reductions among relatively low-income workers but had less of an effect on the jobs or wages of middle- and higher-income workers.
Some offshoring observers argue that offshoring in the service sector has thus far primarily affected lower-wage jobs, such as call-center work and office support functions, rather than middle- or higher-income jobs. Second, some economists and policy analysts have expressed concern that offshoring could reduce wages at the middle of the income distribution and lead to a “hollowing out” of the middle class if it is primarily middle-income jobs that are moved offshore or experience wage declines. For example, some economists and other policy analysts have noted that sophisticated and well-paid job functions, such as computer programming and radiology analysis, are increasingly susceptible to offshoring. In addition, some contend that offshoring will lead to increased inequality by contributing to income growth among those at the high end of the income distribution. For example, an increase in corporate profits resulting from offshoring may promote growth in high-wage managerial positions and income accruing to business owners. However, some economists contend that offshoring could also reduce income inequality if it leads to job losses or reduced wages among higher-wage occupations, such as engineering, without significantly affecting the jobs and wages of low-wage workers. In addition, some argue that offshoring could reduce inequality if it led to a decline in the wages of, and consequently the fees charged by, highly compensated workers who provide services to lower- and middle-income households. For example, if offshoring puts downward pressure on the wages of accountants, the resulting decrease in the cost of accounting services represents an increase in real wages for lower- and middle-income households who use these services, reducing inequality. Trade theory can provide a rationale for those who have noted that offshoring could lead to increasing income inequality. One of the most commonly cited models, the Heckscher-Ohlin model, predicts that when the U.S.
initiates or expands trade with a country that has a dissimilar workforce, such as a developing country, this trade is likely to have a negative effect on the distribution of wage income within the U.S. workforce. For example, when trade expands between the U.S., a country with a large pool of highly-skilled and educated workers, and a developing country with a large pool of less skilled and educated workers, this model generally predicts that the U.S. will specialize in those goods and services that are best produced by more skilled and educated workers, while the developing country will specialize in those goods and services best produced by less skilled and less educated workers. The implication of this international specialization for U.S. workers is that demand for skilled workers in the U.S. will grow, while demand for less skilled workers in the U.S. will shrink. As a result, wages for more skilled and educated U.S. workers will increase relative to the wages of less skilled and educated U.S. workers, thus increasing income inequality. However, to the extent that services offshoring involves the movement of higher-skilled work to developing countries, more complex versions of this model generate different predictions about income inequality in the U.S.—income inequality could decline if the demand for higher-skilled workers declines relative to the demand for lower-skilled workers. Although many economists agree that international trade, including offshoring, could have some impact on the distribution of income, some argue that these factors are not among the more important determinants of the U.S. income distribution. These economists argue that other factors are much more significant determinants of the changing U.S. income distribution. In particular, technological change is viewed by some economists as the primary determinant of the growing wage gap between more and less skilled workers. 
Many economists claim that as technological advances have occurred, particularly in computers and IT, requirements for technological skills for workers across a range of occupations have increased, requirements that often translate into increased demand for more educated workers. At the same time, technological advances have permitted some routine work to be automated, decreasing demand for less-skilled workers. Numerous studies have examined whether trade or technological change explained a larger share of the growing wage gap between more and less educated workers during the 1980s and 1990s, with the majority concluding that technological change was a more important determinant than trade. On balance, these studies conclude that trade has made a small contribution to the increase in income inequality. Estimates suggest that trade explains between 10 and 20 percent of the increase in income inequality, with the majority of the increase attributable to other factors such as technological change that favors higher-skilled workers. However, the impact of services offshoring on income inequality has not been examined to the same extent that manufacturing trade has.

Underlying the debate about the effects of offshoring on the U.S. income distribution are different perspectives on the following questions: What are the characteristics (occupation, skill level, and wages) of jobs that are moving offshore? What are the characteristics of jobs that are being created? Will offshoring reduce the cost of goods and services that are important consumption items for middle- and lower-income households?

Experts express varying degrees of concern that offshoring could pose security risks, including increased risks to national security, critical infrastructure, and personal privacy. Underlying these disagreements are unresolved questions about the extent to which offshore operations pose greater risks than outsourcing services domestically and the extent to which U.S.
laws and standards apply and are enforceable for work conducted offshore. Some security and offshoring experts, including officials at the Department of Defense (DOD), have raised concerns that offshoring could pose increased risks to national security and critical infrastructure, but others believe it will not. National security concerns relate to government programs and systems involved in national defense, particularly military and intelligence operations. Critical infrastructure concerns relate to systems and structures owned by either government or private entities that are essential to the country, such as utilities, transportation, and communications networks. One concern raised by security experts is that offshoring the development of software used in defense systems could pose additional security risks, specifically, that foreign workers with hostile intentions could obtain critical information or introduce malicious code into software products that could interfere with defense or infrastructure systems. There are currently few explicit restrictions on the type of services work that can be sent offshore. DOD's Defense Security Service has analyzed this issue and identified concerns with the potential exploitation of software developed in foreign research facilities and software companies for projects related to classified or sensitive programs. We have reviewed DOD's management of software developed overseas for defense weapons systems as well. Our report noted that multiple requirements and guidance acknowledge the inherent risks associated with foreign access to classified or export-controlled information and technology and are intended to protect U.S. national security by managing such access. However, we found that DOD does not require program managers of major weapons systems to identify or manage the potential security risks from foreign suppliers.
For instance, DOD guidance calling for program managers to review computer code from foreign sources not directly controlled by DOD or its contractors is not mandatory. In addition, DOD programs cannot always fully identify all foreign-developed software in their systems. Private-sector groups and government officials have raised similar concerns about the added security risks posed by offshoring to U.S. non-military critical infrastructure, such as nuclear power plants, the electric power grid, transportation, or communications networks. For example, some have noted that sensitive but unclassified information, such as the plans of important U.S. utilities or transport networks, could be sent to foreign locations where it could be released improperly or made available to hostile foreign nationals. Other concerns relate to the offshoring of software development and maintenance. Software security experts in the public sector, including DOD and the Central Intelligence Agency, have expressed concern that organizations and individuals with hostile intentions, such as terrorist organizations and foreign government economic and information warfare units, could gain direct access to software code by infiltrating or otherwise influencing contractor and subcontractor staff, and then use this code to perpetrate attacks on U.S. infrastructure systems or conduct industrial or other forms of espionage. Security experts also note that critical infrastructure systems rely extensively on commercial off-the-shelf (COTS) software programs that are developed in locations around the world. These programs can contain exploitable vulnerabilities, and potentially even malicious code, that can allow indirect access to infrastructure systems to cause the systems to perform in unintended ways.
Thus, some experts believe that ongoing use of COTS software modules, whether developed offshore or not, as well as offshoring of software-related services could increase the risk of unauthorized access to critical infrastructure code in comparison to in-house development and maintenance of proprietary programs and code. Security experts also express concerns about longer-term effects of offshoring. For instance, some note that continued offshoring of certain products might make the U.S. dependent on foreign operations for critical civilian or military products, and therefore vulnerable if relations between the U.S. and those countries become hostile. Another concern is the ability to control access to certain civilian technologies with military uses when work on these technologies takes place in foreign locations. Some fear that offshoring certain high-tech work may lead to the transfer of information and technology that could be used by foreign entities to match or counter current U.S. technical and military superiority. The U.S. can control exports of such dual-use technologies by requiring firms to obtain an export license from the Department of Commerce before such technologies can be worked on in foreign locations or by foreign nationals. We have reviewed some aspects of this export licensing program and found key challenges to Commerce's primary mechanism for ensuring compliance with export licenses.

Some representatives of business groups contend that offshoring may not pose major increased security concerns for a variety of reasons. Some believe that protections currently in place are adequate to manage the added risks posed by offshoring. Currently, the Department of Defense has mandatory procedures to safeguard classified information that is released to U.S. government contractors, and firms that offshore certain work related to military technologies are required to obtain export licenses from either the State or Commerce departments.
In addition, some argue that foreign workers in offshore locations do not necessarily pose added security risks, relative to U.S. workers in domestic outsourced operations, because domestic workers could also improperly handle information. Some foreign affairs experts also argue that offshoring could have positive effects on national security. They contend that increased international trade may reduce the threat of international tensions because countries with integrated economies have a stake in one another's well-being.

Experts express varying degrees of concern about the impact offshoring may have on personal privacy when medical and financial records become accessible in overseas locations. Privacy advocates, academics, and offshoring researchers have noted concerns with the possibility that personal information sent to foreign locations could be improperly released, leading to identity theft, diversion of funds, and breaches of confidentiality. However, others note that the Gramm-Leach-Bliley Act, which covers the privacy of financial information, limits disclosure of personal information and requires financial institutions to protect the security and confidentiality of their customers' personal information through written agreements when information is sent to a third-party service provider. The privacy of medical information is covered under the Health Insurance Portability and Accountability Act Privacy Rule, which requires certain entities that hold medical records to receive satisfactory written assurance that any of their business associates will handle information appropriately. We are currently conducting work that examines offshoring of protected health information and related privacy issues.

Underlying the debate about the effects of offshoring on security are differing perspectives on the following questions: To what extent does offshoring pose added security risks?
Do existing laws, regulations, and controls provide adequate protection from any added risks that offshoring does pose?

Offshoring observers have proposed a broad range of policies in response to offshoring, representing a variety of ideas about how public policies could address the concerns raised by offshoring. We have categorized these proposals into four types on the basis of the concerns they seek to address: (1) improving U.S. global competitiveness, (2) addressing effects on the U.S. workforce, (3) addressing security concerns, and (4) reducing the extent of offshoring. Some analysts have proposed policies in more than one of these areas. On the other hand, it is also possible to take the position that services offshoring does not warrant any changes in government policies. While we indicate the rationales that have been presented for the various policy proposals, we do not evaluate the merits and drawbacks of these proposals. Relevant factors to consider in evaluating proposals would include the magnitude of the problems that policy proposals seek to address, likely effectiveness of the proposals, potential negative consequences, financial costs to government, and feasibility of administration.

Proponents of policies that seek to improve U.S. global competitiveness view offshoring as one aspect of much broader economic and trade issues and maintain that the debate should be focused on issues broader than the offshoring of work by companies headquartered in the U.S. They contend that the appropriate focus should be on the broader public policy issue of how the U.S. can continue to compete and attract high-paying jobs in a time of rapidly increasing trade and open global markets that allow multinational firms to hire labor from around the world. These proponents have articulated proposals that seek to help the U.S.
economy develop new areas of specialization in response to increased foreign competition by fostering the types of industries and businesses that can succeed in a global economy and promote the creation of high-value jobs. In addition, some regard these proposals as important for promoting U.S. economic growth, regardless of the offshoring debate. Many of these proposals have been articulated as broad policy objectives, such as "fostering innovation" or "improving education," rather than as specific policy mechanisms to achieve these objectives. Suggestions for how to improve U.S. global competitiveness include proposals to promote innovation and creative industries, improve human capital and the skill level of the U.S. workforce, reduce the costs of doing business in the U.S., and establish trade practices that promote U.S. exports.

Many economists and policy analysts have predicted that for the U.S. economy to successfully adjust to offshoring, it will need to develop and produce new, innovative goods and services that require and reward higher levels of skill, and they believe that government actions can help to bring about this development. In addition, they point out that private companies can lack the incentives and time horizons to invest sufficiently in basic research, that is, research undertaken without specific desired applications but that can lead to innovations. Some have also noted that federal funding for basic research has recently declined as a percentage of GDP and that foreign governments are increasing their research spending to improve their own economies' innovative capacity. Policies that have been proposed to promote innovation include:

- Increasing government support for basic research and development projects.
- Making permanent the current research and development tax credit to encourage companies to increase their own spending. Currently, the tax system allows businesses to obtain a tax credit for certain spending on research and development, but this credit requires regular reauthorization, rather than being a permanent feature of the tax code.
- Increasing government spending on particular forms of infrastructure and technology that can support innovation, such as broadband Internet connections.

Many who emphasize the broad goal of improving U.S. competitiveness also support upgrading the nation's workforce skills and human capital by improving education, increasing opportunities for worker training, and reforming immigration policy. They contend that for the economy to move into higher-end, innovative products to replace job functions that have been offshored, more American workers will need to develop the knowledge and skills to perform complex, nonroutine work. In particular, they emphasize the importance of education programs in the science, technology, engineering, and mathematics fields. In addition, some have noted that workers will increasingly need to upgrade their skills continually throughout their careers in order to adjust to rapid changes in the modern economy. As a result, many policies proposed in response to offshoring seek to increase the skill level of current and future generations of U.S. workers, including the following proposals:

- Improving K-12 education, with special attention to increasing achievement in math and science fields. Proponents of these policies argue that U.S. students demonstrate poor achievement in these subjects relative to students in other nations, bringing into question whether the U.S. will have an adequate supply of scientists and engineers to sustain a globally competitive and innovative economy.
- Expanding and improving lifelong learning through increased federal support of worker training and advanced adult education programs.
One specific proposal is instituting "human capital tax credits" that could be offered either to businesses that spend money on worker training programs or to individuals who spend money on their own education. Such tax credits could partially offset the costs to business of training workers who may not stay with a company for long and the costs to workers of learning skills that may not guarantee long-term employment.

- Encouraging immigration of high-skilled workers. Proponents of these policies note that a large and growing share of U.S. scientists and engineers are foreign-born. Specific proposals to increase the number of highly educated immigrants in the U.S. include raising the number of temporary work visas that allow high-skilled workers to enter the country and expediting the issuance of green cards for foreign graduates of U.S. universities.

Other proposals to improve competitiveness focus on ways to reduce the costs of doing business in the U.S. relative to other countries. Proponents of these policies note that cost reduction is a leading motive for businesses to offshore service-sector work and that higher costs can affect the ability of U.S. firms to compete against foreign firms in the global economy. Proposals to reduce business costs in the U.S. include:

- Reducing federal taxes and regulatory requirements on businesses. Proponents of these policies argue that high and complex taxes and extensive regulations raise costs for companies to do business in the U.S. These proposals assume that taxes in foreign countries would remain unchanged, so that a decline in U.S. taxes would reduce the cost of doing business in the U.S. relative to the cost of doing business overseas, thus increasing incentives for companies to keep work in the U.S.
- Reducing costs to businesses of providing health care to employees.
Proponents of these policies argue that high health care costs drive up the total cost of labor compensation for employers, although it is possible that increases in U.S. health care costs could be partially or fully offset by decreases in other components of labor compensation. Various approaches have been proposed to decrease health care costs, such as using improved technology in the management of patient care, establishing association health plans that would allow small businesses greater leverage in negotiations with health insurance providers, and establishing a universal health care system.

Another type of policy response to offshoring and increasing global interdependence focuses on expanding the market for U.S. exports. Proponents of these policies contend that several factors may be depressing U.S. exports and that more can be done to "level the playing field" of international trade. One concern is that while the U.S. has opened up its markets to foreign competition, some foreign governments have not opened certain of their markets, especially for services in which U.S. companies are globally competitive, such as financial services. Where trade agreements are in place, some have raised concerns that certain foreign governments may be violating them, such as by providing subsidies to their own industries or imposing nontariff barriers to their markets. A further concern that has been expressed is that some foreign governments may be artificially lowering the value of their currencies relative to the dollar so that their exports are relatively inexpensive, while U.S. exports become relatively more expensive. Policies that have been proposed to redress these concerns and enhance U.S. exports include the following:

- Continuing to negotiate trade agreements that will open foreign markets in which U.S. companies have export opportunities.
- Taking more aggressive actions to challenge foreign government actions that may violate existing trade agreements, such as bringing actions at the World Trade Organization (WTO) and imposing retaliatory measures allowed under WTO rules. Such violations could include foreign countries' tax incentives to U.S. companies that offshore or inadequate protection of the intellectual property rights of U.S. products, which harms the sales of U.S. products forced to compete with unlicensed versions.
- Continuing to persuade countries that may have undervalued currencies to raise their currency values or to otherwise engineer a controlled decline in the value of the dollar.

Proposals to address concerns about offshoring's effects on workers seek to reduce the costs borne by some individuals when an economy becomes increasingly open to foreign trade and competition. Many of these proposals would provide assistance to workers during their period of unemployment and help them obtain new jobs. While some of these proposals put particularly strong emphasis on retraining displaced workers, not all observers agree that retraining policies would be effective. Other proposals would expand broad social insurance programs that would cover all workers and provide benefits to anyone who loses a job. Many proposals to help workers affected by offshoring focus on programs designed to help workers adjust to job losses and to facilitate their reemployment. These include the following proposals:

- Amending the Worker Adjustment and Retraining Notification (WARN) Act to increase the notice that employers must give employees from 60 to 90 days when offshoring will cause a mass layoff or plant closure.
- Extending the Trade Adjustment Assistance (TAA) program to services workers.
The TAA program provides extended unemployment benefits and subsidized training to workers involved in the production of articles who can demonstrate that they were displaced due to increased imports or shifts in production to foreign countries. It generally serves workers who have been laid off from the manufacturing sector.

- Expanding or developing income support and reemployment programs that would assist displaced workers in general, not just those who meet TAA criteria. Several policy advocates and researchers who have studied offshoring have stated that existing government programs to serve displaced workers do not provide adequate protections or assistance for a changing economy in which global trade affects more workers. For instance, they have questioned the effectiveness of existing worker retraining programs or expressed doubts that retraining will be an effective response as international pressures begin to affect higher-skilled occupations and workers who already have advanced educations.
- Establishing wage insurance, a program that would pay displaced workers who find reemployment at a lower wage a percentage of the difference between their previous and new earnings for a limited time. Proponents of wage insurance contend that it would provide incentives for dislocated workers to reenter the labor market quickly, even if they must do so at lower wages. In addition, proponents maintain that wage insurance could encourage workers to take jobs in unfamiliar fields where their inexperience commands lower wages, but where the job imparts new in-demand skills, and allow them to build new careers.

Some have proposed broader reforms to strengthen the social safety net and mitigate some of the hardships generated by the economic insecurity associated with an increasingly integrated global economy.
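The wage insurance mechanism described above can be sketched in a few lines; the 50 percent replacement rate, $10,000 annual cap, and function name below are illustrative assumptions for exposition, not features of any specific legislative proposal:

```python
def wage_insurance_payment(old_annual_wage, new_annual_wage,
                           replacement_rate=0.5, annual_cap=10_000):
    """Annual payment to a displaced worker reemployed at a lower wage.

    Pays a fraction (replacement_rate) of the gap between the previous
    and new annual earnings, up to a yearly cap. Rate and cap here are
    hypothetical values chosen only to illustrate the mechanism.
    """
    if new_annual_wage >= old_annual_wage:
        return 0.0  # no payment if the new job pays as much or more
    earnings_gap = old_annual_wage - new_annual_wage
    return min(replacement_rate * earnings_gap, annual_cap)

# A worker who earned $50,000 and is reemployed at $38,000 would receive
# half of the $12,000 shortfall under these assumptions:
print(wage_insurance_payment(50_000, 38_000))  # 6000.0
```

The cap illustrates one design choice discussed in such proposals: limiting annual outlays per worker keeps program costs bounded while preserving the incentive to return to work quickly.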
Proponents of these policies emphasize the need to accompany open trade policies with enhanced social protections for all workers who are increasingly exposed by international competition to risks such as job loss, job insecurity, or downward wage pressure. In addition, proponents contend that government policies should compensate workers who bear the costs of trade-induced economic disruptions. Such proposals would potentially affect large segments of the population and would require extensive rethinking and redesign of U.S. social policy, but proponents maintain that they could increase public acceptance of open trade policies. Such proposals include the following:

- Making health and pension benefits portable and/or universal so that workers who lose their jobs can retain their access to medical care and retirement plans. Some favor the government's providing universal health care coverage, and others propose preserving or expanding portable or universal retirement coverage.
- Requiring employers that move jobs offshore to pay some of the costs of worker assistance programs. Proponents contend that government should play a role in redistributing some of the gains from offshoring to workers who have been negatively affected, and they believe that such proposals would serve this principle and could mitigate some concerns about offshoring's effects on income inequality.

Proposals to address concerns about security seek to reduce the added risk that information sent to foreign locations could be used in ways that could impair U.S. national security, critical infrastructure, or personal privacy. Proposals include restrictions on certain types of work with security implications and strengthening standards governing how information is handled.
Concerns that offshoring could pose increased risks to national security or critical infrastructure have led to proposals to restrict some services work from being sent to foreign locations or performed by foreign nationals and to improve security standards for work that is performed offshore, including the following proposals:

- Requiring that certain projects involving defense acquisitions or military equipment be performed exclusively in the U.S.
- Requiring that work on critical infrastructure projects such as electricity grids or pipelines be done within the U.S.
- Increasing the standards and review procedures that apply to use of offshore services. For example, GAO has previously recommended that DOD adopt more effective practices for developing software and increasing oversight of software-intensive systems, such as ensuring that risk assessments of weapons programs consider threats to software development from foreign suppliers.

Concerns that offshoring could pose added risks to the privacy of personal information have led to a variety of proposals to enhance protections, including the following:

- Requiring companies to keep work involving sensitive private information in the U.S.
- Requiring companies to notify and obtain consent from U.S. residents before sending personal information to be processed in other countries.
- Ensuring that consumers have legal recourse against U.S. firms for privacy breaches by foreign contractors.
- Strengthening U.S. laws and regulations concerning the handling of personal information, regardless of whether the data are handled domestically or overseas. Those who propose this option contend that U.S. laws and regulations do not provide adequate protections for personal information in general, regardless of where the information is handled.

Another type of policy that has been proposed to address the various concerns raised by offshoring focuses on reducing the extent of offshoring.
Some of these policy proposals focus on offshoring by government agencies, while others seek to modify firms' incentives with respect to where they source their work. There have been numerous proposals to limit or constrain offshoring by federal and state governments, including the following examples:

- Legislation proposed to prohibit federal work or federally funded work from being performed in foreign countries, unless the foreign goods or services are for use in that country.
- Legislation proposed to require contractors with the U.S. military and executive agencies to have at least 50 percent of their workforce in the U.S.
- Legislation proposed to prohibit the federal government from providing assistance to, or doing business with, companies that in the last 5 years offshored jobs previously performed in the U.S., unless the company also creates significant replacement jobs in the U.S.
- Legislation proposed in several states to restrict the procurement of state-funded services from overseas.
- Proposals to prohibit government contracts from going to countries that have not signed trade agreements with the U.S. on non-discrimination in government procurement.

Another proposed means of reducing offshoring is to change tax policy to alter the relative costs of domestic versus foreign production. Many economists and policy analysts believe the current tax system provides incentives for U.S. multinational firms to locate work at their overseas affiliates because it allows them to defer taxes on profits earned on some activities in foreign countries until the profits are brought back to the U.S. However, some note that this tax treatment helps U.S.-owned businesses compete in foreign markets against foreign-owned businesses. Proposals for changing the tax code include:

- Eliminating the ability of firms to defer taxes on foreign-earned income by taxing foreign profits at the same rate as domestic profits in the year they are earned.
This proposal would affect only offshoring that takes place between U.S.-based multinational firms and their foreign affiliates. It would not affect offshoring that involves outsourcing work to separate firms located overseas.

- Establishing a value-added tax (VAT) system, in which a tax could be applied to products imported to the U.S. and rebated on products the U.S. exports. However, as GAO and others have reported, many economists believe that such border tax adjustments would not affect the trade balance in the long run because exchange rates would adjust to offset the border adjustments.

Other policy proposals would enhance incentives for firms to locate work domestically. Proponents of these policies note that foreign governments award incentives, such as providing buildings, infrastructure, and tax exemptions, to companies that export service products. In response, some suggest that the U.S. provide similar incentives, including the following proposals:

- Providing tax reductions or subsidies to companies that employ domestic workers. One specific proposal is a tax credit for companies in certain industries identified as affected by offshoring that would cover the payroll taxes of newly hired employees.
- Providing federal assistance for regional economic development plans, including infrastructure improvements and grants targeted at attracting work that might otherwise be offshored.

Determining appropriate policy responses to the offshoring phenomenon is challenging for several reasons. Services offshoring is a relatively recent phenomenon that raises a broad range of issues. No federal data series directly measure the extent of offshoring or its effects. Moreover, experts have expressed differing views about the potential impacts of offshoring. Nevertheless, there are some key areas where further research might help to provide more information about the impacts and policy implications of services offshoring.
These areas include

- impacts of offshoring on various sectors of the U.S. economy, especially the sectors that are emerging as new sources of comparative advantage;
- impacts of offshoring on the workforce, such as the numbers of workers displaced and their reemployment experiences;
- impacts of offshoring on the U.S. income distribution, including trends in wage levels of jobs moving offshore; and
- any increased security-related risks posed by offshoring and the extent to which these are mitigated by current practices and laws.

Further research in these areas could help inform policy making by providing more information about the nature and magnitude of the benefits and costs resulting from offshoring. For example, research on whether offshoring is negatively impacting important sectors of the economy could help to inform the need for new policies to enhance U.S. competitiveness. Further information on the number of job losses resulting from offshoring as well as how workers fare in the labor market after their dislocations could help inform the need for new policies to assist displaced workers and to target these policies appropriately. Research on how offshoring is affecting the distribution of income in the U.S. could help to inform policy makers whether new policies are needed to address income inequality. Research that examines whether offshoring increases risks to national security, critical infrastructure, and consumer privacy can help to inform policy makers whether there is a need for additional security protections. Finally, research in all of these areas may help to advance the debate about whether policies to reduce the extent of offshoring are warranted.

Researchers are conducting studies that can shed light on some of these areas. For example, some researchers have conducted case studies that examine the effects of offshoring in the semiconductor, call center, and radiology industries.
Among other issues, these studies examined the types of work that are conducted offshore and the types of work that are conducted in the U.S. In their study of the radiology industry, for instance, Frank Levy and Ari Goelman conclude that radiology work conducted overseas is unlikely to displace radiology work done in the U.S., noting that offshore work primarily consists of preliminary readings of radiological images conducted at night, when few radiologists in the U.S. would be available. However, the radiology industry may not be comparable to other industries in which offshoring takes place. Other researchers have used statistical methods to analyze existing data series. For example, Martin Baily and Robert Lawrence have used a variety of methods to analyze trade and employment data and examine offshoring's effects on unemployment. In some instances, researchers may be able to apply statistical methods developed in research on offshoring and trade in the manufacturing sector to research on services offshoring. There may also be opportunities to expand or improve current federal data series to obtain more information on this topic. For example, some have raised concerns that there is a significant discrepancy between the levels of services imports from India reported by U.S. federal government sources and the levels reported by India. In a review of BEA and Indian services data, we identified several factors that contributed to this discrepancy, such as differences in each country's definitions of trade in services. We also recommended ways in which BEA can further improve its services trade data.
Offshoring researchers have identified other limitations of current databases: data on services trade are not available at a sufficiently detailed industry level; trade data may not capture services that are bundled with goods or other services; and data on the foreign affiliates of multinational corporations lack information on the occupations of workers employed overseas. Table 1 illustrates some key areas where further research might contribute to a better understanding of the effects and policy implications of offshoring. The table identifies some pertinent data sources, though none of the sources can directly answer the research questions. Generally speaking, these data sources can provide information on a phenomenon, such as changes in employment in a given occupation or changes in the output produced by an industry, but they cannot provide information on the extent to which these changes resulted from offshoring. For example, BLS collects data on employment levels in various industries and occupations, but the data capture job losses and gains that occur for all reasons, not only because of offshoring. Table 1 also identifies some of the methodological approaches that have been, or could be, used in these areas of research. These include conducting in-depth studies of firms and industries and using statistical methods to analyze existing data. The table also highlights some potential challenges and limitations of the various approaches. For example, while in-depth studies of services offshoring in particular industries may shed light on some dynamics of the offshoring phenomenon, their findings are not necessarily representative of what is occurring nationally. Our overview of research questions, data sources, research methods, and limitations is not meant to be exhaustive. Researchers will continue to pose new questions and approaches to gain further insights into offshoring.
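The attribution problem described above, that aggregate employment data cannot isolate offshoring's contribution, can be sketched with a toy calculation. The figures and component labels below are hypothetical illustrations, not actual BLS data:

```python
# Hypothetical illustration of why net employment data cannot identify
# offshoring's contribution: two scenarios with very different offshoring
# job losses produce the same published net change.

def net_change(components):
    """A survey of employment levels observes only the sum of all causes."""
    return sum(components.values())

# Assumed (made-up) components of one year's job change in an industry
scenario_a = {"domestic_demand": +50_000, "productivity": -30_000, "offshoring": -5_000}
scenario_b = {"domestic_demand": +65_000, "productivity": -30_000, "offshoring": -20_000}

# Both scenarios show an identical net gain of 15,000 jobs, so the net
# figure alone reveals nothing about the size of the offshoring component.
assert net_change(scenario_a) == net_change(scenario_b) == 15_000
```

Separating these components would require additional information, such as firm-level data on where work is performed, which is exactly the kind of detail the data limitations discussed above leave out.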
Services offshoring is likely to remain an important public policy issue for years to come. The extent of offshoring could increase in the future as technology advances, U.S. firms become more adept at offshoring, and other countries continue to improve their abilities to provide services for the global economy. Because the services offshoring phenomenon is relatively new, little is known about its effects on the U.S. economy and society. Due to limited data and empirical research thus far, the debate about offshoring has largely been theoretical in nature. Policy makers and analysts face data challenges as they seek to assess the wide range of policies that have been proposed in response to offshoring. In making these assessments, they may consider various relevant factors, such as the magnitude of the problems that policy proposals seek to address, likely effectiveness of the proposals, potential negative consequences, financial costs to government, and feasibility of administration. As the offshoring phenomenon continues, researchers in both the public and private sectors are likely to conduct more studies and collect more data that will provide a clearer understanding of offshoring and its effects. We have highlighted some key areas where further research might help advance the debate about the impacts and policy implications of offshoring. While such research faces numerous challenges and limitations, it offers some prospect for additional insights on diverse aspects of services offshoring. We provided a draft of this report to the Departments of Commerce, Labor, Treasury, and the Office of the United States Trade Representative. We received written comments from Commerce, which are reprinted in appendix III. Commerce stated that it appreciated the thoroughness of our review and that the report will be a useful reference starting point for discussions of the causes and impacts of offshoring. 
Commerce also stated that offshoring may raise living standards for the average American and affect fewer workers than the headlines seem to indicate, but that all of us must be troubled when any American workers lose their jobs, for whatever reason. Commerce added that the most powerful remedy for this problem is a growing economy that can ensure every American who wants a job is able to find one. Commerce, Treasury, and the Office of the U.S. Trade Representative provided technical comments, and we modified the report as appropriate to address these comments. The Department of Labor did not have comments. Copies of this report are being sent to the Departments of Commerce, Labor, and Treasury; the Office of the U.S. Trade Representative; appropriate congressional committees; and other interested parties. Copies will be made available to others upon request. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about matters discussed in this report, please contact me at (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix IV. Our objectives in this study were to: (1) describe experts’ views about the potential effects of services offshoring on the U.S. economy, workforce, national security, and consumer privacy; (2) describe the types of policies that have been proposed in response to offshoring; and (3) discuss areas where further research could advance the debate on offshoring. Our methodology consisted of an extensive literature review and interviews of selected experts. In addition, we attended several conferences on services offshoring during the course of our work. We conducted our work from May 2004 to November 2005 in accordance with generally accepted government auditing standards. 
We reviewed literature on services offshoring produced by academic experts, think tanks, business groups, labor groups, and government agencies such as the Congressional Research Service. Our literature review built upon work conducted under previous GAO studies of services offshoring. We collected additional literature by reviewing research databases such as Econlit and Proquest and through general Internet searches. We also conducted targeted searches of the literature produced by various think tanks, interest groups, and other government agencies. In addition, we were referred to literature through citations in other literature, through media accounts, and by experts we interviewed. Throughout the course of our work, we sought to obtain a diverse body of literature that described various views on the potential effects of services offshoring and policy proposals. We reviewed studies summarizing empirical research findings solely to describe the views of various experts on the effects of offshoring and the research methodologies they used. The inclusion of studies in this report does not imply that we deem them definitive or that the evidence presented in them is conclusive. Additionally, some of these studies contain estimates of job losses due to offshoring of services that are of undetermined reliability. These estimates are presented for illustrative purposes and should not be considered in the same manner as the official government data on employment and trade discussed in the report. See the bibliography for a list of key literature reviewed for this report. We interviewed experts from government agencies, academia, think tanks, and organizations representing business and labor interests. We met with government officials at the Departments of Commerce, Labor, and Treasury, and at the Office of the U.S. Trade Representative because each of these agencies analyzes issues related to offshoring.
We selected other experts to interview based upon literature they published related to the offshoring phenomenon and through referrals by other experts. We strove to obtain a balance of views among the experts we interviewed. In addition to interviewing experts, we reviewed interviews conducted for other GAO work on services offshoring. See appendix II for a list of experts interviewed for this report. We also attended several conferences related to services offshoring to obtain further viewpoints on this topic, including conferences organized by the Brookings Institution; the William Davidson Institute at the University of Michigan Business School; the CATO Institute; the Labor and Worklife Program at Harvard Law School and the North American Alliance for Fair Employment; Stanford Business School's Sloan Masters Program and the World Affairs Council of Northern California; the Asia-Pacific Research Center at Stanford University; and the Bernard and Audre Rapoport Center for Human Rights and Justice of the University of Texas School of Law.

Jodie Allen, Senior Editor, Pew Research Center
Robert Atkinson, Vice President & Director, Technology & New Economy Project, Progressive Policy Institute
Ashok Bardhan, Senior Research Associate, Fisher Center for Real Estate & Urban Economics, Haas School of Business, University of California Berkeley
William Baumol, Professor of Economics, New York University
Jagdish Bhagwati, Professor of Economics, Columbia University
Josh Bivens, Trade Economist, Economic Policy Institute
Susan Collins, Senior Fellow, Economic Studies, The Brookings Institution
Ralph Gomory, President, Alfred P. Sloan Foundation
Ron Hira, Assistant Professor of Public Policy, Rochester Institute of Technology, and Vice President for Career Activities, Institute of Electrical and Electronics Engineers-USA
Josh James, Manager of Research, American Electronics Association
Matthew Kazmierczak, Director of Research, American Electronics Association
Martin Kenney, Professor of Human and Community Development, University of California Davis
Lori Kletzer, Professor of Economics, University of California, Santa Cruz
Cynthia Kroll, Senior Regional Economist, Fisher Center for Real Estate & Urban Economics, Haas School of Business, University of California Berkeley
Jeff Lande, Senior Vice President, Information Technology Association of America
Robert Lawrence, Professor of International Trade and Investment, Center for Business & Government, John F. Kennedy School of Government, Harvard University
Thea Lee, Assistant Director for International Economics, AFL-CIO
Robert Litan, Senior Fellow, Economic Studies, The Brookings Institution
Catherine Mann, Senior Fellow, Institute for International Economics
Lee Price, Research Director, Economic Policy Institute
Robert Reich, Professor of Social and Economic Policy, Brandeis University
Dani Rodrik, Professor of International Political Economy, John F. Kennedy School of Government, Harvard University
Enrique Sanchez, Director, Bank of America (retired)

In addition to the contact named above, Andrew Sherrill, Assistant Director; Yunsian Tai and Katrina Ryan, Analysts in Charge; Rhiannon Patterson; Eric Wenner; Margaret Armen; Lawrance Evans, Jr.; and Tovah Rom made significant contributions to this report.

Defense Acquisitions: Knowledge of Software Suppliers Needed to Manage Risks. GAO-04-678. Washington, D.C.: May 25, 2004.
Defense Trade: Better Information Needed to Support Decisions Affecting Proposed Weapons Transfers. GAO-03-694. Washington, D.C.: July 11, 2003.
Export Controls: Post-Shipment Verification Provides Limited Assurance That Dual-Use Items Are Being Properly Used. GAO-04-357. Washington, D.C.: January 12, 2004.
Export Controls: Processes for Determining Proper Control of Defense-Related Items Need Improvement. GAO-02-996. Washington, D.C.: September 20, 2002.
Federal Procurement: International Agreements Result in Waivers of Some U.S. Domestic Source Restrictions. GAO-05-188. Washington, D.C.: January 26, 2005.
Higher Education: Federal Science, Technology, Engineering, and Mathematics Programs and Related Trends. GAO-06-114. Washington, D.C.: October 12, 2005.
Highlights of a GAO Forum: Workforce Challenges and Opportunities for the 21st Century: Changing Labor Force Dynamics and the Role of Government Policies. GAO-04-845SP. Washington, D.C.: June 2004.
Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005.
International Trade: Current Government Data Provide Limited Insight into Offshoring of Services. GAO-04-932. Washington, D.C.: September 22, 2004.
International Trade: Further Improvements Needed to Handle Growing Workload for Monitoring and Enforcing Trade Agreements. GAO-05-537. Washington, D.C.: June 30, 2005.
International Trade: Treasury Assessments Have Not Found Currency Manipulation, but Concerns about Exchange Rates Continue. GAO-05-351. Washington, D.C.: April 19, 2005.
International Trade: U.S. and India Data on Offshoring Show Significant Differences. GAO-06-116. Washington, D.C.: October 27, 2005.
Tax Policy and Administration: Review of Studies of the Effectiveness of the Research Tax Credit. GAO/GGD-96-43. Washington, D.C.: May 21, 1996.
Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004.
The Worker Adjustment and Retraining Notification Act: Revising the Act and Educational Materials Could Clarify Employer Responsibilities and Employee Rights. GAO-03-1003. Washington, D.C.: September 19, 2003.

American Electronics Association. Offshore Outsourcing in an Increasingly Competitive and Rapidly Changing World: A High-Tech Perspective. Washington, D.C.: March 2004.
Amiti, Mary and Shang-Jin Wei. "Fear of Service Outsourcing: Is It Justified?" IMF Working Paper WP/04/186. Washington, D.C.: International Monetary Fund, October 2004.
Antras, Pol, Luis Garicano, and Esteban Rossi-Hansberg. "Offshoring in a Knowledge Economy." NBER Working Paper 11094. Cambridge, Mass.: National Bureau of Economic Research, January 2005.
Aron, Ravi and Ying Liu. "A Study of Operational Risk in Off-Shore Outsourcing of Information Work: Evidence from Field Research." Working Paper OPIM-2005-05-06. Philadelphia, Pa.: The Wharton School, University of Pennsylvania.
Arora, Ashish and Alfonso Gambardella. "The Globalization of the Software Industry: Perspectives and Opportunities for Developed and Developing Countries." NBER Working Paper 10538. Cambridge, Mass.: National Bureau of Economic Research, June 2004.
Atkinson, Robert. "Meeting the Offshoring Challenge." Policy Report. Washington, D.C.: Progressive Policy Institute, July 2004.
Atkinson, Robert. "Understanding the Offshoring Challenge." Policy Report. Washington, D.C.: Progressive Policy Institute, May 2004.
Baily, Martin Neil and Robert Z. Lawrence. "What Happened to the Great U.S. Job Machine? The Role of Trade and Electronic Offshoring." Brookings Papers on Economic Activity 2 (2004): 211-284.
Bajpai, Nirupam, Jeffrey Sachs, Rohit Arora, and Harpreet Khurana. "Global Services Sourcing: Issues of Cost and Quality." Center on Globalization and Sustainable Development Working Paper Series 16. New York, N.Y.: The Earth Institute at Columbia University, June 2004.
Bale, Malcolm D. and John H. Mutti. "Income Losses, Compensation, and International Trade." The Journal of Human Resources 13:2 (spring 1978): 278-285.
Bardhan, Ashok D. and Cynthia Kroll. "The New Wave of Outsourcing." Fisher Center Research Reports 1103. Berkeley, Calif.: University of California, Berkeley, Fisher Center for Real Estate and Urban Economics, fall 2003.
Bardhan, Ashok D. and Dwight M. Jaffee. "Innovation, R&D and Offshoring." Fisher Center Research Reports 1005. Berkeley, Calif.: University of California, Berkeley, Fisher Center for Real Estate and Urban Economics, fall 2005.
Batt, Rosemary, Virginia Doellgast, and Hyunji Kwon. "A Comparison of Service Management and Employment Systems in U.S. and Indian Call Centers." Paper prepared for the Brookings Trade Forum: Offshoring White-Collar Work—The Issues and the Implications, Washington, D.C., May 12-13, 2005.
Berg, Andrew and Anne Krueger. "Trade, Growth and Poverty: A Selective Survey." IMF Working Paper WP/03/30. Washington, D.C.: International Monetary Fund, February 2003.
Bergsten, C. Fred and the Institute for International Economics. The United States and the World Economy: Foreign Economic Policy for the Next Decade. Washington, D.C.: Institute for International Economics, January 2005.
Bernard, Andrew B. and J. Bradford Jensen. "Who Dies? International Trade, Market Structure, and Industrial Restructuring." NBER Working Paper 8327. Cambridge, Mass.: National Bureau of Economic Research, June 2001.
Bernard, Andrew B., J. Bradford Jensen, and Peter K. Schott. "Falling Trade Costs, Heterogeneous Firms, and Industry Dynamics." NBER Working Paper 9639. Cambridge, Mass.: National Bureau of Economic Research, April 2003.
Bernard, Andrew B., J. Bradford Jensen, and Peter K. Schott. "Survival of the Best Fit: Exposure to Low-Wage Countries and the (Uneven) Growth of U.S. Manufacturing Plants." November 2004. http://mba.tuck.dartmouth.edu/pages/faculty/andrew.bernard/working_papers.html (accessed on Apr. 27, 2005).
Bhagwati, Jagdish, Arvind Panagariya, and T.N. Srinivasan. "The Muddles Over Outsourcing." Journal of Economic Perspectives 18:4 (fall 2004): 93-114.
Bivens, L. Josh. "Truth and Consequences of Offshoring: Recent Studies Overstate the Benefits and Ignore the Costs to American Workers." Briefing Paper 155. Washington, D.C.: Economic Policy Institute, August 2, 2005.
The Boston Consulting Group. Capturing Global Advantage: How Leading Industrial Companies are Transforming Their Industries by Sourcing and Selling in China, India, and Other Low-Cost Countries. Boston, Mass.: April 2004.
Brainard, Lael and Robert E. Litan. "'Offshoring' Service Jobs: Bane or Boon—and What to Do?" The Brookings Institution Policy Brief 132. Washington, D.C.: The Brookings Institution, April 2004.
Bronfenbrenner, Kate and Stephanie Luce. "The Changing Nature of Corporate Global Restructuring: The Impact of Production Shifts on Jobs in the U.S., China, and around the Globe." Paper submitted to the U.S.-China Economic and Security Review Commission. October 14, 2004.
The Brookings Institution. "Offshoring and Privacy: Consumer Data in the Global Economy." Transcript from a Brookings Briefing. Washington, D.C., April 8, 2005.
The Brookings Institution. "Preparing America to Compete Globally: A Forum on Offshoring." Transcript from a Brookings Briefing. Washington, D.C., March 3, 2004.
The Brookings Institution. "Services Offshoring: What Do the Data Tell Us?" Summary of Data Workshop. Washington, D.C., June 22, 2004.
Brown, Clair and Greg Linden. "Offshoring in the Semiconductor Industry: A Historical Perspective." Berkeley-Doshisha Employment and Technology Working Paper Series cwts-02-2005. Berkeley, Calif.: University of California, Berkeley, 2005.
Brynjolfsson, Erik and Lorin M. Hitt. "Beyond Computation: Information Technology, Organizational Transformation and Business Performance." Journal of Economic Perspectives 14:4 (fall 2000): 23-48.
Business Roundtable. Securing Growth and Jobs: Improving U.S. Prosperity in a Worldwide Economy. March 2004.
Center for American Progress. Offshoring and the Global Economy: A Progressive Agenda. October 2004.
Collins, Susan M., ed. Imports, Exports, and the American Worker. Washington, D.C.: The Brookings Institution Press, 1998.
Congressional Research Service. Deindustrialization of the U.S. Economy: The Roles of Trade, Productivity, and Recession. RL32350. Washington, D.C.: April 15, 2004.
Congressional Research Service. Financial Services Industry Outsourcing and Enforcement of Privacy Laws. RS21809. Washington, D.C.: June 9, 2004.
Congressional Research Service. The Flat Tax, Value-Added Tax, and National Retail Sales Tax: Overview of the Issues. RL32603. Washington, D.C.: December 14, 2004.
Congressional Research Service. Foreign Outsourcing: Economic Implications and Policy Responses. RL32484. Washington, D.C.: June 21, 2005.
Congressional Research Service. Job Loss: Causes and Policy Implications. RL32194. Washington, D.C.: December 22, 2004.
Congressional Research Service. Manufacturing Output, Productivity, and Employment: Implications for U.S. Policy. RL32179. Washington, D.C.: January 29, 2004.
Congressional Research Service. Offshoring (a.k.a. Offshore Outsourcing) and Job Insecurity Among U.S. Workers. RL32292. Washington, D.C.: May 2, 2005.
Congressional Research Service. The U.S. Trade Deficit: Causes, Consequences, and Cures. RL31032. Washington, D.C.: July 13, 2005.
Davidson, Carl, Lawrence Martin, and Steven Matusz. "Trade and Search Generated Unemployment." Journal of International Economics 48:2 (1999): 271-299.
Defense Security Service. Technology Collection Trends in the U.S. Defense Industry. Alexandria, Va.: 2002.
Deloitte Touche Tohmatsu. Making the Off-Shore Call: The Road Map for Communications Operators. 2004.
Dossani, Rafiq and Martin Kenney. "Offshoring: Determinants of the Location and Value of Services." Briefing Paper for Sloan Workshop Series in Industry Studies, Stanford University, August 13, 2004.
Dossani, Rafiq and Martin Kenney. "Went for Cost, Stayed for Quality? Moving the Back Office to India." Berkeley Roundtable on the International Economy Paper BRIEWP156. Berkeley, Calif.: University of California, Berkeley, 2003.
Economic Policy Institute. "Offshoring." EPI Issue Guide. June 2004. http://www.epinet.org (accessed on July 12, 2005).
Eischen, Kyle. "Working Through Outsourcing: Software Practice, Industry Organization, and Industry Evolution in India." Center for Global International & Regional Studies Working Paper Series WP 2004-4. Santa Cruz, Calif.: University of California, Santa Cruz, 2004.
Forrester Research, Inc. 3.3 Million U.S. Services Jobs to Go Offshore. November 11, 2002.
Garner, C. Alan. "Offshoring in the Service Sector: Economic Impact and Policy Issues." Economic Review. Kansas City, Mo.: Federal Reserve Bank of Kansas City (third quarter, 2004): 5-37.
Global Insight (USA), Inc. The Impact of Offshore IT Software and Services Outsourcing on the U.S. Economy and the IT Industry. Lexington, Mass.: March 2004.
Gomory, Ralph E. and William J. Baumol. Global Trade and Conflicting National Interests. Cambridge, Mass.: MIT Press, 2000.
Groshen, Erica L. and Simon Potter. "Has Structural Change Contributed to a Jobless Recovery?" Current Issues in Economics and Finance 9:8. New York, N.Y.: Federal Reserve Bank of New York, August 2003.
Haveman, Jon D. and Howard J. Shatz. "Services Offshoring: Background and Implications for California." Occasional Paper. San Francisco, Calif.: Public Policy Institute of California, August 25, 2004.
Hira, Ron and Anil Hira. Outsourcing America: What's Behind Our National Crisis and How We Can Reclaim American Jobs. New York, N.Y.: American Management Association, 2005.
Institute of Electrical and Electronics Engineers. "Position on Offshore Outsourcing." March 2004. http://www.ieeeusa.org/forum/positions/offshoring.html (accessed Aug. 4, 2004).
Jensen, J. Bradford and Lori G. Kletzer. "Tradable Services: Understanding the Scope and Impact of Services Offshoring." July 14, 2005. Forthcoming in Brookings Trade Forum 2005: Offshoring White-Collar Work—The Issues and the Implications, Lael Brainard and Susan M. Collins, eds.
Kane, Timothy, Brett D. Schaefer, and Alison Fraser. "Myths and Realities: The False Crisis of Outsourcing." Backgrounder 1757. Washington, D.C.: Heritage Foundation, May 13, 2004.
Kirkegaard, Jacob F. Outsourcing—Stains on the White Collar? Institute for International Economics.
Klein, Michael W., Scott Schuh, and Robert K. Triest. "Job Creation, Job Destruction, and International Competition: A Literature Review." Working Paper 02-7. Boston, Mass.: Federal Reserve Bank of Boston, December 2002.
Klinger, Shannon and M. Lynn Sykes. Exporting the Law: A Legal Analysis of State and Federal Outsourcing Legislation. Arlington, Va.: National Foundation for American Policy, April 2004.
Kroll, Cynthia A. "State and Metropolitan Area Impacts of the Offshore Outsourcing of Business Services and IT." Fisher Center Working Paper 293. Berkeley, Calif.: University of California, Berkeley, Fisher Center for Real Estate & Urban Economics, 2005.
Leana, Carrie R., Daniel C. Feldman, and Gilbert Y. Tan. "Predictors of Coping Behavior after a Layoff." Journal of Organizational Behavior 19:1 (January 1998): 85-97.
Lindsey, Brink. "Job Losses and Trade: A Reality Check." Trade Briefing Paper 19. Washington, D.C.: CATO Institute, March 17, 2004.
MacDonald, James M. "Does Import Competition Force Efficient Production?" The Review of Economics and Statistics 76:4 (November 1994): 721-727.
Mann, Catherine L. "Globalization of IT Services and White Collar Jobs: The Next Wave of Productivity Growth." International Economics Policy Briefs PB03-11. Washington, D.C.: Institute for International Economics, December 2003.
Mann, Catherine L. "Offshore Outsourcing and the Globalization of US Services: Why Now, How Important, and What Policy Implications." In The United States and the World Economy: Foreign Economic Policy for the Next Decade. Washington, D.C.: Institute for International Economics, January 2005.
Mann, Catherine L. "This is Bangalore Calling: Hang Up or Speed Dial? What Technology-Enabled International Trade in Services Means for the U.S. Economy and Workforce." Cleveland, Ohio: Federal Reserve Bank of Cleveland, January 15, 2005.
Markusen, James R. "Modeling the Offshoring of White-Collar Services: From Comparative Advantage to the New Theories of Trade and FDI." Paper prepared for the Brookings Trade Forum: Offshoring White-Collar Work—The Issues and the Implications, Washington, D.C., May 12-13, 2005.
McKinsey Global Institute. Offshoring: Is It a Win-Win Game? San Francisco, Calif.: August 2003.
neoIT. "The Effect of Data Privacy & Security Regulations on Services Globalization." Offshore Insights White Paper Series 2:9. San Ramon, Calif.: September 2004.
neoIT. "Research Summary: Offshore & Nearshore ITO Salary Report 2004." Offshore Insights Market Reports Series 3:5. San Ramon, Calif.: May 2005.
Office of Senator Joseph I. Lieberman. Data Dearth in Offshore Outsourcing: Policymaking Requires Facts. Washington, D.C.: December 2004.
Office of Senator Joseph I. Lieberman. Offshore Outsourcing and America's Competitive Edge: Losing Out in the High Technology R&D and Services Sectors. Washington, D.C.: May 11, 2004.
Ong, Paul M. and Don Mar. "Post-Layoff Earnings Among Semiconductor Workers." Industrial and Labor Relations Review 45:2 (January 1992): 366-379.
Parry, Robert T. "Globalization: Threat or Opportunity for the U.S. Economy?" FRBSF Economic Letter 2004-12. San Francisco, Calif.: Federal Reserve Bank of San Francisco, May 21, 2004.
Public Citizen's Global Trade Watch. Addressing the Regulatory Vacuum: Policy Considerations Regarding Public and Private Sector Service Job Offshoring. Product ID E9012. Washington, D.C.: April 2004.
Rao, Madhu T. and William Poole. "Global Information Technology Sourcing: Impacts and Implications for Washington State." Seattle, Wash.: RATEC and Seattle Chapter of the Society for Information Management, July 2004.
Republican Policy Committee. Outsourcing: Meeting the Challenges Without Destroying the Benefits. Washington, D.C.: March 3, 2004.
Rodriguez, Francisco and Dani Rodrik. "Trade Policy and Economic Growth: A Skeptic's Guide to the Cross-National Evidence." NBER Working Paper 7081. Cambridge, Mass.: National Bureau of Economic Research, April 1999.
Ruffin, Roy J. "The Nature and Significance of Intra-Industry Trade." Economic and Financial Review. Dallas, Tex.: Federal Reserve Bank of Dallas, fourth quarter 1999.
Samuelson, Paul A. "Where Ricardo and Mill Rebut and Confirm Arguments of Mainstream Economists Supporting Globalization." Journal of Economic Perspectives 18:3 (summer 2004): 135-146.
Schultze, Charles L. "Offshoring, Import Competition, and the Jobless Recovery." The Brookings Institution Policy Brief 136. Washington, D.C.: The Brookings Institution, August 2004.
Seshasai, Satwik and Amar Gupta. "Global Outsourcing of Professional Services." Working Paper 4456-04. Cambridge, Mass.: MIT Sloan School of Management, January 2004.
Stiroh, Kevin J. "Information Technology and the U.S. Productivity Revival: A Review of the Evidence." Business Economics 37:1 (January 2002): 30-37.
United Nations Conference on Trade and Development. World Investment Report 2004: The Shift Towards Services. New York and Geneva: 2004.
U.S. Chamber of Commerce. Jobs, Trade, Sourcing, and the Future of the American Workforce. Washington, D.C.: April 2004.
U.S. House of Representatives Small Business Committee. The Globalization of White-Collar Jobs: Can America Lose These Jobs and Still Prosper? Testimony before full committee. Washington, D.C.: June 18, 2003.
White & Case. The Debate Over Outsourcing in the United States: A Real Threat to Job Growth or an Evolution of Free Trade? Washington, D.C.: March 15, 2004.

The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability.

The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select "Subscribe to Updates."

Much attention has focused on the "offshoring" of services to lower-wage locations abroad. Offshoring generally refers to an organization's purchase of goods or services from abroad that were previously produced domestically. Extensive public debate has arisen about both the potential benefits of services offshoring, such as lower consumer prices and higher U.S. productivity, as well as the potential costs, such as increased job displacement for selected U.S. workers. In response to widespread congressional interest, GAO conducted work under the Comptroller General's authority to help policy makers better understand the potential impacts and policy implications of services offshoring.
This report: (1) provides an overview of experts' views on the potential impacts of services offshoring, (2) describes the types of policies that have been proposed in response to offshoring, and (3) highlights some key areas where additional research might help advance the debate about offshoring. In its comments, the Department of Commerce generally agreed with the findings of this report. Commerce, Treasury, and the Office of the United States Trade Representative also provided technical comments that have been incorporated as appropriate. Analysts of the offshoring phenomenon have expressed a range of views about the likely impacts of offshoring on four broad areas. The differing views reflect several factors: the fact that services offshoring is a relatively recent development whose impact is not fully known, the limitations of available data on offshoring, and different theoretical expectations about how services offshoring will impact the U.S. economy. The average U.S. standard of living: Traditional economic theory generally predicts that offshoring will benefit U.S. living standards in the long run. However, some economists have argued that offshoring could harm U.S. long-term living standards under certain scenarios, such as if offshoring undermines U.S. technological leadership. Employment and job loss: While economic theory generally predicts that offshoring will have little effect on overall U.S. employment levels in the long run, there is widespread recognition that pockets of workers will lose jobs due to offshoring, though there is disagreement about the expected magnitude of job loss and implications for displaced workers. Distribution of income: Some economists maintain that offshoring could increase income inequality in the U.S., while others argue that changes in the income distribution are driven primarily by factors other than offshoring, such as technological change.
Security and consumer privacy: Experts express varying degrees of concern about the impact of services offshoring on the security of our national defense system and critical infrastructure--such as utilities and communication networks--as well as the privacy and security of consumers' financial and medical information. A wide range of policies has been proposed in response to concerns about offshoring and its potential effects. These proposals can be categorized into four areas by the concerns they seek to address: (1) improving U.S. global competitiveness, (2) addressing effects on the U.S. workforce, (3) addressing security concerns, and (4) reducing the extent of offshoring. Some analysts have recommended policies in more than one area. Determining appropriate policy responses to the offshoring phenomenon is challenging due to the limited state of knowledge about the extent and impacts of offshoring. Nonetheless, there are some key areas where additional research might help advance the debate, such as trends in the wages and skill levels of jobs being offshored, reemployment experiences of workers displaced by offshoring, and the extent to which current laws and practices in different sectors of the economy mitigate any increased security-related risks posed by offshoring. In the face of limited federal data, researchers have begun using a variety of approaches to examine such areas.
As the central human resources agency for the federal government, OPM is tasked with ensuring that the government has an effective civilian workforce. To carry out this mission, OPM delivers human resources products and services including policies and procedures for recruiting and hiring, provides health and training benefit programs, and administers the retirement program for federal employees. According to the agency, approximately 2.7 million active federal employees and nearly 2.5 million retired federal employees rely on its services. According to OPM, the retirement program serves current and former federal employees by providing (1) tools and options for retirement planning and (2) retirement compensation. Two defined-benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees are administered by the agency. The first plan, the Civil Service Retirement System (CSRS), provides retirement benefits for most federal employees hired before 1984. The second plan, the Federal Employees Retirement System (FERS), covers most employees hired in or after 1984 and provides benefits that include Social Security and a defined contribution system. OPM and employing agencies’ human resources and payroll offices are responsible for processing federal employees’ retirement applications. The process begins when an employee submits a paper retirement application to his or her employer’s human resources office and is completed when the individual begins receiving regular monthly benefit payments (as illustrated in fig. 1). Once an employee submits an application, the human resources office provides retirement counseling services to the employee and augments the retirement application with additional paperwork, such as a separation form that finalizes the date the employee will retire. Then the agency provides the retirement package to the employee’s payroll office. 
After the employee separates for retirement, the payroll office is responsible for reviewing the documents for correct signatures and information, making sure that all required forms have been submitted, and adding any additional paperwork that will be necessary for processing the retirement package. Once the payroll office has finalized the paperwork, the retirement package is mailed to OPM to continue the retirement process. Payroll offices are required to submit the package to OPM within 30 days of the retiree's separation date. Upon receipt of the retirement package, OPM calculates an interim payment based on information provided by the employing agency. The interim payments are partial payments that typically provide retirees with 80 percent of the total monthly benefit they will eventually receive. OPM then starts the process of analyzing the retirement application and associated paperwork to determine the total monthly benefit amount to which the retiree is entitled. This process includes collecting additional information from the employing agency's human resources and payroll offices or from the retiree to ensure that all necessary data are available before calculating benefits. After OPM completes its review and authorizes payment, the retiree begins receiving 100 percent of the monthly retirement benefit payments. OPM then stores the paper retirement folder at the Retirement Operations Center in Boyers, Pennsylvania. The agency recently reported that the average time to process retirement claims was 156 days in 2012. According to the Deputy Associate Director for the Center of Retirement and Insurance Services, about 200 employees are directly involved in processing the approximately 100,000 retirement applications OPM receives annually. Retirement processing includes functions such as determining retirement eligibility, inputting data into benefit calculators, and providing customer service.
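The interim-payment arithmetic described above can be sketched as follows. This is an illustrative calculation only: the 80 percent figure is the typical rate cited in the text, and the dollar amounts and function name are hypothetical, not OPM's actual method.

```python
def interim_payment(estimated_monthly_benefit: float, rate: float = 0.80) -> float:
    """Partial monthly payment issued while OPM finalizes a claim.

    The 0.80 rate reflects the "typically 80 percent" figure cited in
    the text; actual interim rates vary case by case.
    """
    return round(estimated_monthly_benefit * rate, 2)

# Example: a retiree whose eventual benefit is estimated at $2,500/month
# would typically receive about $2,000/month during interim processing.
print(interim_payment(2500.00))  # 2000.0
```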
The agency uses over 500 different procedures, laws, and regulations, which are documented on the agency’s internal website, to process retirement applications. For example, the site contains memorandums that outline new procedures for handling special retirement applications, such as those for disability or court orders. Further, OPM’s retirement processing involves the use of over 80 information systems that have approximately 400 interfaces with other internal and external systems. For instance, 26 internal systems interface with the Department of the Treasury to provide, among other things, information regarding the total amount of benefit payments to which an employee is entitled. OPM has reported that a greater retirement processing workload is expected due to an anticipated increase in the number of retirement applications over the next decade, although current retirement processing operations are at full capacity. Further, the agency has identified several factors that limit its ability to process retirement benefits in an efficient and timely manner. Specifically, OPM noted that current processes are paper-based and manually intensive, resulting in a higher number of errors and delays in providing benefit payments; the high costs, limited capabilities, and other problems with the existing information systems and processes pose increasing risks to the accuracy of benefit payments; current manual capabilities restrict customer service; federal employees have limited access to retirement records, making planning for retirement difficult; and attracting qualified personnel to operate and maintain the antiquated retirement systems, which have about 3 million lines of custom programming, is challenging. Recognizing the need to modernize its retirement processing, in the late 1980s OPM began initiatives that were aimed at automating its antiquated paper-based processes. 
Initial modernization visions called for developing an integrated system and automated processes to provide prompt and complete benefit payments. However, following attempts over more than two decades, the agency has not yet been successful in achieving the modernized retirement system that it envisioned. In early 1987, OPM began a program called the FERS Automated Processing System. However, after 8 years of planning, the agency decided to reevaluate the program, and the Office of Management and Budget requested an independent review of the program, which identified various management weaknesses. The independent review suggested areas for improvement and recommended terminating the program if immediate action was not taken. In mid-1996, OPM terminated the program. In 1997, OPM began planning a second modernization initiative, called the Retirement Systems Modernization (RSM) program. The agency originally intended to structure the program as an acquisition of commercially available hardware and software that would be modified in-house to meet its needs. From 1997 to 2001, OPM developed plans and analyses and began developing business and security requirements for the program. However, in June 2001, it decided to change the direction of the retirement modernization initiative. In late 2001, retaining the name RSM, the agency embarked upon its third initiative to modernize the retirement process and examined the possibility of privately sourced technologies and tools. Toward this end, the agency determined that contracting was a viable alternative and, in 2006, awarded three contracts for the automation of retirement processing, the conversion of paper records to electronic files, and consulting services to redesign its retirement operations. In February 2008, OPM renamed the program RetireEZ and deployed an automated retirement processing system. However, by May 2008 the agency determined that the system was not working as expected and suspended system operation.
In October 2008, after 5 months of attempting to address quality issues, the agency terminated the contract for the system. In November 2008, OPM began restructuring the program and reported that its efforts to modernize retirement processing would continue. However, after several years of trying to revitalize the program, the agency terminated the retirement system modernization in February 2011. OPM’s efforts to modernize its retirement system were hindered by weaknesses in several key IT management disciplines. Our experience with major modernization initiatives has shown that having sound management capabilities is essential to achieving successful outcomes. These capabilities include project management, risk management, organizational change management, system testing, cost estimating, progress reporting, planning, and oversight, among others. However, we found that OPM’s capabilities in these areas were not sufficiently developed. For example, in reporting on RSM in February 2005, we noted weaknesses in project management, risk management, and organizational change management. Project management is the process for planning and managing all project-related activities, including defining how project components are interrelated. Effective project management allows the performance, cost, and schedule of the overall project to be measured and controlled in comparison to planned objectives. Although OPM had defined major retirement modernization project components, it had not defined the dependencies among them. Specifically, the agency had not identified critical tasks and their impact on the completion of other tasks. By not identifying critical dependencies among project components, OPM increased the risk that unforeseen delays in one activity could hinder progress in other activities. Risk management entails identifying potential problems before they occur. Risks should be identified as early as possible, analyzed, mitigated, and tracked to closure. 
OPM officials acknowledged that they did not have a process for identifying and tracking retirement modernization project risks and mitigation strategies on a regular basis but stated that the agency’s project management consultant would assist it in implementing a risk management process. Lacking such a process, OPM did not have a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the retirement modernization project. Organizational change management includes preparing users for the changes to how their work will be performed as a result of a new system implementation. Effective organizational change management includes plans to prepare users for impacts the new system might have on their roles and responsibilities, and a process to manage those changes. Although OPM officials stated that change management posed a substantial challenge to the success of retirement modernization, they had not developed a detailed plan to help users transition to different job responsibilities. Without having and implementing such a plan, effective implementation of new systems could be hindered by confusion about user roles and responsibilities. We recommended that the Director of OPM ensure that the retirement modernization program office expeditiously establish processes for effective project management, risk management, and organizational change management. In response, the agency initiated steps toward establishing management processes for retirement modernization and demonstrated activities to address our recommendations. We reported again on OPM’s retirement modernization in January 2008, as the agency was about to deploy a new automated retirement processing system. We noted weaknesses in additional key management capabilities, including system testing, cost estimating, and progress reporting. Effective testing is an essential activity of any project that includes system development. 
Generally, the purpose of testing is to identify defects or problems in meeting defined system requirements or satisfying system user needs. At the time of our review, 1 month before OPM planned to deploy a major system component, test results showed that the component had not performed as intended. We warned that until actual test results indicated improvement in the system, OPM risked deploying technology that would not accurately calculate retirement benefits. Although the agency planned to perform additional tests to verify that the system would work as intended, the schedule for conducting these tests became compressed from 5 months to 2-1/2 months, with several tests to be performed concurrently rather than sequentially. The agency stated that a lack of testing resources, including the availability of subject matter experts, and the need for further system development contributed to the delay of planned tests and the need for concurrent testing. The high degree of concurrent testing that OPM planned to meet its February 2008 deployment schedule increased the risk that the agency would not have the resources or time to verify that the planned system worked as expected. Cost estimating is the identification of individual project cost elements, using established methods and valid data to estimate future costs. Establishing a reliable cost estimate is important for developing a project budget and having a sound basis for measuring performance, including comparing the actual and planned costs of project activities. Although OPM developed a retirement modernization cost estimate, it was not supported by the documentation that is fundamental to a reliable cost estimate. Without a reliable cost estimate, OPM lacked a sound basis for formulating retirement modernization budgets or for developing the cost baseline that is necessary for measuring and predicting project performance. 
Earned value management (EVM) is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Fundamental to reliable EVM is the development of a baseline against which variances are calculated. OPM used EVM to measure and report monthly performance of the retirement modernization system. The reported results indicated that the project was progressing almost exactly as planned. However, this view of project performance was not reliable because the baseline on which it was based did not reflect the full scope of the project, had not been validated, and was unstable (i.e., subject to frequent changes). This EVM approach in effect ensured that material variances from planned performance would not be identified and that the state of the project would not be reliably reported. We recommended that the Director of OPM conduct effective system tests prior to system deployment and improve program cost estimation and progress reporting. OPM stated that it concurred with our recommendations and would take steps to address the weaknesses we identified. Nevertheless, OPM deployed a limited initial version of the modernized retirement system in February 2008. After unsuccessful efforts to address system quality issues, the agency suspended system operation, terminated the system contract, and began restructuring the modernization effort. In April 2009, we again reported on OPM's retirement modernization, noting that the agency still remained far from achieving the modernized retirement processing capabilities that it had planned. Specifically, we noted that significant weaknesses continued to exist in the areas of cost estimating, progress reporting, and testing, while also noting two additional weaknesses related to planning and oversight.
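The earned value comparisons described above follow standard, well-established formulas, which can be illustrated with a small calculation. This is a generic sketch of textbook EVM arithmetic; the figures and function name are hypothetical and do not reflect OPM's actual program data.

```python
def earned_value_metrics(pv: float, ev: float, ac: float) -> dict:
    """Standard EVM variances and performance indices.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost (actual cost of work performed)
    """
    return {
        "schedule_variance": ev - pv,            # negative => behind schedule
        "cost_variance": ev - ac,                # negative => over budget
        "schedule_performance_index": ev / pv,   # < 1.0 => behind schedule
        "cost_performance_index": ev / ac,       # < 1.0 => over budget
    }

# Example: $10M of work planned to date, $8M of work earned, $9M spent.
metrics = earned_value_metrics(pv=10.0, ev=8.0, ac=9.0)
print(metrics["schedule_variance"])  # -2.0
print(metrics["cost_variance"])      # -1.0
```

Because every variance is measured against the baseline (pv), an incomplete or unstable baseline, as GAO found at OPM, makes these figures meaningless regardless of how carefully they are computed.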
Although it concurred with our January 2008 recommendation to develop a revised cost estimate for the retirement modernization effort, OPM had not completed initial steps for developing the new estimate by the time we issued our report in April 2009. We reported that the agency had not yet fully defined the estimate’s purpose, developed an estimating plan, or defined the project’s characteristics. By not completing these steps, OPM increased the risk that it would produce an unreliable estimate and not have a sound basis for measuring project performance and formulating retirement modernization budgets. OPM also concurred with our January 2008 recommendation to establish a basis for effective EVM but had not completed key steps as of the time of our report. Specifically, despite planning to use EVM to report the retirement modernization project’s progress, the agency had not developed a reliable cost estimate and a validated baseline. Engaging in EVM reporting without first taking these fundamental steps could have again rendered the agency’s assessments unreliable. As previously discussed, effective testing is an essential component of any project that includes developing systems. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. Beginning the test planning process in the early stages of a project life cycle can reduce rework later. Early test planning in coordination with requirements development can provide major benefits. For example, planning for test activities during the development of requirements may reduce the number of defects identified later and the costs related to requirements rework or change requests. OPM’s need to compress its testing schedule and conduct tests concurrently, as we reported in January 2008, illustrates the importance of planning test activities early in a project’s life cycle. 
However, at the time of our April 2009 report, the agency had not begun to plan test activities in coordination with developing its requirements for the system it was planning at that time. Consequently, OPM increased the risk that it would again deploy a system that did not satisfy user expectations and meet requirements. Project management principles and effective practices emphasize the importance of having a plan that, among other things, incorporates all the critical areas of system development and is to be used as a means of determining what needs to be done, by whom, and when. Although OPM had developed a variety of informal documents and briefing slides that described retirement modernization activities, the agency did not have a complete plan that described how the program would proceed in the wake of its decision to terminate the system contract. As a result, we concluded that until the agency completed such a plan and used it to guide its efforts, it would not be properly positioned to proceed with its restructured retirement modernization initiative. Office of Management and Budget and GAO guidance call for agencies to ensure effective oversight of IT projects throughout all life-cycle phases. Critical to effective oversight are investment management boards made up of key executives who regularly track the progress of IT projects such as system acquisitions or modernizations. OPM's Investment Review Board was established to ensure that major investments are on track by reviewing their progress and identifying appropriate actions when investments encounter challenges. Despite meeting regularly and receiving information that indicated problems with the retirement modernization, the board did not ensure that retirement modernization investments were on track, nor did it determine appropriate actions for course correction when needed.
For example, from January 2007 to August 2008, the board met and was presented with reports that described problems the program was facing, such as the lack of an integrated master schedule and earned value data that did not reflect the "reality or current status" of the program. However, meeting minutes indicated that no discussion or action was taken to address these problems. According to a member of the board, OPM had not established guidance regarding how the board is to communicate recommendations and needed corrective actions for investments it oversees. Without a fully functioning oversight body, OPM lacked insight into the retirement modernization and the ability to make needed course corrections that effective boards are intended to provide. Our April 2009 report made new recommendations that OPM address the weaknesses in the retirement modernization project that we identified. Although the agency began taking steps to address them, the recommendations were overtaken by the agency's decision in February 2011 to terminate the retirement modernization project. In mid-January 2012, OPM released a plan to undertake targeted, incremental improvements to retirement processing rather than a large-scale modernization, which described planned actions in four areas: hiring and training 56 new staff to adjudicate retirement claims and 20 additional staff to support the claims process; establishing higher production standards and identifying potential process improvements; working with other agencies to improve the accuracy and completeness of the data they provide to OPM for use in retirement processing; and improving the agency's IT by pursuing a long-term data flow strategy, exploring short-term strategies to leverage work performed by other agencies, and reviewing and upgrading systems used by retirement services.
Through implementing these actions, OPM has said that it aims to eliminate the agency’s retirement processing backlog and accurately process 90 percent of its cases within 60 days by July 31, 2013. However, as we testified in February 2012, that goal represents a substantial reduction from the agency’s fiscal year 2009 retirement modernization goal to accurately process 99 percent of cases within 30 days. Moreover, the plan did not describe whether or how the agency intends to modify or decommission the over 80 legacy systems that it currently relies on to support retirement processing. Last month, OPM officials described steps the agency has begun taking to implement the January 2012 plan for retirement services. These steps include filling the 56 positions needed to adjudicate retirement claims and 20 positions needed to support the claims process; implementing retirement processing improvements identified during an external review of its retirement claims process, such as reorganizing benefits claims officers into two tiers to allow the processing of more complex inquiries by higher-level officers; and improving the accuracy and completeness of retirement data that other agencies provide to OPM by conducting audits of the agencies’ application submissions and providing more frequent feedback and follow-up training. Additionally, the officials identified existing and planned IT improvements to support the retirement process. 
These efforts include providing retirees with the ability to view the status of their cases through OPM’s web-based application, Services Online; developing the capability to accept electronic data that are transferred from one of the seven federal payroll processing centers; enhancing its internal web-based application, Data Viewer, to allow 11 other agencies to view retirement case packets; upgrading its data storage capacity and production printer; sponsoring a challenge, in cooperation with the National Aeronautics and Space Administration, for developers to create a system with accounting tools for processing service credits; updating reporting guides to include processes for sending electronic retirement data to OPM; and planning an initiative to develop an automated retirement case management system to replace the agency’s existing document and case control system in fiscal year 2014. Nonetheless, while OPM is planning to replace its legacy document and case control system, agency officials stated that there were no major plans to decommission any of the agency’s other legacy systems that support retirement processing. Although the Associate Director for Retirement Services stated that investing in IT is important for improving the efficiency of retirement claims processing, the agency has not yet planned for improving or replacing the remaining legacy systems that support retirement processing. In summary, despite OPM’s longstanding recognition of the need to improve the timeliness and accuracy of retirement processing, the agency has thus far been unsuccessful in several attempts to develop the capabilities it has long sought. For over two decades, the agency’s retirement modernization efforts were plagued by weaknesses in management capabilities that are critical to the success of such endeavors. 
Among the management disciplines the agency has struggled with are project management, risk management, organizational change management, cost estimating, system testing, progress reporting, planning, and oversight. The incremental steps the agency recently reported taking include dedicating additional resources to retirement processing; however, they do not address the more fundamental need to modernize its legacy IT systems in order to significantly improve the efficiency of the process. Until OPM tackles that challenge, and develops the management capabilities to carry it out successfully, it may face ongoing difficulties in meeting the needs of future retirees. Chairman Farenthold, Ranking Member Lynch, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have. If you have any questions concerning this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions include Mark T. Bird, Assistant Director; David A. Hong; and Lee A. McCracken.

OPM is the central human resources agency for the federal government and, as such, is responsible for ensuring that the government has an effective civilian workforce. As part of its mission, OPM defines recruiting and hiring processes and procedures; provides federal employees with various benefits, such as health benefits; and administers the retirement program for federal employees.
OPM's use of IT is critical in carrying out its responsibilities; in fiscal year 2013 the agency plans to invest about $85 million in IT systems and services. For over two decades, OPM has been attempting to modernize its federal employee retirement process by automating paper-based processes and replacing antiquated information systems. However, these efforts have been unsuccessful, and the agency canceled its most recent large-scale retirement modernization effort in February 2011. GAO was asked to summarize its work on challenges OPM has faced in attempting to modernize the federal employee retirement process and to describe the agency's recent reported actions to improve its retirement processing. To do this, GAO generally relied on previously published work. In a series of reviews, GAO found that the Office of Personnel Management's (OPM) retirement modernization efforts were hindered by weaknesses in key management practices that are essential to successful information technology (IT) modernization projects. For example, in 2005, GAO made recommendations to address weaknesses in the following areas: Project management: While OPM had defined major components of its retirement modernization effort, it had not identified the dependencies among them, increasing the risk that delays in one activity could have unforeseen impacts on the progress of others. Risk management: OPM did not have a process for identifying and tracking project risks and mitigation strategies on a regular basis. Thus, it lacked a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the modernization effort. Organizational change management: OPM had not adequately prepared its staff for changes to job responsibilities resulting from the modernization by developing a detailed transition plan. This could lead to confusion about roles and responsibilities and hinder effective system implementation. 
In 2008, as OPM was on the verge of deploying an automated retirement processing system, GAO reported deficiencies in and made recommendations to address additional management capabilities: Testing: The results of tests 1 month prior to the deployment of a major system component revealed that it had not performed as intended. These defects, along with a compressed testing schedule, increased the risk that the system would not work as intended upon deployment. Cost estimating: The cost estimate OPM developed was not fully reliable. This meant that the agency did not have a sound basis for formulating budgets or developing a program baseline. Progress reporting: The baseline against which OPM was measuring the progress of the program did not reflect the full scope of the project; this increased the risk that variances from planned performance would not be detected. In 2009, GAO reported that OPM continued to have deficiencies in its cost estimating, progress reporting, and testing practices and made recommendations to address these and other weaknesses in the planning and oversight of the modernization effort. OPM agreed with these recommendations and began to address them, but, in February 2011, it terminated the modernization effort. In January 2012, OPM released a plan to improve retirement processing that aimed at targeted, incremental improvements rather than a large-scale modernization. Toward this end, OPM has reported hiring new claims-processing staff, taking steps to identify potential process improvements, and working with other agencies to improve data quality. Further, the agency reported making IT improvements that allow retirees to view the status of their accounts and automating parts of the retirement application process. However, the plan reflects a less ambitious goal for retirement processing timeliness and does not address improving or replacing the legacy systems that support retirement processing. 
GAO is not making new recommendations at this time. GAO has previously made numerous recommendations to address IT management challenges that OPM has faced in carrying out its retirement modernization efforts. Fully addressing these challenges remains key to the success of OPM's efforts.
VHA provides health care to approximately 6.7 million veterans and their families at an estimated annual cost of about $56 billion. In 2016, the agency reported that it employed approximately 207,000 clinical staff to care for veterans in numerous venues throughout the United States and its territories. These venues include 168 VA medical centers and approximately 750 primary care and multi-specialty outpatient clinics. Within VHA, Pharmacy Benefits Management Services is responsible for providing a broad range of pharmacy services, including promoting appropriate drug therapy, ensuring medication safety, providing clinical guidance to pharmacists and other clinicians, and maintaining VA drugs and supplies used to deliver pharmacy benefits. To provide these services, VHA operates about 260 pharmacies located in the medical centers and outpatient clinics, as well as 7 consolidated mail outpatient pharmacies. Veterans can receive treatment and obtain and fill prescriptions at the medical centers and outpatient clinics; they can also receive medications by mail via the consolidated mail outpatient pharmacies. According to the department, outpatient prescribing is the predominant form of prescribing. For fiscal year 2016, VHA reported that it provided outpatient pharmacy services to approximately 5 million veterans. The agency further reported that about 31 million outpatient prescriptions were filled at medical center and outpatient clinic pharmacies, and about 116 million prescriptions were filled by consolidated mail outpatient pharmacies. VHA estimated that it spent approximately $6.3 billion for pharmacy services in fiscal year 2016, and it has requested about $7.7 billion for pharmacy services in fiscal year 2017. In prescribing medication for an outpatient, clinicians generally follow a prescription process in which the clinician first reviews the patient’s medical record and selects an appropriate medication from VA’s approved list of medications.
The clinician then orders the necessary medication. For each order, the clinician performs checks to identify any excessive dosage or possible interactions; for example, the patient may be allergic to the medication. Medication orders are then reviewed by a pharmacist, who dispenses the drug and updates the patient’s medical record to reflect the medication that was dispensed. The general process for prescribing and dispensing prescription medications to VA patients is depicted in figure 1. VHA relies on the department’s health information system—the Veterans Health Information Systems and Technology Architecture (VistA)—to deliver health care. VistA consists of approximately 200 separate computer applications and modules that provide health care delivery capabilities. These include multiple computer applications that provide pharmacy capabilities. A key application within VistA—the Computerized Patient Record System (CPRS)—enables the department to create and update an individual electronic health record for each VA patient. Specifically, CPRS enables clinicians to enter, review, and continuously update information connected with a patient. Among other things, clinicians can order lab tests, medications, diets, radiology tests, and procedures; record a patient’s allergies or adverse reactions to medications; request and track consults; enter progress notes, diagnoses, and treatments for each encounter; and enter discharge summaries. Over the last three decades, local VHA medical sites have made numerous modifications to VistA, resulting in about 130 different instances, or variations, of the system. Since 2001, VA has recognized the need to modernize VistA, and several of its efforts have also aimed at improving the department’s pharmacy capabilities. Specifically, in that year, VHA began the HealtheVet initiative to standardize patient data and modernize health information software applications.
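The general prescription process described above (clinician review and ordering, automated checks against the approved list, dosage, and allergies, then pharmacist review, dispensing, and record update) can be sketched in simplified form. This is an illustrative model only; the function names and data structures are hypothetical and do not represent VistA's actual applications or interfaces:

```python
# Illustrative, hypothetical model of the outpatient prescription process
# described above; not VistA's actual logic or interfaces.

FORMULARY = {"lisinopril", "metformin", "ibuprofen"}  # stand-in for VA's approved medication list

def order_medication(patient, drug, dose_mg, max_dose_mg):
    """Clinician step: select a formulary drug and run basic order checks."""
    if drug not in FORMULARY:
        raise ValueError(f"{drug} is not on the approved medication list")
    alerts = []
    if dose_mg > max_dose_mg:
        alerts.append(f"dosage {dose_mg} mg exceeds maximum {max_dose_mg} mg")
    if drug in patient["allergies"]:
        alerts.append(f"patient is allergic to {drug}")
    return {"drug": drug, "dose_mg": dose_mg, "alerts": alerts, "dispensed": False}

def dispense(patient, order):
    """Pharmacist step: review alerts, dispense, and update the record."""
    if order["alerts"]:
        return False  # flagged back to the clinician instead of dispensed
    order["dispensed"] = True
    patient["medications"].append(order["drug"])  # update the medical record
    return True

patient = {"allergies": {"ibuprofen"}, "medications": []}
ok_order = order_medication(patient, "lisinopril", 10, max_dose_mg=40)
bad_order = order_medication(patient, "ibuprofen", 200, max_dose_mg=800)
dispense(patient, ok_order)   # dispensed; record updated
dispense(patient, bad_order)  # blocked by the allergy alert
```

The point of the two-step structure is that the order checks run at ordering time, so the pharmacist reviews an order that already carries any alerts rather than discovering them at dispensing.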
Under HealtheVet, VA began the Pharmacy Re-engineering project in 2002, with the intent of replacing all of the legacy applications that supported pharmacy services in order to meet current and future patient needs. The department had initially planned the Pharmacy Re-engineering project to be completed for deployment in 2009. However, in June 2009, the Secretary of Veterans Affairs announced that VA would stop financing failed projects and improve the management of its IT development projects. Toward this end, the VA Chief Information Officer transitioned the Pharmacy Re-engineering project to a phased development effort. According to VA pharmacy management officials, this was done because the project had faced funding delays, contracting difficulties, and differing directions from a number of VA chief information officers. The project was also rescoped to focus on implementing clinical decision support tools, specifically to cross-check drugs in medication orders to reduce the frequency of adverse drug events and improve patient safety. In August 2010, VA reported that it had terminated the HealtheVet initiative. Subsequently, from March 2011 to February 2013, VA worked toward the development of a new, joint integrated electronic health record system with DOD. In October 2011, pharmacy requirements were developed as a part of the joint integrated electronic health record initiative. However, the joint integrated electronic health record system was discontinued in February 2013 based on concerns about the program facing challenges in meeting deadlines, costing too much, and taking too long to deliver capabilities. In December 2013, VA began a new program, VistA Evolution, to enhance and modernize the existing health information system (VistA) by incrementally deploying capabilities through fiscal year 2018.
This initiative included plans for future pharmacy enhancements, such as additional order checks and automatic updates to replace the slow (60-day) manual inclusion of new drugs into the medication ordering and management process, as part of the Pharmacy Re-engineering project. The planned pharmacy enhancements also included inbound e-prescribing, a capability to receive inbound electronic prescriptions (e-prescriptions) from a non-VA provider, and then fill and dispense the prescriptions in VistA. From fiscal years 2002 to 2016, VA reported spending about $187.6 million for the Pharmacy Re-engineering project. Figure 2 shows a timeline of the modernization initiatives for the Pharmacy Re-engineering project in relation to VA’s VistA modernization initiatives. VA’s Office of Information and Technology has responsibility for providing technology services across the department, including the development and management of all IT assets and resources. As such, the office is to support VHA in planning for and acquiring IT capabilities that meet business requirements, to include delivering the necessary technology and expertise that support health care providers within VA’s network of hospitals, outpatient facilities, and pharmacies. Specifically, regarding pharmacy services, it has responsibility for developing computer systems based on requirements from Pharmacy Benefits Management Services. To support its efforts to deliver health care, DOD uses the Armed Forces Health Longitudinal Technology Application (AHLTA)—its outpatient electronic health information system. AHLTA is used to generate, maintain, store, and access patient electronic health records; and it comprises multiple legacy medical information systems that were developed from commercial software products and customized for specific uses.
For example, the Composite Health Care System, which was formerly DOD’s primary health information system and serves as a foundation for AHLTA, is used to capture information related to pharmacy, radiology, and laboratory order management. The Composite Health Care System also allows clinicians to electronically prescribe medications. DOD relies on its system, called the Pharmacy Data Transaction Service, for its pharmacy capabilities. This system is a central repository for prescription data from all DOD pharmacies. Further, it detects duplicate drug treatments, therapeutic overlap, and drug interactions. It also contains data on specific drugs, dosages, and dispensing dates. The repository includes data for DOD prescriptions processed through authorized private and DOD medical facilities’ pharmacies, as well as through the mail. The department is currently in the process of transitioning from its existing electronic health record systems to a new commercial electronic health record system called MHS GENESIS. The transition to the new system began in February 2017 in the Pacific Northwest region of the United States. According to the department, the new system is to integrate inpatient and outpatient solutions and provide medical and dental information. AHLTA and the Composite Health Care System are among the systems that are intended to be replaced by the new system. According to the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology, the American Society of Health-System Pharmacists, and others, industry practices for pharmacy IT systems stress the use of capabilities aimed at improving the efficiency and effectiveness of clinicians and pharmacists in prescribing and dispensing medication. Among others, these capabilities would enable clinicians and pharmacists to:

Electronically order medications, and record, change, and access a medication order for a patient.
Computerized order entry improves the safety, efficiency, and accuracy of the medication-use process by enabling the pharmacist to review and verify the medication order before filling it.

Perform drug-to-drug interaction and drug-allergy interaction checks during computerized order entry. Before a medication order is completed and acted upon, interventions should automatically indicate to a user drug-to-drug and drug-allergy contraindications based on a patient’s medication list and medication allergy list. Clinical decision support systems, such as drug interaction, allergy, and dose monitoring warning systems, should be adopted to make patient care more efficient and effective.

Track the dispensing of controlled prescription drugs to patients through state-run prescription drug monitoring programs, which may be used to monitor for suspected abuse or diversion, and can give a clinician critical information regarding a patient’s pain management and controlled substance prescription drug history.

Electronically create prescriptions for electronic transmission in accordance with National Council for Prescription Drug Programs standards. According to the Office of the National Coordinator for Health Information Technology, 90 percent of pharmacies in the United States are enabled to accept electronic prescriptions, and 70 percent of physicians are electronically prescribing medication.

Use capabilities that would help guide the implementation of improved treatment methodologies. Specifically, Gartner’s Generation Model for Enterprise Electronic Health Record systems is a framework where generation level 3 calls for establishing effective clinical decision support capabilities, clinical workflow, clinical display, as well as computer-based physician order entry. According to Gartner, in more robust electronic health record products, medication order entry is tightly connected to clinical decision support.
Generation level 3 facilitates dissemination of the latest evidence-based practices by alerting, reminding, and proactively escalating issues to the clinician as necessary.

Use a computerized system to manage perpetual inventory so the system displays up-to-date pharmaceutical inventory at all times. The pharmaceutical inventory on hand is entered into the system, and the appropriate amount of products is automatically reduced from the inventory when a prescription or medication order is filled.

We have previously reported on VA’s efforts to share pharmacy data. For example, in 2007, we reported that the department and DOD were exchanging computable outpatient pharmacy data for some shared patients, but had not completed important steps for exchanging these data for all shared patients. VA and DOD had developed an electronic interface—the Clinical Data Repository/Health Data Repository—that linked the two departments’ health data repositories and allowed for the exchange of computable data between them. However, we noted that shared patients were not activated when patients’ identifying information did not match exactly and VA patients who were discharged from active duty before 1997 could not be activated if they did not have a unique DOD identification number. In addition, VA and DOD had not established written guidelines for defining and identifying shared patients and VA was exchanging computable outpatient pharmacy data at a limited number of sites. To help ensure that all shared patients would benefit from the exchange of computable outpatient pharmacy data, we recommended that both VA and DOD expedite the development of a solution for activating shared patients when patients’ identifying information does not match exactly and DOD expedite efforts to assign a unique DOD identification number to VA patients who were discharged from active duty before 1997.
We also recommended development of written guidelines for all VA and DOD sites to use for defining and identifying shared patients. In addition, we recommended that VA expedite efforts to expand to all VA sites the capability to automatically check DOD data that are exchanged through the Clinical Data Repository/Health Data Repository. Both departments concurred with these recommendations and have taken actions to implement them. In addition, the VA Inspector General reported on the department’s pharmacy system efforts in 2013. The report noted that the Office of Information and Technology had not been effective in keeping the Pharmacy Re-engineering project on target in terms of schedule and cost, as well as the functionality delivered. It noted that project managers had struggled to deploy the Pharmacy Re-engineering project increments in a timely manner and recommended that VA ensure that each remaining Pharmacy Re-engineering increment be reported and monitored; ensure adequate oversight and controls, including the planning guidance, staffing, and cost and schedule tracking needed to deliver functionality on time and within budget; and establish a plan for future funding of the Pharmacy Re-engineering project. VA’s Chief Information Officer agreed with the Inspector General’s recommendations. VA currently has system capabilities that support clinicians and pharmacists in prescribing and dispensing medications to patients. These capabilities are achieved with the use of multiple VistA and other computer applications that enable the processing and viewing of health data. Nevertheless, as a result of several limitations in VistA’s capabilities, pharmacists cannot always view the necessary patient data and transfer prescriptions among the department’s numerous medical centers, primary care clinics, and multi-specialty outpatient clinics. 
Industry practices suggest that pharmacy systems should, among other things, include the capability to electronically create prescriptions and send them to pharmacies for the dispensing of medications. Guidance developed by the Office of the National Coordinator for Health Information Technology identifies specific capabilities that are key to having a pharmacy system that enables effectively creating and processing prescriptions. These include capabilities to review patient data, select and authorize medications, and send medication orders to pharmacies for processing. Accomplishing this depends on the system enabling clinicians and pharmacists to effectively view and share patient information and pharmacy data. A congressional report has emphasized the importance of VA being able to use its pharmacy systems to view data among VHA medical sites. VA’s current pharmacy system capabilities are provided by 17 VistA pharmacy software applications and CPRS, which collectively enable clinicians and pharmacists to process, view, and share pharmacy data. The department relies on these applications to support pharmacy services such as (1) processing and dispensing outpatient and inpatient medications to veterans; (2) processing and automatically transmitting prescription data from VA medical centers to consolidated mail outpatient pharmacies; (3) monitoring and tracking the receipt, inventory, and dispensing of all controlled substances; and (4) alerting pharmacy personnel to the existence of medications that may have been prescribed at other facilities. For a description of applications that VA has categorized as VistA pharmacy applications, see appendix II. Although multiple applications support pharmacy capabilities, clinicians and pharmacists primarily rely on three applications to prescribe and dispense outpatient medications: CPRS, the Medication Order Check Healthcare Application, and Outpatient Pharmacy.
CPRS provides clinicians the ability to prescribe medications as well as the ability to record patient data, including patients’ allergies or adverse reactions to medications. The Medication Order Check Healthcare Application enables clinicians and pharmacists to check new prescriptions to identify any interactions with other medications that the patient is currently taking, a process that is referred to as order checks. This application enables checks of new prescriptions for interactions at the medical site where the patient is being treated, as well as at other VHA medical sites. After the clinician prescribes the medications in CPRS, the Medication Order Check Healthcare Application is run instantaneously and the results of the check are displayed for the clinician to review and make any needed changes. Outpatient Pharmacy allows pharmacists to process and fill medication prescriptions from CPRS for veterans who are seen in outpatient clinics or who have received prescriptions upon discharge from a VA hospital. The application also enables pharmacists to review the results of the Medication Order Check Healthcare Application checks to ensure there are no allergies or interactions before filling the prescription. In conjunction with the patient data that can be viewed in these VistA applications, three different viewing applications, or viewers, are available to clinicians and pharmacists for use in creating prescriptions and dispensing medications. These applications—Remote Data View, VistAWeb, and Joint Legacy Viewer—can be used by clinicians and pharmacists to view and share information from other VHA medical sites. Each of these read-only viewers provides slightly different capabilities. Remote Data View enables clinicians and pharmacists to view and share prescriptions, laboratory histories, radiological images, and reports of outpatient medications, which can be seen by clinicians and pharmacists at all VHA medical sites.
Clinicians and pharmacists can access Remote Data View after logging on to the CPRS application.

VistAWeb enables clinicians and pharmacists to view and share patient data, including prescriptions, lab history, limited radiological imaging, patient information from VA’s private health care providers, and reports of outpatient medications with all VHA medical sites. VistAWeb can be accessed separately in VistA or from the CPRS application.

The Joint Legacy Viewer enables clinicians and pharmacists to view and share prescriptions, lab history, radiological imaging, patient information from VA’s private health care providers, and reports of outpatient medications with all VHA medical sites. The Joint Legacy Viewer provides access to DOD clinical notes, among other DOD data, that are not available using the other two viewers; it also offers the ability to integrate data and customize what is displayed. The Joint Legacy Viewer cannot be accessed from the CPRS application and requires a separate login.

Figure 3 provides a simplified depiction of the VistA applications and viewers that clinicians and pharmacists use to prescribe and dispense outpatient medications. As shown in figure 3, clinicians and pharmacists follow separate workflows to, respectively, prescribe and dispense medications. Specifically, when prescribing a medication, a clinician uses CPRS to:

select the patient being treated by using a patient selection screen.

view information needed to assess the patient. Clinicians can use CPRS to view patient information such as active problems, allergies, and medications at the site where the patient is being treated. The patient information is displayed by CPRS to support the clinician’s treatment decisions.

access the different viewers to view patient information from other VHA and DOD medical sites where the patient was treated and share additional patient information with other VHA locations.
enter new prescriptions and other patient information, such as a patient’s allergies or adverse reactions to medications, progress notes, diagnoses, and treatments, which make up the patient’s electronic record.

access the Medication Order Check Healthcare Application to view potential drug interactions and allergy data: when a clinician enters a prescription using CPRS, the system displays drug interactions identified by the Medication Order Check Healthcare Application instantaneously to alert clinicians to potential drug interactions, duplicate therapy, and maximum drug dosage.

complete the prescription by approving and signing the prescription. The prescription then becomes available in the Outpatient Pharmacy application for processing and dispensing by a pharmacist.

To process and dispense outpatient prescriptions, a pharmacist uses the Outpatient Pharmacy application to:

select a prescription for processing by accessing a list of outpatient prescriptions that were approved by clinicians.

obtain additional patient information from CPRS or from one of the three viewers, and may also update prescription information in CPRS, such as the number of refills that remain for the patient. The pharmacist can also use the Outpatient Pharmacy application to view prescription and patient information, such as current and past medications and prescriptions that are ready to be dispensed to the patient.

process the prescription by entering additional patient information, such as allergy data, reviewing the Medication Order Check Healthcare Application potential interactions, flagging the prescription if the pharmacist has questions for the clinician, and then entering prescription information, followed by verifying the prescription for dispensing.

process, dispense, and generate prescription labels and reports that aid the pharmacist in controlling the medication inventory.
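The order-check step in the workflows above, in which a new prescription is screened against the patient's active medications and allergies before the pharmacist verifies it, can be sketched as follows. The interaction table and function names are hypothetical illustrations, not the Medication Order Check Healthcare Application's actual logic:

```python
# Hypothetical sketch of an order check: a new prescription is screened for
# allergies, duplicate therapy, and drug-to-drug interactions against the
# patient's active medication list. Drug pairs shown are illustrative only.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "risk of hyperkalemia",
}

def order_check(new_drug, active_meds, allergies):
    """Return alerts for allergies, duplicate therapy, and interactions."""
    alerts = []
    if new_drug in allergies:
        alerts.append(f"allergy: patient is allergic to {new_drug}")
    if new_drug in active_meds:
        alerts.append(f"duplicate therapy: {new_drug} already prescribed")
    for med in active_meds:
        reason = INTERACTIONS.get(frozenset({new_drug, med}))
        if reason:
            alerts.append(f"interaction with {med}: {reason}")
    return alerts

alerts = order_check("aspirin", active_meds={"warfarin"}, allergies=set())
```

Because the check runs as the clinician signs the order, any alerts travel with the prescription and are visible again to the pharmacist at verification, which is the pattern the workflows above describe.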
Nevertheless, while clinicians can view and share patient data to prescribe medications, pharmacists cannot always efficiently view patient data needed to dispense medications. As we found at selected VHA medical sites and during interviews with Pharmacy Benefits Management Services officials, certain limitations of the Outpatient Pharmacy application can hinder pharmacists’ ability to view data: data continually rolls off the computer screen in order to make room for other information, which requires the pharmacists to continually scroll through multiple screens to view pharmacy data; and additional time is required for the pharmacists to check and dispense medications because the pharmacists must switch between the Outpatient Pharmacy application and the data viewers in order to see all relevant patient information needed to dispense medications. Pharmacy Benefits Management Services officials attributed these limitations to several factors: (1) the Outpatient Pharmacy application is outdated, as the core functionality was developed in the 1980s, with character-based input screens that have limited screen space and data that rolls off the computer screen; (2) the Outpatient Pharmacy application provides a character-based interface for users (i.e., the application requires text-based inputs to initiate actions), rather than a graphical user interface; and (3) the three data viewers are not integrated with the Outpatient Pharmacy application because the application was developed without a graphical user interface that could be used to select the viewers; thus the viewers need to be accessed in a separate screen. Since 2001, VHA pharmacists have noted the lack of a graphical user interface as a limitation to efficiently processing prescriptions. Accordingly, Pharmacy Benefits Management Services officials stated that they have requested that the Outpatient Pharmacy application be modernized during VA’s annual process for requesting system updates.
However, VA has not yet done so. According to the VistA Evolution modernization plans and Pharmacy Benefits Management Services officials, the department does not have plans to address this issue due to other re-engineering priorities. Until VA implements changes to its pharmacy system that address the inefficiencies with viewing patient information, pharmacists will continue to lack important capabilities that are essential to their reviews of patient data while processing and dispensing prescriptions. Beyond limitations in viewing patient data, pharmacists lack the capacity to electronically transfer prescriptions to other VHA pharmacies or process prescription refills received from other VHA medical sites. According to the National Council for Prescription Drug Programs’ standards, systems should be able to electronically transfer prescriptions between pharmacies. However, pharmacists at the VHA medical sites we visited said patients who receive specialty care and prescriptions from a VHA medical site cannot have that prescription electronically transferred to their primary care site (the location that the veteran usually goes to), even if the different medical sites are in the same state. This is because, as discussed in VA’s inbound e-prescribing project plans and with VHA pharmacists, CPRS and the VistA Outpatient Pharmacy application do not provide the capability to transfer prescriptions between pharmacies. The department’s VistA modernization plans include acquiring, by May 2018, the capability to transfer prescriptions from one VHA pharmacy to another VHA pharmacy, as part of the inbound e-prescribing project. In addition, the modernization plans call for the implementation of a new system—OneVA Pharmacy—that is to allow veterans to obtain prescription refills from a different VHA medical site. According to Pharmacy Benefits Management Services officials, VA is developing plans to implement this capability in September 2017.
If VA fully implements the prescription transfer capability as intended, pharmacists should then have an important tool to support the efficient and safe transfer of prescriptions and refills while ensuring that veterans receive prescriptions at the pharmacy of their choice in a timely manner. VA has developed various capabilities over the past two decades that have helped to advance interoperability between its own pharmacy system and DOD’s pharmacy system, thereby allowing clinicians and pharmacists to exchange certain patient and medication information. For example, the departments’ pharmacy systems provide the ability for clinicians and pharmacists to check prescription drug information for potentially adverse drug and allergy interactions. Nevertheless, certain limitations impede interoperability with DOD: VA clinicians and pharmacists (1) cannot always view DOD patient data and (2) do not always receive complete order checks that include new DOD medication data. Further, VA has not assessed the impact of its pharmacy system interoperability on service members transitioning care from DOD to VA. The National Defense Authorization Act for Fiscal Year 2003 required VA and DOD systems to be interoperable, to achieve real-time interface and data exchange, and to have the ability to check prescription drug information for outpatients. Real-time interfaces can enable pharmacy and patient medical data to be viewed instantaneously after patient data is entered. In addition, complete patient information is needed for clinicians and pharmacists to make effective clinical decisions. The act also required that VA’s and DOD’s pharmacy systems have the ability to check prescription drug information for outpatients based on the use of national standards. To perform prescription order checks that include DOD patient data, the medication information has to be interoperable between DOD and VA systems. 
To advance interoperability between their health information systems and adhere to the use of national standards, VA and DOD have mapped their medication data to the national standard RxNorm, thereby enabling clinicians at each department to use their pharmacy systems to perform medication order checks to identify potential adverse effects, such as drug allergies and drug interactions. Further, as a result of various capabilities that the two departments developed over the past two decades, they are able to view and share pharmacy data in near real-time for transitioning service members or patients who receive care at both departments’ medical facilities. These capabilities include:

Bidirectional Health Information Exchange – built on the Federal Health Information Exchange framework, this mechanism allows VA and DOD clinicians to view real-time inpatient and outpatient clinical data for patients receiving treatment from both departments. The data shared through this exchange includes drug allergy, outpatient pharmacy, and inpatient information.

Clinical Data Repository/Health Data Repository – as mentioned earlier, this is an interface that allows VA and DOD to share electronic health records from their respective health data repositories. This interface provides clinicians at both departments with bidirectional, real-time exchange of medical records, to include outpatient pharmacy and drug-allergy information that enables drug-to-drug and drug-allergy order checks.

Additionally, certain VistA applications (i.e., the Medication Order Check Healthcare Application, CPRS, and VistA Outpatient Pharmacy), along with the Joint Legacy Viewer, enable some level of interoperability between VA’s and DOD’s pharmacy systems by allowing clinicians and pharmacists to check prescription drug information for outpatients. Specifically, the Joint Legacy Viewer enables VA clinicians to view DOD data through a single interface.
In addition, as mentioned previously, the Medication Order Check Healthcare Application enables clinicians and pharmacists to check prescription drug data from both VA and DOD to view drug-to-drug interactions and allergies for outpatients. If any such interactions are identified, the Medication Order Check Healthcare Application displays an alert in CPRS or the VistA Outpatient Pharmacy application. (Additional information on DOD and VA’s initiatives to share patient pharmacy data is discussed in appendix III.) Nevertheless, while these capabilities exist, VA clinicians and pharmacists face limitations in that they cannot always view patients’ data. Specifically, DOD patient data does not always populate in the Remote Data View even though a record exists for the veteran. In addition, we observed during our site visits that the Joint Legacy Viewer could not always connect to DOD’s pharmacy system and display the patient’s medical data. As demonstrated at these sites, when DOD data did not populate in one of the viewers, VA clinicians and pharmacists had to either recheck the viewer that failed to display the DOD data or check one of the other two viewers. VHA officials stated that, without additional information, they could not explain why clinicians and pharmacists could not always view DOD data as we observed during our site visits. Specifically, they would need information regarding how the data was requested in order to identify and address any system limitations. The officials also stated that the department had conducted assessments of the accuracy and completeness of the data exchanged between the two departments from October 2014 to May 2015. However, they discontinued the assessments due to other priorities and have not since conducted any such assessments.
Until VA ensures that its clinicians and pharmacists can view all necessary DOD patient records, they may not have complete information for making effective clinical decisions about prescriptions, which, in turn, may cause unnecessary delays in providing medical care to veterans and eligible service members. In addition, VA pharmacists and clinicians do not always receive complete information from DOD's pharmacy system that is needed to perform medication order checks on new medications. Specifically, they face limitations in receiving the results of medication order checks based on DOD data when accessed through CPRS and the VistA Outpatient Pharmacy. To facilitate medication information interoperability, both departments currently map and update their medication information every month to ensure it is consistent with national standards. However, because the timing of each department's mapping updates may not always be the same, order checks cannot always be performed for new medications that have not yet been mapped to the national standards. Incomplete order checks are due, in part, to the fact that VA maps its data to the national standards rather than using standardized medication terminology directly in its pharmacy system (an approach referred to as native standardization). According to VA's interoperability plan, the department is using mapping as an interim approach in order to meet the requirements of the National Defense Authorization Act for Fiscal Year 2003, which states that VA and DOD should use national standards to exchange outpatient medication information. VHA officials said they recognized that mapping limits standardization and had started to use native standardization for certain data, such as immunizations, labs, problem lists, and encounter data.
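The timing gap between the two departments' monthly mapping updates can be illustrated with a minimal sketch. This is a hypothetical example, not VA or DOD code; the drug codes and mapping tables are invented.

```python
# Illustrative sketch only: each department refreshes its mapping to the
# national standard on its own monthly schedule, so a newly added drug can
# be mapped on one side but not the other. Identifiers are hypothetical.

va_map = {"VA-NEW-DRUG": "RX:999001"}  # VA's monthly mapping update has run
dod_map = {}                           # DOD's corresponding update has not

def cross_check(local_code, local_map, remote_map):
    """Report whether an order check can include the remote system's data."""
    std = local_map.get(local_code)
    if std is None or std not in remote_map.values():
        return "incomplete order check"  # remote side cannot match the drug
    return "complete order check"

print(cross_check("VA-NEW-DRUG", va_map, dod_map))
# -> incomplete order check
```

Until both sides have mapped the new medication, the check silently degrades: the drug exists in both systems, but no common standard code connects the two records.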
In addition, according to VA’s interoperability plan, the department has started, but has not yet completed a plan for implementing native standardization for medication and allergy data that is necessary to conduct order checks for duplicate medications, medication allergies, and medications that exceed the maximum dosage amounts. The officials could not tell us when the department expects to complete this plan. Until the department reduces the risk of incomplete order checks by completing its plan to implement an approach to using national standards for medication and allergy data, its clinicians and pharmacists will continue to receive incomplete order checks, which may present risks to patient safety. According to the Office of Management and Budget guidelines, an agency is to conduct assessments of its systems to analyze how organizational assets, such as IT systems, are able to support the organization’s mission. A key aspect of VA’s organizational mission includes providing pharmacy benefits to transitioning service members, which relies on having interoperability between VA and DOD’s systems. However, the impact of VA’s interoperable pharmacy system capabilities on transitioning service members is not known because the department has not conducted such an assessment. While the department performed an operational analysis in fiscal year 2015 for its overall medical IT support investment, the analysis did not address interoperability capabilities of systems, such as the Medication Order Check Healthcare Application, CPRS, or VistA Outpatient Pharmacy, and their impact on the care being provided to transitioning service members. VHA officials in the National Center for Patient Safety stated that it is difficult to assess system impact on veterans’ care and to link adverse medical events in patient care to the pharmacy system because there may be other contributing factors, such as personnel fatigue, team members’ dynamics, or training of the staff. 
While such factors are relevant, as previously discussed, we identified pharmacy system limitations in VA and DOD interoperability and in medical data mapping, which hindered VA clinicians' and pharmacists' ability to view DOD data. Both of these limitations prevent VA clinicians and pharmacists from consistently obtaining the prescription information necessary to perform drug-to-drug checks and to make informed clinical decisions on patient care. Thus, without an assessment, VA cannot be assured of the potential impact on veterans of the interoperability of its pharmacy system with DOD's system. Further, in the absence of such an assessment, VA lacks assurance regarding the effectiveness of its pharmacy system in adequately supporting its mission of providing health care to veterans. Industry practices that have been suggested for improving the efficiency and effectiveness of clinicians and pharmacists in prescribing and dispensing medications include the six selected practices identified earlier in this report. These practices focus on enabling clinicians and/or pharmacists to (1) order medications electronically, (2) receive drug-to-drug and drug-allergy interaction checks, (3) track the dispensing of controlled prescription drugs, (4) electronically exchange prescriptions with non-VA entities (i.e., private or DOD clinicians and pharmacies), (5) utilize clinical decision support capabilities, and (6) maintain a perpetual inventory management capability to monitor medication inventory levels. We found that VA implemented pharmacy system capabilities that align with three of these six practices. Specifically, as discussed earlier, the department's current pharmacy system capabilities incorporate two practices: the ability for clinicians to order medications electronically and to receive drug-to-drug and drug-allergy interaction checks.
In this regard, VistA and CPRS provide the ability for clinicians to electronically order patient medications at local VHA sites and for clinicians and pharmacists to receive drug-to-drug and drug-allergy interaction checks through the Medication Order Check Healthcare Application. The Medication Order Check Healthcare Application also provides duplicate therapy and maximum single dose order checks. According to VA's Pharmacy Re-engineering plans, VA intends to implement the ability for the Medication Order Check Healthcare Application to deliver maximum daily dose order checks for clinicians and pharmacists beginning in May 2018. Further, VA has taken steps related to a third industry practice, tracking the dispensing of controlled prescription drugs through state-run prescription drug monitoring programs. According to Pharmacy Benefits Management Services officials, and based on our visits to selected medical sites, VA currently sends data on prescriptions for controlled substances to 47 state programs. The officials added that the department plans to begin sending controlled substance prescription data to 3 additional state programs when appropriate agreements have been established. To retrieve data from the state prescription drug monitoring programs, VA clinicians manually access the state monitoring databases and document in the patient's file that they accessed these databases. According to the user manuals for VA's CPRS, Medication Order Check Healthcare Application, and VistA Outpatient Pharmacy, clinicians receive alerts warning them of duplicate orders for controlled substances via these applications; the alerts prompt them to review data from the state monitoring databases. Clinicians at the sites we visited use a template in CPRS that identifies clinical warning signs of potential controlled substance abuse.
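A duplicate-order alert of the kind the user manuals describe can be sketched minimally. This is a hypothetical illustration, not the CPRS or VistA implementation; all drug codes and class labels are invented.

```python
# Illustrative sketch of a duplicate-therapy alert like the controlled-
# substance warnings described above: a new order is flagged when an
# active order falls in the same drug class. Codes/classes are made up.

DRUG_CLASS = {
    "RX:1049221": "opioid",
    "RX:856987": "opioid",
    "RX:197361": "ace-inhibitor",
}
active_orders = ["RX:1049221"]  # patient's current medications

def duplicate_therapy_alert(new_code):
    """True if the new order duplicates the class of an active order."""
    return any(DRUG_CLASS[c] == DRUG_CLASS[new_code] for c in active_orders)

print(duplicate_therapy_alert("RX:856987"))  # -> True (prompts review)
```

In practice such an alert would prompt the clinician to consult the state monitoring database before proceeding with the order.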
While VA’s system includes capabilities that are consistent with three of the selected industry practices, the department has not implemented capabilities that align with three other selected practices that could enhance its pharmacy system’s usefulness. Specifically, it has not implemented practices related to electronically exchanging prescriptions with non-VA entities (e.g., private or DOD), using certain clinical decision support capabilities, and maintaining a perpetual inventory management capability to monitor medication inventory levels. According to the Office of the National Coordinator for Health Information Technology, health IT systems should enable a user to electronically send prescriptions to, or receive them from, non-VA providers and pharmacies in accordance with National Council for Prescription Drug Programs standards. The Office of the National Coordinator for Health Information Technology also stated that electronic prescriptions should have the capability to include key information, such as the reason for the prescription, the diagnosis, and the ability to transmit the prescription in a secure manner. This is important to prevent the risk of loss or misinterpretation, which may occur with hand-written prescriptions. However, CPRS and the VistA Outpatient Pharmacy application do not have the functionality that would enable clinicians or pharmacists to electronically receive prescriptions from non-VA providers or non-VA pharmacies (private or DOD). In addition, these applications do not have functionality that enables clinicians or pharmacists to send prescriptions to external providers and pharmacies, which are referred to as outbound prescriptions. 
As a result, veterans must obtain paper prescriptions or have prescriptions faxed from non-VA providers, and then submit the prescriptions to their local VA medical sites so that the VA pharmacy can manually input the prescriptions into the system and fill them—a process that is time-consuming and inefficient. VA has recognized the need to exchange prescriptions with non-VA providers and pharmacies, and has plans to include the capability to receive electronic prescriptions and use the National Council for Prescription Drug Programs standards in the VistA Outpatient Pharmacy application. Specifically, according to Pharmacy Re-engineering documentation, its Inbound ePrescribing project is intended to provide the ability to receive electronic prescriptions from a non-VA provider or a non-VA pharmacy. According to the VistA 4 Roadmap, the Inbound ePrescribing project was originally planned for national deployment in March 2016. VA officials from the Office of Information and Technology stated that they began development of inbound electronic prescribing capabilities in July 2016. However, the initiative was delayed because the technical system infrastructure was not available to support it. Among other actions, these officials said VA needed to redefine technical requirements for the acquisition process and rebaseline and reapprove planning documents. The department now plans to begin releasing this functionality in August 2017. On the other hand, with regard to sending electronic prescriptions to non-VA pharmacies, VA does not yet have plans to implement outbound electronic prescribing capabilities. Pharmacy Benefits Management Services officials stated that doing so would require complex modifications to CPRS, or changes to the Enterprise Health Management Platform, which VA is in the initial stages of deploying.
However, without outbound electronic prescribing capabilities, VA's ability to electronically send prescriptions to non-VA pharmacies is limited, and the department faces increased risk that a clinician's prescription will not be entered correctly at a non-VA pharmacy. This could lead to the wrong medication being dispensed or other patient safety issues, including dosing mistakes. In addition, veterans may be inconvenienced because their prescriptions are not electronically transmitted to private pharmacies. As previously discussed, Gartner's Generation Model for Enterprise Electronic Health Record systems is a framework in which generation level 3 calls for establishing effective clinical decision support capabilities, clinical workflow, and clinical display, as well as computer-based physician order entry. In robust electronic health record systems, ordering medication is tightly connected to clinical decision support. Clinical decision support helps clinicians make complex decisions and can trigger appropriate early notification of possible untoward events. VA's health information system, VistA, including CPRS, does not have certain capabilities that could enhance clinical decision support for patient treatment. Moreover, while VA has undertaken a new initiative to address these deficiencies, the initiative is not in clinical use and its plans are incomplete. According to VA documentation, a 2011 evaluation found that VistA and CPRS did not have generation level 3 capabilities and noted that, compared to commercial solutions, VistA lagged with regard to key clinical functionalities such as clinical decision support, clinical display, and clinical workflow. For example, VistA does not always display data in a meaningful manner that contributes to the clinician's ability to use the data effectively.
VA also identified that CPRS has limited capability for presenting patient information recorded at DOD and other VHA medical sites in a manner that supports clinicians' effective use of patient data. In this regard, data such as laboratory tests and medications are currently not available for viewing on the same screen, but should be considered together to improve the understanding of how medications affect patients. In order to provide clinicians and pharmacists with more clinical decision support and to help with patient treatment, in 2014, VA initiated development of the Enterprise Health Management Platform, which, according to the system design document, is a multi-year effort to modernize the department's electronic health record system and replace parts of CPRS. The current version of the Enterprise Health Management Platform is to have capabilities for clinicians to view patient data from both VA and non-VA providers and pharmacies on a single screen, and for clinicians to customize the screen—enabling data such as laboratory tests and medications to be displayed together—to meet the clinician's data needs. By addressing the existing limitation of laboratory and test data not being available on a single screen when creating prescriptions, the Enterprise Health Management Platform is expected to provide a clinical decision support capability that improves clinicians' ability to consider all relevant information when creating prescriptions. According to VA officials, these capabilities are currently being tested. Nevertheless, while the Enterprise Health Management Platform will include some generation level 3 clinical capabilities, such as those mentioned above, it is currently not in clinical use and does not have additional capabilities that could make the pharmacy system more useful to clinicians and pharmacists and enhance clinical decision support and clinical display.
These capabilities include the ability to proactively alert a clinician that medication dosage may need to be adjusted based on medical test results, which would help ensure that medication is prescribed based on current medical information for patients, and the ability to navigate from an alert directly to a new medication order screen to change the medication. While VA has plans and time frames for implementing the capability to proactively alert a clinician about medication dosages, VHA officials did not have specific time frames or milestones for when the ability to navigate from an alert to medication order is expected to be achieved. The officials stated that they did not yet have time frames for this effort because the department is evaluating alternatives for its future electronic health record system. Nevertheless, until VA implements certain clinical decision support capabilities, such as the ability to navigate from an alert directly to a new medication order screen, VA clinicians and pharmacists will lack important capabilities that could enhance clinical decisions related to prescribing medications. Industry practices stress the use of a computerized system to manage perpetual inventory so that the system displays up-to-date pharmaceutical inventory at all times. This includes the capability to, when dispensing or restocking medication, update the inventory totals to accurately reflect the amount of medication that is in stock, and to set minimum inventory levels for medications that, when reached, alert the pharmacy to reorder the medication. However, VistA does not include a perpetual pharmaceutical inventory management system to monitor the inventory of VA’s pharmacy medications. Specifically, the pharmacy system cannot consistently update inventory totals to accurately reflect the amounts in stock and cannot set minimum inventory amounts for automated reordering of medication. 
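The perpetual inventory practice described above can be sketched minimally. This is an assumed design for illustration, not VA code; medication names and thresholds are invented.

```python
# Minimal sketch (assumed design, not VA code) of the perpetual inventory
# practice described above: every dispense updates the on-hand total, and
# falling to a configured minimum raises a reorder alert.

class PerpetualInventory:
    def __init__(self):
        self.on_hand = {}     # medication -> units currently in stock
        self.reorder_at = {}  # medication -> minimum level before reorder

    def restock(self, med, qty):
        self.on_hand[med] = self.on_hand.get(med, 0) + qty

    def set_minimum(self, med, level):
        self.reorder_at[med] = level

    def dispense(self, med, qty):
        self.on_hand[med] -= qty  # total stays current on every dispense
        if self.on_hand[med] <= self.reorder_at.get(med, 0):
            return f"REORDER ALERT: {med} at {self.on_hand[med]} units"
        return None

inv = PerpetualInventory()
inv.restock("lisinopril 10mg", 100)
inv.set_minimum("lisinopril 10mg", 20)
print(inv.dispense("lisinopril 10mg", 85))
# -> REORDER ALERT: lisinopril 10mg at 15 units
```

The two capabilities the paragraph names correspond to the two mechanisms here: the running on-hand total updated on each transaction, and the configured minimum that triggers the reorder alert.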
For example, pharmacists at sites that we visited said that the automated machines they rely on to dispense medication are not integrated with their inventory systems, resulting in labor-intensive processes for updating and tracking when to reorder medications. Further, a VHA pharmacist at one site stated that, compared to commercial retail pharmacies, VA's pharmacy system does not automatically update the inventory total when medication is dispensed (with the exception of controlled substances, which they monitor closely). In addition, according to Pharmacy Benefits Management Services officials, VA's pharmacy system does not have the capability to set automated reorder levels so that pharmacists receive an automated alert to reorder medication when inventory levels drop to a specified amount. According to VHA officials, the department has not prioritized requests to develop inventory management capabilities, although plans and funding requests for these capabilities were included in the original Pharmacy Re-engineering plans and have been resubmitted for inclusion in the budget by Pharmacy Benefits Management Services officials annually since 2009. Pharmacy Benefits Management Services officials added that other priorities, such as the development of the Medication Order Check Healthcare Application, have prevented VA from replacing inventory management system capabilities, as originally planned in 2002. However, without the ability to monitor and update the inventory of pharmacy medications, VHA pharmacists lack the means to effectively track when to reorder medications, which can potentially impact patients' health care and safety. VA currently uses VistA and multiple other computer applications to support clinicians and pharmacists in prescribing and dispensing medications to patients.
However, inefficiencies exist in the ability of VA pharmacists to view patient medication data between the department's pharmacy systems, which limits their ability to efficiently process and dispense medications. Most notably, the Outpatient Pharmacy application lacks a graphical user interface and, therefore, pharmacists must take additional steps to view all the necessary information. While pharmacists have requested a modernized graphical user interface since 2001, VA has not developed one, nor does it have plans to do so going forward. Until pharmacists can efficiently view all necessary medication data, there is a risk that veterans' safety may be compromised. In addition, VA has implemented capabilities to exchange patient data with DOD via the Joint Legacy Viewer and other sharing initiatives. Nevertheless, VA continues to face limitations in its ability to receive and use DOD data: clinicians and pharmacists cannot always view DOD patient data, and VA pharmacists cannot always receive complete order checks from DOD for new medications. Until the department addresses these limitations in viewing DOD patient data and receiving complete order checks from DOD, clinicians and pharmacists will continue to lack the tools to make efficient clinical decisions about prescriptions, which could negatively affect patient safety. Moreover, VA has not assessed the impact that these shortcomings and its pharmacy system's interoperability with DOD have on veterans. Without such an assessment, VA will be hindered in its ability to determine the effectiveness of delivering pharmacy services and the potential impact on veterans. Finally, while VA's pharmacy system incorporates some industry practices, it lacks other capabilities, such as electronic prescribing, certain clinical decision support, and inventory management, which could enhance the system's usefulness.
VA has plans to address part of the electronic prescribing capability, but the plans are incomplete because they do not address outbound prescriptions sent to non-VA pharmacies. VA's planned Enterprise Health Management Platform is expected to position the department to achieve clinical decision support capabilities; however, the platform is not in clinical use, and there is uncertainty about VA's implementation approach for delivering these important capabilities. Further, VA has not prioritized the development of pharmacy system capabilities to update and monitor inventory needed to track when to reorder medications. Lacking these capabilities, the department will continue to be limited in its ability to exchange prescriptions with non-VA providers, provide additional clinical decision support, and track medication, which could impact veteran patient safety. To provide clinicians and pharmacists with improved tools to support pharmacy services to veterans and reduce risks to patient safety, we recommend that the Secretary of Veterans Affairs direct the Assistant Secretary for Information and Technology and the Under Secretary for Health to take the following six actions:

establish and implement a plan for updating the pharmacy system to address the inefficiencies with viewing patient medication data in the Outpatient Pharmacy application and between the pharmacy application and viewers;

complete a plan for the implementation of an approach to data standardization that will support the capability for clinicians and pharmacists to view complete DOD data and receive order checks that consistently include DOD data;

conduct an assessment to determine to what extent interoperability of VA's pharmacy system with DOD's pharmacy system is impacting transitioning service members;

develop and execute a plan for implementing the capability to send outbound e-prescriptions to non-VA pharmacies, in accordance with National Council for Prescription Drug Programs standards;

ensure that the department's evaluation of alternatives for electronic health records includes consideration of additional generation level 3 capabilities, such as navigating from an alert to a medication order in the electronic health record system; and

reassess the priority for establishing an inventory management capability to monitor and update medication levels and track when to reorder medications.

We provided a draft of this report to VA, DOD, and HHS for their review and comment. In its written comments on a draft of this report (reprinted in appendix IV), VA generally concurred with our six recommendations and described various actions that it planned to take to address them. DOD provided technical comments, which we incorporated into our report as appropriate. HHS did not provide comments. After VA received our draft report, VHA officials expressed concerns with the wording of our second recommendation (pertaining to completing a plan for the implementation of an approach to data standardization that will support the capability for clinicians and pharmacists to view complete DOD data and receive order checks that consistently include DOD data). VHA officials noted that the recommendation required actions by DOD in addition to the actions directed specifically at VA. Based on further discussion with these officials, we revised our second recommendation to emphasize the importance of VA completing a plan for the implementation of an approach to data standardization that will support its clinicians and pharmacists in viewing complete DOD data and receiving order checks that consistently include DOD data. In its written comments that addressed the revised recommendation, the department stated that it concurred in principle with this recommendation.
In this regard, the department stated that, while viewing complete DOD data is essential for the safe care of veterans, VA's consistent viewing of DOD pharmacy data is dependent on that department's completion of its data interoperability initiatives. VA added that, in the interim, both departments are planning enhancements to the joint DOD/VA Clinical Health Data Repository to improve the exchange of pharmacy data between VA and DOD, and plan to complete this effort in fiscal year 2018. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6304. I can also be reached by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The Senate Appropriations Committee Report accompanying the Consolidated Appropriations Act for fiscal year 2016 called for us to examine the Department of Veterans Affairs' (VA) acquisition and use of a pharmacy data system. Our objectives were to determine whether: (1) VA currently possesses a functioning pharmacy system and the extent to which the system enables data to be viewed, shared, and transferred among Veterans Health Administration (VHA) pharmacy locations; (2) VA's pharmacy system is interoperable with the Department of Defense's (DOD), and whether this system, or the absence thereof, is impacting service members who transition care from DOD; and (3) VA has implemented its pharmacy system in accordance with health care industry practices.
To address the first objective, we obtained and analyzed documentation on VA's pharmacy system, such as technical manuals and architecture diagrams, which showed the current and planned capabilities of the pharmacy system. We analyzed the documents to identify the key systems that VHA clinicians and pharmacists used to order and dispense medication, and to assess system capabilities for viewing patient data and medication, both at a veteran's home facility where most care is provided as well as at other VHA facilities. We also assessed system capabilities of the Outpatient Pharmacy application with a focus on outpatient care because about 70 percent of prescriptions are for outpatient use. We validated our initial assessments through observing demonstrations of the pharmacy system at VA medical centers in Baltimore, Maryland; Butler, Pennsylvania; and San Antonio, Texas; and at a joint VA and DOD health center in North Chicago, Illinois. During these site visits, we reviewed how the system enables the viewing, sharing, and transferring of pharmacy data between VHA locations. Our criteria for selecting these sites were intended to ensure coverage of: (1) different geographic locations, (2) the variety of VA facilities (e.g., a medical center and an independent outpatient clinic), and (3) a location piloting the new VA enterprise health platform (the Enterprise Health Management Platform). Additionally, each of the sites we selected had an on-site pharmacy. Our selection of medical sites ensured that we included diverse geographic locations of varying sizes and breadth of medical services offered. At the sites, we met with clinicians, including doctors, nurses, and clinical pharmacists, and with pharmacists who review and dispense orders for prescriptions.
We conducted site visits at medical facilities where we discussed the processes and corresponding systems and data viewers that clinicians and pharmacists used to provide health care to veterans, as well as what was working well and whether there were any limitations of the system in conducting their work. In addition, we obtained the perspectives of officials representing VA's Pharmacy Benefits Management Services and Office of Information and Technology on the strengths and limitations of the department's pharmacy system, the underlying causes of any limitations, and plans to address the limitations. To address the second objective, we reviewed VA technical manuals, architecture diagrams, and documents produced by the VA/DOD Interagency Program Office. We also analyzed VA's plans and identified its actions taken toward achieving interoperability with DOD; we then compared the department's actions to certain requirements specified in the fiscal year 2003, 2008, and 2014 National Defense Authorization Acts. Specifically, we reviewed VA responses and documentation on: data exchange mechanisms and services, implementation and use of national standards in pharmacy systems, pharmacy data checking, pharmacy system metrics such as availability, and pharmacy system testing. We observed VA clinicians' and pharmacists' use of the VA and DOD systems at our selected sites to determine whether they could exchange pharmacy data in real-time and perform prescription drug interaction checks for outpatients. We also observed pharmacy capabilities of the DOD systems, and how those systems access VA data, at the joint VA and DOD health center in North Chicago, Illinois, and during two system demonstrations in Washington, D.C. Additionally, we reviewed VA documentation to see how the department was monitoring and checking prescription drug data that is exchanged with DOD.
We also evaluated whether VA systems used national standards for the exchange and mapping of outpatient medication information between VA and DOD. We obtained written responses from VA to questions on interoperability and reviewed VA reports and documents, including a report to Congress on interoperability standards, to evaluate the extent of pharmacy system conformance to national standards for the exchange of outpatient medication information between VA and DOD. Further, we selected and contacted Veterans Service Organizations to determine whether they could provide information on the impact of interoperability of VA and DOD systems on veterans. We selected organizations that (1) we had identified in our prior work related to transitioning service members, (2) represented veterans from recent conflicts, and (3) were referred to us by VA or other Veterans Service Organizations. This resulted in the selection of six organizations for our review, which we determined had not reported on the impact of VA and DOD pharmacy information technology (IT) systems interoperability on veterans. Lastly, to address the third objective, we took the following steps to identify best practices of the health care industry. We conducted literature searches, reviewed our prior work, and consulted with the Department of Health and Human Services' Office of the National Coordinator for Health Information Technology, VA's Pharmacy Benefits Management Services, and a private provider. We reviewed the Office of the National Coordinator for Health Information Technology 2015 Edition Health Information Technology Certification Criteria, and its 2016 Interoperability Standards Advisory, to identify the standard for electronic prescribing. We also reviewed the Office of the National Coordinator for Health Information Technology's report to Congress on the prescription drug monitoring program interoperability standards.
Further, we reviewed the 2014 National Defense Authorization Act, Gartner's Update to the Enterprise Electronic Health Record Generation Model, standards set by the National Council for Prescription Drug Programs, and publications from the American Society of Health-System Pharmacists and the Archives of Pharmacy Practice. From these sources, we compiled a list of practices that the health care industry has identified as being relevant to the implementation of an effective pharmacy IT system and that reflect areas of relevance with regard to VA's health information system capabilities. This action resulted in a list of six practices that relate to (1) ordering medication electronically, (2) receiving drug order checks, (3) tracking the dispensing of controlled prescription drugs, (4) electronically exchanging prescriptions, (5) using clinical decision support capabilities, and (6) using a perpetual inventory management capability to monitor medication inventory levels. We confirmed the validity and relevance of the identified practices with the Office of the National Coordinator for Health Information Technology. We also confirmed our selection of the practices through discussions with industry leaders, and based on the views and experiences of these sources, we characterized the practices that we assessed in the third objective as industry practices (rather than as best practices). In addition, we reviewed the pharmacy system architecture and user documents, a VA Office of Inspector General report on Pharmacy Re-engineering, and VA's plans to implement pharmacy system capabilities through its Pharmacy Re-engineering project and plans to modernize its Veterans Health Information Systems and Technology Architecture (VistA), such as the VistA 4 Roadmap.
We compared the industry practices to current VA system capabilities and modernization plans to identify additional practices VA could implement to enhance its pharmacy IT system to be more aligned with those of the industry. We supplemented our analyses with interviews of VA, DOD, and Department of Health and Human Services officials with knowledge of VA’s pharmacy systems and the interoperability efforts within VA and between VA and DOD. VA officials included those in the department’s Office of Information and Technology; VHA and its Pharmacy Benefits Management Services; and the VA National Center for Patient Safety and Informatics Patient Safety. We also interviewed officials from the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology, as well as DOD officials from the Defense Health Agency and DOD/VA Program Coordination Office.

We conducted this performance audit from January 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To enable the provision of health care services to veterans, the Department of Veterans Affairs (VA) uses its integrated health information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which was developed in-house by VA clinicians and information technology (IT) personnel. The system consists of approximately 200 separate computer applications and modules, 17 of which are pharmacy-related applications. The following table describes VistA applications categorized as pharmacy applications.
The following table describes the Department of Veterans Affairs (VA) and the Department of Defense’s (DOD) initiatives to share patient data, the pharmacy data exchanged, limitations, and plans for decommissioning.

In addition to the contact named above, Tammi Kalugdan (Assistant Director), Daniel Wexler (Analyst in Charge), Nabajyoti Barkakati, Jennifer Beddor, Christopher Businsky, Debra Conner, Rebecca Eyler, Wilfred Holloway, Anh Le, Carlo Mozo, Monica Perez-Nelson, Martin Skorczynski, and Merry Woo made key contributions to this report.

VHA provides health care services, including pharmacy services, to approximately 6.7 million veterans and their families. To do so, clinicians and pharmacists rely on VA's health information system. The National Defense Authorization Act for Fiscal Year 2003 required VA to ensure it has a pharmacy system that is interoperable with DOD's system. A provision in Senate Report 114-57 required GAO to examine VA's acquisition and use of a pharmacy system.

GAO determined whether (1) VA currently possesses a functioning pharmacy system and the extent to which the system enables data to be viewed, shared, and transferred among VHA pharmacy locations; (2) VA's pharmacy system is interoperable with DOD's, and whether this system, or the absence thereof, is impacting service members who transition care from DOD; and (3) VA has implemented its pharmacy system in accordance with health care industry practices. GAO analyzed documentation describing VA's pharmacy system; observed system demonstrations; analyzed plans and actions taken to achieve interoperability with DOD; and identified industry practices related to pharmacy systems, and compared them to VA's system capabilities.

The Department of Veterans Affairs (VA) has system capabilities through multiple computer applications that support its clinicians and pharmacists in prescribing and dispensing medications to patients.
However, pharmacists cannot always efficiently view necessary patient data among Veterans Health Administration (VHA) medical sites. In addition, pharmacists cannot transfer prescriptions to other VHA pharmacies or process prescription refills received from other VHA medical sites through the system. As a result, the system does not provide important capabilities for pharmacists to make clinical decisions about prescriptions efficiently, which could negatively affect patient safety.

In its efforts to establish and increase interoperability with the Department of Defense (DOD), VA has developed capabilities to exchange certain patient and medication information. For example, VA's pharmacy system has the ability to check prescription drug information from DOD. Nevertheless, limitations impede interoperability with DOD: (1) VA clinicians and pharmacists cannot always view DOD patient data and (2) VA pharmacists do not always receive complete information from DOD to perform prescription checks on new medications. Also, VA has not assessed the impact of its pharmacy system interoperability on service members transitioning from DOD to VA, and VHA officials stated that doing so would be difficult because there are other personnel-related factors that could affect patient-care outcomes. Without assessing the impact that pharmacy system interoperability is having on veterans, VA lacks assurance regarding the effectiveness of the system to adequately support its mission of providing health care to veterans.

VA's pharmacy system capabilities align with three of six identified health care industry practices. Specifically, the pharmacy system (1) provides the ability to order medications electronically, (2) enables prescription checks for drug-to-drug and drug-allergy interactions, and (3) tracks the dispensing of controlled prescription drugs.
However, the pharmacy system lacks capabilities that align with three other practices, which could enhance its usefulness:
- Pharmacists cannot electronically exchange prescriptions with non-VA providers and pharmacies. Therefore, veterans need to obtain paper prescriptions from external providers or have the providers fax the prescriptions to their local VA pharmacy to fill the prescriptions, which is time consuming and inefficient.
- VA's system does not include certain clinical decision and workflow capabilities that, among other things, could improve clinicians' and pharmacists' ability to provide enhanced medical care to veterans. VA has indicated that it plans to implement such capabilities, but its plans for doing so are incomplete.
- VA's system does not maintain a perpetual inventory management capability to monitor medication inventory levels. Therefore, pharmacists cannot effectively track when to reorder medications.

In the absence of these capabilities, VA is limited in its ability to interoperate with private providers, provide additional clinical decision support, and more effectively track medications that could impact veterans' patient safety. GAO is making six recommendations, including that VA update its pharmacy system to view and receive complete medication data, assess the impact of interoperability, and implement additional industry practices. VA generally concurred with GAO's six recommendations.
On August 5, 1998, PBS entered into a lease with Jack I. Bender & Sons, General Partnership to provide lease space and parking for SSUD at an annual net rent of $2,129,461. This figure exceeded the prospectus threshold of $1.93 million for fiscal year 2000, the year in which occupancy is to commence. A prospectus was not submitted to GSA’s Senate and House authorizing committees at the time the lease was signed.

As the federal government’s primary real estate agent, GSA, through PBS, provides space for agencies in federally owned buildings or by leasing space in privately owned buildings. NCR is responsible for providing space for agencies in the Washington, D.C., metropolitan area. Pursuant to section 210(h)(1) of the Federal Property and Administrative Services Act of 1949, as amended, 40 U.S.C. 490(h), the Administrator of GSA is authorized to enter into lease agreements for periods of up to 20 years on such terms as the Administrator deems to be in the interest of the United States and necessary for the accommodation of federal agencies. Section 7(a) of the Public Buildings Act of 1959, as amended, 40 U.S.C. 606(a), among other things, provides for a detailed project description, called a prospectus, containing a project cost estimate and justification to be submitted to GSA’s Senate and House authorizing committees. A prospectus is called for if the average annual rental of a lease is expected to exceed the prospectus threshold specified in the statute, which GSA adjusts annually, as authorized by the statute, to reflect changes in costs during the preceding year.

Annually, the PBS National Office issues a Capital Investment and Leasing Program call asking all GSA regional offices to submit their prospectus-level projects. Each year the National Office provides the regions with the prospectus-level threshold and general guidance on preparing prospectuses.
The National Office reviews the prospectuses submitted by the regions, and the prospectuses that it approves are then consolidated into GSA’s Capital Improvement and Leasing Program and submitted to the Office of Management and Budget (OMB) for approval. Once OMB approves the program, it is sent to GSA’s authorizing committees. PBS stated that its policy since 1972 has been not to enter into any lease agreement if the annual rental exceeds the prospectus threshold unless the authorizing committees have approved a prospectus.

At your request, we assessed the circumstances surrounding the award of the SSUD lease. As agreed, we did not evaluate NCR’s overall process for identifying prospectus-level leases or for preparing prospectuses.

To determine the circumstances surrounding the award of the lease for SSUD, we spoke with the cognizant NCR officials in the Regional Counsel’s Office, Portfolio Management Division, and Property Acquisition and Realty Services Division; reviewed GSA’s leasing policies and procedures; and discussed policy issues and guidance provided to regional offices with officials in PBS. We also spoke with two former NCR staffers—the original contracting officer for the SSUD lease and an attorney from the Regional Counsel’s Office—since both played significant roles in this acquisition. We reviewed the contract file for the lease to determine the acquisition process used and the critical decision points. Only limited documentation was available to support some of what we considered to be the critical decisions, such as the initial decision that this action was not a prospectus-level acquisition. Thus, some of the information being provided in this report is based on what current and former GSA officials remembered about events that occurred up to 3 years ago. We also obtained information on actions taken by NCR to help prevent prospectus-level leases from being awarded without a prospectus being prepared.
Further, we obtained and reviewed GSA’s policies and guidance related to the preparation of lease prospectuses, and we verified that the rental-of-space account in the Federal Buildings Fund (FBF) had sufficient appropriated funds to cover the obligation for the SSUD lease. We discussed the specifics of the SSUD lease with an official in PBS’ Office of Portfolio Management. We also contacted regional officials in 9 of GSA’s 10 other regions to determine whether they had guidance in place specifying how to calculate the lease costs to be used to determine if a prospectus is needed for an acquisition, and whether the decision that a lease is not prospectus-level is revalidated when space requirements or market rental rates change. We were unable to contact the appropriate official in the remaining GSA region in time for inclusion in this report.

We did our work between March and July 1999, in accordance with generally accepted government auditing standards. On July 26, 1999, we requested comments on a draft of this report from the Administrator of GSA. GSA’s written comments are discussed near the end of this letter.

In NCR, it is initially the Portfolio Management Division’s responsibility to review expiring leases to identify new leases that might be above the prospectus threshold and to prepare the prospectuses for those leases. In the case of the SSUD lease, there was no indication that Portfolio Management identified the lease as potentially needing a prospectus. According to the contracting officer, who has since left GSA, even though Portfolio Management had not identified this lease as needing a prospectus, when she began working on the SSUD lease in April 1996, she confirmed her expectation that the lease would be below the prospectus threshold. Her estimate of the lease costs was made by multiplying the expected market rental rate ($29 to $30 per square foot) by the estimated space requirement (50,000 square feet).
This calculation resulted in an estimated total rent of $1.45 million to $1.5 million, which was below the fiscal year 1998 prospectus threshold of $1.81 million that was initially being used for this lease. Therefore, she went forward with the acquisition process as a nonprospectus-level lease.

In the contract file, we found documents showing that early in the acquisition process there was information available indicating that the SSUD lease could be closer to the fiscal year 1998 prospectus threshold. A letter, dated June 27, 1996, to NCR from a Secret Service official estimated that SSUD would need 55,000 to 60,000 square feet of space. Using the contracting officer’s estimated market rental rate ($29 to $30 per square foot), the dollar range of $1.6 million to $1.8 million for that much space would have been much closer to the fiscal year 1998 prospectus threshold. Although the actual space requirement had not yet been determined, it appears to us that PBS should have recognized that if the space requirement or rental rate were higher than expected, the lease could possibly exceed the fiscal year 1998 prospectus threshold.

When the Solicitation for Offers (SFO) was issued in November 1997, it stated that SSUD required 69,500 to 72,250 rentable square feet of office and related space and 78 parking spaces. The SFO stated that this was not a prospectus-level lease; therefore, to be considered, any offer had to be below the prospectus threshold. There was nothing in the contract file to indicate that a check had been done after SSUD’s space requirements were finalized to verify that NCR could still expect lease offers for this project to be below the prospectus threshold. Because of the increase in space needs over SSUD’s June 1996 estimate, it would seem prudent to have done another prospectus-level calculation before issuing the SFO.
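The screening arithmetic at issue here is a single multiplication and comparison: the estimated space requirement times the expected market rental rate, measured against the applicable prospectus threshold. A minimal sketch of that check, using the square-footage and rate figures cited in this report (the helper functions are illustrative, not a GSA tool):

```python
# Prospectus-level screening check, using figures cited in this report.
# Illustrative sketch only; not an official GSA calculation.

def estimated_annual_rent(square_feet: int, rate_per_sq_ft: float) -> float:
    """Estimated annual rent: space requirement times market rental rate."""
    return square_feet * rate_per_sq_ft

def needs_prospectus(estimated_rent: float, threshold: float) -> bool:
    """A prospectus is called for if the average annual rental exceeds the threshold."""
    return estimated_rent > threshold

# SFO minimum space requirement times the low end of the estimated market rate:
# 69,500 sq ft x $29 = $2,015,500
rent = estimated_annual_rent(69_500, 29.00)
print(f"Estimated annual rent: ${rent:,.0f}")
print("Exceeds FY1998 threshold ($1.81M):", needs_prospectus(rent, 1_810_000))
print("Exceeds FY2000 threshold ($1.93M):", needs_prospectus(rent, 1_930_000))
```

Run with the SFO minimum, both comparisons come out true, which is the point made in the text that follows: the updated estimate exceeded both thresholds.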
Had this calculation been done using the actual minimum space requirement in the SFO (69,500 square feet) times the low end of the estimated market rental rate ($29) that the contracting officer had initially used, the estimated annual rent would have been about $2.02 million. This amount exceeded both the fiscal year 1998 prospectus threshold of $1.81 million that was initially used for this lease and the fiscal year 2000 threshold of $1.93 million that was later used. We believe that if an update of the prospectus calculation had been done at this point, the need to reevaluate the prospectus decision would have been apparent to NCR.

Early in 1998, there were three offerors competing for the SSUD lease. About the time that NCR received the best and final offers, the original contracting officer left GSA. When the new contracting officer took over responsibility for the SSUD lease, he raised the question about the need for a prospectus on the basis of the offers received. In May 1998, the contracting officer reopened negotiations on the lease to clarify the calculation for determining compliance with the prospectus threshold. Before this time, there was no indication in the contract files that the offerors had been informed about how to calculate whether their offers would comply with the SFO requirement that the offer be below the prospectus threshold.

In an attempt to clarify how to calculate the prospectus threshold, the contracting officer sent the offerors two letters in May 1998. His first letter, on May 8, 1998, specified that parking, operating expenses, and the cost of amortizing the tenant allowance for above standard requirements should be subtracted from the total full-service rental rate to determine if the offer would be below the prospectus threshold. A week later, on May 15, 1998, the contracting officer sent the offerors a second letter, amending the May 8 letter.
This letter stated that only operating expenses and any concessions offered to the government should be subtracted from the total full-service rental rate when determining compliance with the prospectus threshold. Officials at the National Office and in NCR’s Portfolio Management Division cited the calculation in the May 15 letter as the correct way to determine if an offer met the prospectus threshold.

After receiving these letters from the contracting officer, attorneys for two of the offerors expressed concerns about the changes in the calculation being used to determine prospectus compliance at such a late stage in the process. Before the new contracting officer reopened the negotiations, correspondence between the offerors and NCR indicated that the offerors had been informed or led to believe that their offers met the basic requirements for the acquisition, including compliance with the requirement that their offers be below the prospectus threshold amount. Ultimately, none of the offers met that requirement on the basis of the calculation provided in the May 15 letter.

The NCR officials involved with this lease said that there were discussions about how to respond to the letters from the offerors’ attorneys and how to proceed with the acquisition. The officials said that their decision to try to complete this acquisition without a prospectus was based on the (1) time already invested in this acquisition, (2) concerns raised by the offerors’ attorneys, and (3) need to award the lease in time for the new space to be ready when SSUD’s current lease expires in February 2000. We found little written documentation in the contract files of the discussions that were held and the decisions made regarding the SSUD lease. The contracting officer said that he consulted primarily with an attorney in the Regional Counsel’s Office on this matter.
By telephone and E-mail, the attorney sought input from both the National Office and regional Portfolio Management officials. However, a consensus opinion on how to proceed with this lease was never developed. It appears that the contracting officer acted on advice from the attorney when he issued an amendment to the SFO in July 1998 informing the offerors that (1) the calculation for determining prospectus compliance was the aggregate cost of the contract, minus operating expenses, minus any tenant improvement allowance, and divided by the 20-year term of the lease and (2) the fiscal year 2000 prospectus threshold of $1.93 million would be used for this lease.

According to the attorney, who has since left GSA, his advice was based on his understanding and interpretation of the guidance he received from various sources. Specifically, he said he advised the contracting officer that the cost of SSUD’s above standard tenant requirements could be excluded from the prospectus calculation on the basis of discussions with his supervisor and his interpretation of a 1990 Comptroller General decision. However, the Comptroller General decision stated that the cost of “specials” (i.e., items above standard tenant requirements) could be excluded from the prospectus calculation because GSA elected to pay for the specials on a lump-sum basis from the tenant agency’s appropriation. The general rule relating to above standard requirements is that if the costs are paid on a lump-sum basis, they are not included in the annual net rent payment. But if the costs are amortized in the lease, they are included in the annual net rent payment. In the case of SSUD, it was clear early in the acquisition process that the cost of the above standard tenant requirements would be amortized over the term of the lease because the Secret Service did not have the funds available to pay for these costs by lump-sum payment.
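Under the July 1998 SFO amendment, then, prospectus compliance reduced to a simple formula: the aggregate contract cost, less operating expenses and any tenant improvement allowance, divided by the 20-year lease term, compared against the fiscal year 2000 threshold. A brief sketch of that calculation (the dollar inputs below are hypothetical, chosen only to illustrate the formula; they are not the actual offer amounts):

```python
# Net annual rent under the July 1998 SFO amendment: aggregate contract cost,
# less operating expenses and any tenant improvement allowance, divided by
# the 20-year lease term. Illustrative sketch; the inputs are hypothetical.

FY2000_THRESHOLD = 1_930_000  # fiscal year 2000 prospectus threshold used for this lease

def net_annual_rent(aggregate_cost: float,
                    operating_expenses: float,
                    tenant_improvement_allowance: float,
                    term_years: int = 20) -> float:
    return (aggregate_cost - operating_expenses - tenant_improvement_allowance) / term_years

offer = net_annual_rent(
    aggregate_cost=45_000_000,               # hypothetical 20-year aggregate cost
    operating_expenses=5_000_000,            # hypothetical
    tenant_improvement_allowance=2_000_000,  # hypothetical
)
print(f"Net annual rent: ${offer:,.0f}")
print("Below FY2000 prospectus threshold:", offer < FY2000_THRESHOLD)
```

Note how sensitive the outcome is to what may be subtracted: excluding the tenant improvement allowance, as the amendment allowed, can pull an offer below the threshold that would otherwise exceed it, which is the substance of the dispute described above.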
The attorney advised the contracting officer to use the fiscal year 2000 threshold because that would be the year when the lease payments begin. The contracting officer confirmed that this was consistent with the oral guidance provided by Portfolio Management in the National Office.

The contracting officer ultimately set July 10, 1998, as the date for final revisions to the offers for the SSUD lease, and two final offers were received. The third offeror withdrew because it said that it could not meet the economic requirements specified in the amended SFO. Only one of the offers actually fell below the prospectus threshold as defined in the July 1998 amendment to the SFO. According to the contracting officer, once it was determined that only one offer met the requirements, he had the attorney in the Regional Counsel’s Office, in accordance with NCR’s practice, review and concur in the lease award before it was awarded. The Budget Office also reviewed the lease as an operating lease and certified that funds were available for the award. The contracting officer signed the SSUD lease for GSA on August 5, 1998.

NCR reviewed this acquisition after the lease was awarded and questions were raised by a congressional staffer about whether it should have had a prospectus. According to an NCR official, it is GSA’s policy to prepare lease prospectuses for all leases that exceed the prospectus threshold. NCR concluded that the SSUD lease did exceed the prospectus threshold and that a prospectus should have been prepared in this case. NCR then prepared a prospectus and submitted it to GSA’s authorizing committees on September 25, 1998. During NCR’s review of this lease, it also determined that while the lease was treated as an operating lease when it was awarded, it was in fact a capital lease. As a result, about $22 million in budget authority had to be counted against GSA’s fiscal year 1998 rental-of-space account for the lease.
We verified that at the time the SSUD lease was signed, there were sufficient unobligated funds in the FBF rental-of-space account to cover the obligation.

NCR officials said that in their opinions, awarding the SSUD lease without a prospectus resulted from NCR employees’ “creatively” interpreting the prospectus threshold. According to a senior NCR official, this action was in response to the specific circumstances of this lease and does not indicate that there is a systemic problem within NCR. Although we did not evaluate NCR’s overall process to determine if there were systemic problems, we found no specific written PBS guidance on what costs are to be included in the calculation to determine whether a lease will need a prospectus. Also, NCR’s internal controls were not sufficient to ensure that the SSUD lease was correctly identified as prospectus-level and that a prospectus was prepared and submitted to GSA’s authorizing committees before the lease’s award.

To strengthen the internal controls, on October 26, 1998, NCR began requiring the Portfolio Management Division to verify all leases before they are awarded. The staff was told that “this verification must be made in sufficient time prior to award so that a different course of action (other than making an award) is available.” However, there still is no written guidance on how to calculate the costs that should be used to determine if a lease is prospectus-level or not. Also, there is still no requirement to revalidate the prospectus decision when space requirements and/or market rental rates change during the course of the acquisition.

We spoke with officials in 9 of GSA’s 10 other regional offices to ask whether they had guidance in place specifying how to calculate if a prospectus is needed for a lease, and if they revalidate the decision that a lease is not prospectus-level when space requirements or market rental rates change.
None of the officials with whom we spoke said they currently had specific written guidance to follow when determining if a lease prospectus was needed beyond the general guidance provided by the National Office. However, some of the officials said that the old leasing handbook, which is no longer used as guidance, specified that when determining whether a lease was prospectus-level, operating expenses should be subtracted from the total rent. Several officials said that it would be helpful to have specific written guidance on what costs to include and exclude when determining if a lease is expected to exceed the prospectus threshold. When we asked, these nine regional officials said they also did not specifically require that the decision that a lease was not prospectus-level be documented or revalidated when space requirements or market rental rates change. However, many of the officials said that this recheck is inherent in the process. The officials said that when the final offers are received, if those offers are above the prospectus threshold, the region cannot and does not go forward with the award.

The lack of adequate internal controls over the leasing process at NCR resulted in PBS’ signing a prospectus-level lease for the SSUD space on August 5, 1998, without first preparing and submitting a prospectus for the lease to GSA’s authorizing committees. While NCR has instituted a new policy requiring its Portfolio Management Division to verify all leases before they are awarded, written guidance on how to calculate the average annual rental to be used to determine whether a prospectus is needed still does not exist. Also, NCR still does not require that the decision that a lease is not prospectus-level be documented when that initial decision is made, or that the decision be revalidated and documented when one or both of the factors on which such a decision is based (space requirements and market rental rates) change.
Officials in nine other GSA regions with whom we spoke said that written guidance on how to calculate the average annual rental and a requirement to document and revalidate decisions that a lease is not prospectus-level are also missing in these regions.

We recommend that the Administrator of GSA direct the PBS Commissioner to issue explicit written guidance defining the specific cost elements that may be excluded from the total full-service rental rate when calculating whether a prospectus should be prepared for a proposed lease. This guidance should also cover the fiscal year threshold that should be used for making this determination for a capital lease and for an operating lease. We also recommend that the Administrator of GSA direct the PBS Commissioner to establish a requirement specifying that the decision that a lease is below prospectus-level be documented and revalidated whenever there is a change in one or both of the factors on which such a decision is based (agency space requirements and market rental rates) that could affect the outcome of the decision on whether a prospectus would be required.

On August 20, 1999, we received written comments on a draft of this report from PBS’ Commissioner. He said that the report accurately reflects the factual circumstances surrounding the award of this lease. While the Commissioner believes that the awarding of the SSUD lease without a prospectus was an anomaly, he said that he agrees with our recommendations that current written guidance is needed and has directed his staff to prepare such guidance. The Commissioner’s letter is reproduced in appendix II. In addition, an NCR official provided some technical comments, which we incorporated as appropriate.

We are sending copies of this report to Representative Robert E. Wise, Ranking Democratic Member of your Subcommittee; Senator John Chafee, Chairman, and Senator Max S. Baucus, Ranking Minority Member, Senate Committee on Environment and Public Works; the Honorable David J.
Barram, Administrator, GSA; Mr. Nelson B. Alcalde, Regional Administrator, NCR, GSA; and to others upon request. If you have any questions about this report, please call me or Ron King on (202) 512-8387. Key contributors to this assignment were Maria Edelstein, Shirley Bates, and Susan Michal-Smith.

Table I.1 contains a chronology of major events that transpired in relation to the award of the lease for the United States Secret Service Uniformed Division (SSUD).

- The General Services Administration’s (GSA) National Capital Region (NCR) and SSUD began discussing relocation from 1310 L Street.
- Secret Service provided NCR information on location requirements, generic security standards, and estimated space needs (55,000 to 60,000 square feet) so NCR could begin advertising the need for space to assess the properties that might be available for lease.
- Four offers were submitted on the basis of NCR’s SFO. Offers were submitted by Jack I. Bender & Sons, General Partnership c/o Blake Construction, Inc.; 17 H Associates L.P. and 17 H II Limited Partnership c/o Carr America; Associated General Contractors c/o Dickstein Shapiro Morin & Oshinsky LLP; and 1920 L Street LLC c/o Leggat McCall Properties.
- The offerors were given an opportunity to present their offers, were notified of the areas in which their offers did not meet the requirements, and were given the opportunity to correct these areas.
- Representatives from 1920 L Street LLC did not attend a scheduled meeting and failed to submit a best and final offer. Therefore, at this point there were three remaining offerors.
- The original contracting officer left GSA to work for the Secret Service, and a new contracting officer took over the SSUD lease. The new contracting officer said that when he became involved with the SSUD lease, he saw the need for a prospectus.
- The current contracting officer tried to clarify the calculation for determining if offers meet the prospectus threshold.
- A letter to offerors defined the calculation as the total full-service rental rate, minus parking, minus operating expenses, and minus the cost of amortizing the tenant improvement allowance. The letter also specified that, using the above calculation, the offers must not exceed the fiscal year 1998 prospectus threshold of $1.81 million.
- The contracting officer sent another letter to the offerors amending the May 8, 1998, letter. In this letter, the calculation for determining if offers meet the prospectus threshold was defined as the total full-service rental rate, minus operating expenses, and minus any concessions offered to the government. This letter also amended the prospectus threshold to fiscal year 2000, which is $1.93 million.
- Associated General Contractors of America withdrew from the competition for the SSUD lease because it said that it could not meet the economic requirements.
- Attorneys for the two remaining offerors, Jack I. Bender & Sons, General Partnership and 17 H Associates L.P. and 17 H II Limited Partnership, both wrote letters to GSA expressing concerns about the changes to the calculation for determining compliance with the prospectus threshold.
- Internal NCR discussions were held about how to calculate compliance with the prospectus threshold and which fiscal year to use for the threshold.
- Amendment number 6 to the SFO was issued stating that the prospectus threshold being used for the SSUD lease is fiscal year 2000 ($1.93M) and the calculation for determining prospectus compliance is the aggregate cost of the contract (including parking), minus operating expenses and any tenant improvement allowance, and then divided by the 20-year term of the lease.
- The final amendment (number 7) to the SFO was issued setting July 10, 1998, as the date for final revisions to offers.
- Analysis done on the offers, including the initial offer submitted by Associated General Contractors of America, found that only the offer from Jack I.
Bender & Sons, General Partnership met the prospectus threshold requirement as defined in amendment number 7 to the SFO. GSA signed a 20-year lease with Jack I. Bender & Sons, General Partnership for 72,250 rentable (64,500 usable) square feet of space at 1111 18th Street, Washington, D.C. A Senior Professional Staff Member for the Subcommittee on Economic Development, Public Buildings, Hazardous Materials and Pipeline Transportation called NCR to ask about the SSUD lease that was reported in the newspaper. He asked why the Subcommittee had not seen a prospectus for the lease since the reported size and cost of the lease appeared to be prospectus-level. A prospectus for the SSUD lease at 1111 18th Street was prepared and transmitted to GSA’s authorizing committees. Pursuant to a congressional request, GAO provided information on new leased space acquired for the Secret Service's Uniformed Division (SSUD) at 1111 18th Street, N.W., Washington, D.C., by the Public Buildings Service (PBS).
GAO noted that: (1) a lack of adequate internal controls over the leasing process at the General Services Administration's (GSA) National Capital Region (NCR) resulted in PBS' awarding a lease for SSUD on August 5, 1998, above the prospectus dollar threshold without first preparing and submitting a prospectus for the lease to GSA's Senate and House authorizing committees; (2) there was confusion about the costs that were to be considered in determining whether a prospectus was needed; (3) specific written guidance on how to calculate the cost did not exist; (4) although the space requirements increased about 40 percent—from about 50,000 square feet to about 70,000 square feet—during the acquisition process, procedures did not call for the revalidation of the decision that a prospectus was not needed when the space requirements or market rental rates used to make the decision changed during the acquisition process; (5) after a congressional staffer asked questions about the lease on August 31, 1998, NCR officials reviewed the award of the lease and determined that a prospectus should have been prepared and submitted to GSA's Senate and House authorizing committees as provided for in section 7(a) of the Public Buildings Act of 1959, as amended, 40 U.S.C. 606(a), and PBS' policy and procedures; (6) subsequently, NCR has instituted a new policy requiring its Portfolio Management Division to verify all leases before they are awarded; and (7) still, GSA has not developed specific guidance on how to calculate the cost to be used to determine whether a prospectus should be prepared, nor has GSA determined that it needs to revalidate prospectus decisions when space requirements or market rental rates change.
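The prospectus-compliance test that amendment number 6 finally settled on reduces to a simple annualized-cost calculation. The sketch below illustrates it in Python with made-up figures (this report does not disclose the actual offer amounts); the function name and input values are ours, not GSA's.

```python
FY2000_PROSPECTUS_THRESHOLD = 1_930_000  # $1.93 million (fiscal year 2000)

def annualized_lease_cost(aggregate_contract_cost, operating_expenses,
                          tenant_improvement_allowance, term_years=20):
    """Per amendment number 6: aggregate cost of the contract (including
    parking), minus operating expenses and any tenant improvement
    allowance, divided by the term of the lease."""
    return (aggregate_contract_cost
            - operating_expenses
            - tenant_improvement_allowance) / term_years

# Illustrative figures only -- not actual offer amounts.
cost = annualized_lease_cost(
    aggregate_contract_cost=52_000_000,
    operating_expenses=8_000_000,
    tenant_improvement_allowance=4_000_000,
)
print(cost)                                # -> 2000000.0
print(cost > FY2000_PROSPECTUS_THRESHOLD)  # -> True: a prospectus is required
```

An offer exceeding the threshold under this calculation would have required GSA to prepare and submit a prospectus to its authorizing committees before award.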
Table 1 lists the 37 research reactors operating in the United States. Of the 33 reactors that NRC licenses and regulates, 27 are located on university campuses. In contrast, all DOE research reactors are located in relatively isolated locations and at facilities where public access is restricted because weapons-usable nuclear materials associated with DOE’s nuclear weapons programs are also stored on site. Several factors may make research reactors a target for terrorists. For example, most U.S. research reactors are located on university campuses; while these research reactors have security systems in place, none are protected with the kind of security or armed security forces that protect nuclear power reactors. Furthermore, once inside the reactor building, terrorists may gain access to the reactor. Figure 1 shows the inside of a research reactor. In addition, while power reactors use LEU fuel, several research reactors still use HEU fuel in order to produce the appropriate conditions in the reactor for conducting a wide variety of research. HEU is attractive to terrorists looking to construct a crude nuclear weapon. NRC’s Office of Nuclear Reactor Regulation has oversight responsibility for all NRC-licensed research reactors. DOE’s Office of Nuclear Energy, Science, and Technology’s Radiological Facilities Management program is charged with maintaining DOE research reactors in a secure manner. To enforce safety, security, and emergency planning requirements, both DOE and NRC conduct routine inspections to ensure compliance with DOE orders, manuals, and directives and with NRC regulations. DOE’s Office of Independent Oversight and Performance Assurance—which independently assesses the effectiveness of DOE policies and programs in safeguards and security and emergency management for DOE facilities— routinely inspects DOE facilities for compliance with DOE safeguards and security requirements. 
NRC-licensed research reactors are licensed and routinely inspected by inspectors representing NRC’s Research and Test Reactor Section. The requirements for the physical protection of NRC-licensed research reactors are set out in NRC regulations and primarily focus on preventing the theft and diversion of fuel. In addition to the specific requirements established in the regulations, NRC may require—depending on the individual facility and site conditions—any additional measures it deems necessary to protect against radiological sabotage at research reactors that it licenses to operate above 2 MW of power. Commensurate with the security requirements, security-related inspection activity is based on a graded approach, where security measures are based on the type and quantity of nuclear material on site. For example, research reactors licensed to possess more than 5 kilograms of HEU are inspected at least annually, while reactors that are licensed to possess less than 1 kilogram of HEU are inspected at least triennially. NRC used its security assessment of NRC-licensed research reactors to determine whether additional security measures were warranted. NRC’s assessment considered an analysis of security at reactors, as well as the consequences of attacks. The security assessment also included site-specific assessments of NRC-licensed research reactors to determine the vulnerability of structures, security operations, and physical protection systems, as well as access control systems at research reactors. Using varying numbers of adversaries and capabilities, NRC assessed threat scenarios, which included theft of fuel for use in a nuclear weapon or dirty bomb and sabotage attacks designed to disperse radioactive material. NRC used the number of immediate fatalities caused by radiological release resulting from an attack at a research reactor as its criterion to measure consequences and assessed the adequacy of the security at NRC-licensed reactors.
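NRC's graded inspection approach amounts to a lookup on the quantity of HEU a reactor is licensed to possess. This minimal sketch captures only the two tiers the report describes; the intermediate tier for quantities between 1 and 5 kilograms is a placeholder assumption, since the report does not state its inspection frequency.

```python
def max_inspection_interval_years(heu_kg: float) -> int:
    """Graded inspection frequency, expressed as the maximum number of
    years allowed between security inspections for a given licensed
    HEU quantity."""
    if heu_kg > 5:
        return 1  # more than 5 kg of HEU: inspected at least annually
    if heu_kg < 1:
        return 3  # less than 1 kg of HEU: inspected at least triennially
    return 2      # 1-5 kg: assumed intermediate tier (not stated in the report)

print(max_inspection_interval_years(6.0))  # -> 1
print(max_inspection_interval_years(0.5))  # -> 3
```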
If NRC discovered that there was potential to affect public health, it was to identify countermeasures to mitigate or prevent the consequences, while considering the cost-effectiveness of these countermeasures. As a complement to DOE and NRC security efforts, NNSA’s Reactor Conversion Program has a goal of reducing or eliminating the use of HEU at research reactors. To support this goal, NRC promulgated a rule in 1986 requiring all NRC-licensed research reactors to convert to LEU if feasible and if DOE provided adequate funding. In addition, under the 2005 North American Security and Prosperity Partnership, the United States, Mexico, and Canada agreed to convert civil HEU reactors on the North American continent to LEU fuel, where such LEU fuel is available. Since 2004, NNSA has overseen the fuel conversion of U.S. research reactors. To achieve NNSA’s goal, in 2005, NNSA’s reactor conversion program partnered with the DOE Office of Nuclear Energy University Reactors Program to accelerate the conversion of U.S. research reactors by providing funding to enable research reactors where LEU is available to convert as rapidly as possible. INL is the technical lead for the reactor conversion program’s fuel development effort. To protect its four research reactors, DOE uses the security and emergency requirements developed from its DBT and counts on the security afforded by the reactors’ locations at certain national laboratories that require heightened security. Furthermore, DOE has concluded that consequences from an attack at some of its research reactors could be severe and has therefore established extensive plans and procedures for safety and security incidents. DOE’s research reactors benefit from the greater security required for the national laboratories where the research reactors are located. The laboratories are engaged in nuclear weapon activities or store special nuclear material and therefore are to meet the requirements for DOE’s 2003 DBT. 
This DBT was developed to support DOE policies for preventing unauthorized access, theft, or sabotage of nuclear weapons and all special nuclear material under DOE’s jurisdiction. More specifically, following the DBT, DOE requires its research reactors to be protected in a graded manner; that is, a reactor possessing more dangerous nuclear material must be safeguarded more securely than those that have less dangerous material. For example, SNL and INL—the locations of DOE’s Annular Core Research Reactor and its Neutron Radiography Reactor, respectively—store weapons-usable nuclear materials and therefore have robust security features and specially dedicated, heavily armed guard forces. The other two DOE reactors—the Advanced Test Reactor and High Flux Isotope Reactor—located at INL and ORNL, respectively, have extensive security features, including perimeter barbed-wire fences and armed security guards at all times. In addition, DOE requires that all personnel with routine access to DOE reactors have a federal security clearance. Among other things, this requirement helps to reduce the possibility of an insider threat. We also found that DOE is engaged in efforts to improve security at the reactor sites. For example, at the SNL and INL sites—the locations of the Annular Core Research Reactor and Neutron Radiography Reactor, respectively—DOE recently made several security upgrades, including installing new surveillance systems with thermal imaging cameras; these cameras enable surveillance of the surrounding territory for up to several miles, regardless of light and weather conditions. Despite extensive security features at DOE research reactors, we did find a security weakness and some research reactor vulnerabilities. Specifically, we discovered that the Web site for one DOE research reactor contained information about its refueling schedule.
According to security experts, reactors are more vulnerable during refueling because large doors that are normally tightly secured must be opened to deliver fuel. After we brought this weakness to DOE’s attention, the department removed the information. Concerning vulnerabilities, at two DOE research reactors, we discovered key features at the reactor facilities that were vulnerable to attack, as DOE officials acknowledged. In both cases, the reactor operators store large amounts of spent reactor fuel in pools that are easily accessible to anyone inside the reactor facility. According to national laboratory officials at both of these facilities, this fuel is dangerous because if it is damaged during a terrorist attack, it could cause a large radiological release into the area surrounding the research reactor. During visits to both facilities, the reactor operators said that an attack on their spent fuel concerned them just as much as an attack on the actual reactor because of the potential for release of radiological material into the atmosphere. These operators said that the spent fuel needs to be removed for disposal; DOE plans to remove most of this spent fuel by 2012. DOE has concluded that the consequences of an attack at some of its research reactors could be severe, possibly causing radiation to be dispersed over many square miles and requiring the evacuation of nearby areas. As a result, all facilities where DOE reactors are located have established extensive plans and procedures for responding to reactor emergencies, as DOE policies require. For example, ORNL—the location of the High Flux Isotope Reactor—has a laboratory shift superintendent on duty at all times to classify potential events and coordinate preplanned responses geared to the nature of the event. According to ORNL officials, emergencies can lead to the mobilization of significant numbers of security personnel trained to respond to emergencies at the reactor. 
This mobilization could include the activation of the mutual assistance agreement between ORNL and the neighboring Y-12 National Security Complex to deploy Y-12’s off-duty security forces to ORNL in the event of a terrorist attack. DOE policies also require DOE research reactor operators, with DOE and laboratory officials, to assess the worst-case consequences of accidents or terrorist attacks at their research reactors and develop emergency response plans that call for evacuating areas of up to 300 square miles surrounding the reactor in the event of a potentially hazardous radiological release into the atmosphere. Decisions to evacuate are made based on the amount of radiation to which people could be exposed, as determined by their proximity to the reactor and the amount of radioactivity released. Furthermore, in worst-case scenarios, DOE reactor facility emergency plans include multijurisdictional plans outlining the immediate coordination of regional and federal emergency response assets. NRC decided largely to retain the security and emergency response regulations it had in place before September 11, 2001, after conducting a security assessment between 2003 and 2006 and determining that these requirements were sufficient. However, we found that NRC’s security assessment used questionable analysis and assumptions that may not fully reflect the consequences of a terrorist sabotage attack. According to experts at INL and DHS, the consequences of a terrorist attack on a research reactor could be more severe than NRC estimates. Consequently, even though a number of NRC-licensed research reactors have recently improved security, NRC’s security and emergency response requirements may need immediate strengthening to protect against the consequences of an attack.
Between 2003 and 2006, NRC conducted a security assessment of NRC-licensed research reactors to determine whether existing security and emergency response requirements were sufficient to protect against an attack. NRC first conducted a screening analysis to assess the significance of the consequences of a sabotage attack at each of the 33 NRC-licensed research reactors and established a minimum radiological dose that an attack would have to produce before further assessment was warranted. Eventually, NRC concluded that the potential effects of terrorists sabotaging these 33 reactors were minimal and that the security and emergency response regulations for research reactor licensees did not need strengthening. In conducting this assessment, NRC established a minimum radiological dosage as the criterion to determine if a full security assessment was necessary. During its initial phase of this assessment, NRC determined that most of the reactors would experience minimal consequences from sabotage and therefore present a low radiological risk to public health and safety. For the remaining reactors, NRC conducted a further detailed security assessment. NRC concluded that the potential effects of an attack at these reactors were also minimal and that the security and emergency response regulations for research reactors did not need strengthening. NRC’s security assessment also included SNL’s evaluation of the security of NRC-licensed research reactors; however, NRC disagreed with several of SNL’s findings. NRC contracted with SNL to help perform its security assessment, and as part of this work, SNL estimated the probabilities that terrorists could successfully carry out an attack on NRC-licensed reactors. SNL found that some NRC-licensed research reactors may not be prepared for certain types of terrorist attacks.
For example, SNL’s analysis of several reactors found that under certain scenarios involving a small group of well-trained terrorists, an attack on a reactor could be successful. NRC, however, believed that SNL’s assumptions about terrorists’ capabilities were excessive and that SNL did not give enough credit to the capabilities of first responders. Ultimately, NRC disagreed with SNL about the security of research reactors. In its final analysis, NRC concluded that, because the radiological consequences of an attack would be minimal, no changes in the security and emergency response regulations for NRC-licensed research reactors were necessary. However, NRC’s security assessment may contain important shortcomings. As a result, NRC may not have a sound basis for determining the adequacy of security and emergency response requirements for its licensed research reactors. Based on our analysis and an analysis conducted by an INL reactor vulnerability expert at our request, we concluded that NRC’s security assessment used questionable assumptions and analyses about research reactor security and the potential consequences of an attack on NRC-licensed research reactors. Specifically, NRC made the following assumptions that we have reason to question: NRC assumed that terrorists would use certain weapons and tactics in attacking a reactor but did not fully consider alternative attack scenarios which could be more damaging if carried out successfully. According to an SNL expert, attacking a research reactor using this alternative approach would be a difficult and sophisticated task, which would likely require specific knowledge of reactors and sabotage techniques. Nonetheless, this expert stated that such an attack was possible and identified detailed information for carrying it out. Moreover, the attack scenarios that NRC did not fully consider could lead to more significant consequences than NRC estimates, according to an INL reactor vulnerability expert. 
NRC assumed that only a small portion of a research reactor could be damaged in a terrorist attack, resulting in the release of only a small amount of radioactivity into the atmosphere. However, according to experts at INL and DHS, it is possible that a larger portion of a research reactor could be damaged in a terrorist attack. If this occurred, these experts also noted that an attack could result in a release of a larger amount of radioactivity into the atmosphere over neighboring communities. NRC assumed that insiders with access to the reactor would only participate to a limited degree. However, in similar security assessments for DOE facilities, DOE assumes that insiders would fully participate in an attack, and it has designed its defenses on the assumption of full participation. Fully participating insiders could both provide information, such as details of the facility layout and operating schedule, and participate in an attack by performing key functions, such as opening doors or disabling alarm systems. NRC officials acknowledge that if its assessment had assumed fully participating insiders, then the results of its assessment may have turned out differently. The INL reactor vulnerability expert’s analysis concluded: “It is clear that an event as described in this report could have significant consequences. The consequences of a successful sabotage attack in addition to the direct dose could be significant radioactive material release and subsequent contamination of areas that have high socio-economic impact. It is important that the risk from these reactors be well characterized and the emergency preparedness for such an event be included in the planning process.” Because most NRC-licensed research reactors are located on college campuses or in urban areas, the release of large amounts of radiation could affect a substantial portion of the population.
We discussed the INL reactor vulnerability expert’s analysis with INL’s Deputy Associate Laboratory Director for the National and Homeland Security Directorate, who stated that the analysis was technically accurate and that the reactor vulnerability expert had done good work in preparing it. However, he cautioned us that the analysis represented the efforts of only one of INL’s reactor vulnerability experts. In his view, a more comprehensive analysis of the vulnerability and the consequences of a terrorist attack on a research reactor is warranted. Such a study should include experts from a variety of technical areas, including national intelligence sources, and involve more than one laboratory. These experts would determine the most appropriate assumptions that should be used in the analysis. For example, according to the Deputy Associate Laboratory Director, one important part of such an analysis would be examining the physical nature of damaging a research reactor. This could be done through modeling and actual experiments. Once this is determined, it would inform other aspects of a reactor vulnerability analysis and result in a more comprehensive understanding of the potential consequences of a terrorist attack. We shared the results of INL’s reactor vulnerability expert’s analysis with NRC, which disagreed with several of the basic assumptions and findings concerning the consequences of an attack on a research reactor. NRC’s reasons for its disagreement, and our analysis of these reasons, are discussed in detail in the classified version of this report. NRC maintains an active oversight program of all research reactor licensees, which includes routine safety and security inspections. Between 2001 and 2006, NRC worked with its licensees to make immediate security improvements to research reactors where needed.
As a result of continuing oversight activities, when NRC found additional security measures were necessary to ensure public health and safety, NRC requested that licensees implement additional security measures. NRC verified improved security through inspections and issued letters formally binding the licensees to maintain security enhancements. During our visits to NRC-licensed research reactors, we found the following improvements to security: improved access controls to key areas inside reactor facilities, augmented surveillance of activities within controlled access areas, and improved alarm and communication systems. For example, one NRC research reactor licensee installed antitruck bomb barriers, including concrete and steel reinforced poles and a steel cable gate, which are not required for the category of reactor at this particular facility. In fact, we discovered that several of NRC’s research reactor licensees have made security improvements that exceed NRC’s security requirements. Similarly, to address the potential truck bomb threat, several other NRC research reactor licensees have placed jersey barriers near exterior parts of reactor buildings. Figure 2 shows a research reactor building surrounded with jersey barriers. Some NRC-licensed research reactors have added jersey barriers, installed new steel-hardened doors, and improved camera surveillance systems. Still another licensee installed a new alarm system that is hardwired to the closest police station, which monitors reactor alarms at all times. Despite such improvements, we identified potential shortcomings with current security and emergency response requirements and measures. These requirements and measures may require immediate attention if NRC’s assessment of the consequences of an attack on its licensed reactors is deficient. 
For example: At two research reactors we visited, we found features of the reactor that, if damaged during an attack, could put the reactor at greater risk of radiological releases. According to an SNL security analysis of NRC-licensed research reactors, a number of reactors could be attacked and sabotaged by well-trained terrorists. If an NRC-licensed research reactor were attacked, the local police would have to assess the threat and determine the appropriate response before the attackers have completed the tasks needed to sabotage the reactor. At still another research reactor, we found an unlocked and unalarmed access point leading directly into the reactor room. In this case, the licensee is relying on another security measure that could be compromised. In our view, it is both sensible and inexpensive to put a lock and an alarm trigger on this access to the reactor room, rather than depend on having one element of the security system function flawlessly. In response to the Energy Policy Act of 2005, NRC has begun to address a potential security weakness we identified during our review. Specifically, we found that NRC did not require research reactor licensees to conduct extensive background checks on their staff with access to reactors. However, starting in 2006, NRC began requiring research reactor licensees to fingerprint staff with access to sensitive security information and subject them to a criminal history background check by the Federal Bureau of Investigation. Furthermore, in May 2007, NRC ordered research reactor licensees to subject all staff with unescorted access to reactors to this check. All of the NRC-licensed research reactors that we visited have detailed and coordinated emergency plans for responding to terrorist attacks, including the deployment of police, Special Weapons and Tactics (SWAT), fire, ambulance, and hazardous material personnel to the reactor facility.
In addition, most NRC-licensed research reactor licensees we visited have agreements with local law enforcement and other first responders for responses to emergencies. For example, the research reactor at the Massachusetts Institute of Technology has memorandums of understanding with the city of Cambridge’s Police Department, Fire Department, and Emergency Management Department and with Massachusetts General Hospital outlining cooperation in case of emergencies. However, we found weaknesses in two key areas of NRC emergency response plan requirements—evacuation planning and first response: Few Reactors Have Evacuation Planning. Evacuation planning is important because most NRC-licensed reactors are located in highly populated areas, with other buildings located near the reactor facility. For example, one NRC-licensed research reactor is located within 100 yards of a day-care facility, 300 yards of a university dormitory, and one-half mile of a stadium that holds more than 90,000 fans on game days during football season. NRC regulations for emergency plans require licensees to establish plans for coping with emergencies, but NRC does not require that these plans include evacuation plans for areas surrounding its licensed reactors. Instead, NRC requires licensees only to establish limited emergency planning zones, which vary in size depending on the size of the reactor. The acceptable emergency planning zone for reactors that NRC licenses to operate at 2 MW or less—that is, 30 of the 33 NRC-licensed research reactors—is limited to the grounds of the reactor facility; there are no evacuation plans for the areas surrounding the reactor.
Two other NRC-licensed research reactors—at the Massachusetts Institute of Technology and the University of Missouri, Columbia—must establish an emergency planning zone with possible evacuation of 100 meters surrounding the research reactor; the 20 MW National Institute of Standards and Technology reactor must establish an emergency planning zone of 400 meters. Some First Responders Are Not Armed. NRC regulations on emergency response require that licensees ensure that a watchman or off- site response force will respond to unauthorized entrance or activity at research reactors, but regulations do not require first responders for emergencies at research reactors to be armed. At most NRC-licensed reactors we visited, the designated first responders are armed. At a few reactors, however, unarmed campus police—not local law enforcement agencies—would be the first responders when alarms are set off. Such plans are likely to delay an armed police response. According to SNL security experts, the lack of a timely armed response increases the risk that a terrorist attack will be successful. NNSA has converted 8 currently operating U.S. research reactors from HEU to LEU fuel and has plans to convert 10 remaining reactors by 2014. However, NNSA will confront challenges in converting 5 of these 10 remaining research reactors because they cannot be converted with fuel that is currently available. According to NNSA and national laboratory officials, the schedule for fuel development is optimistic and further technical setbacks in fuel development would likely delay their research reactor conversion plans. Since 1978, when the reactor conversion program started, DOE has converted a total of 8 currently operating U.S. research reactors from HEU to LEU fuel. In 2004, we reported on the progress of the reactor conversion program and recommended, among other things, that NNSA place a higher priority on converting these reactors. 
In response to our recommendation, in 2006, NNSA converted 2 more operating U.S. research reactors from HEU to LEU fuel. NNSA plans to convert an additional 10 U.S. research reactors by 2014, including 5 that can convert with currently available fuel and 5 that cannot convert with currently available fuel. The 2 NRC-licensed research reactors that converted in 2006 were reactors at the University of Florida and Texas A&M University, which were converted at a cost of about $3 million and $7 million, respectively. These recent conversions represent the first U.S. conversions since 2000 and are part of NNSA’s expanded effort to convert research reactors worldwide. NNSA plans to convert the remaining 5 U.S. research reactors that can convert with currently available fuel by September 2009 at an estimated cost of $37 million (see table 2). NNSA has set a target date of 2014 for converting the five remaining HEU research reactors that cannot convert with currently available fuel. NNSA is now developing a new fuel that will allow the remaining five reactors to convert; according to an NNSA official, this new fuel must be developed by 2011 if NNSA is to meet its 2014 conversion schedule goal. We believe that the conversion schedule may be optimistic because developing this fuel has been problematic. For example, early efforts to develop the fuel experienced failures during testing that caused NNSA to push back anticipated completion dates from 2008 to 2010, and NNSA has since delayed the completion of the fuel until 2011. Argonne National Laboratory officials working on the fuel development effort at that time characterized the failures during testing as the worst they had ever experienced. According to NNSA officials and INL fuel development scientists, more recent attempts to develop new LEU fuel appear promising. 
In addition, a series of recent successful tests of the new fuel, including fuel fabrication and testing at the Advanced Test Reactor, is indicative of the potential to successfully develop the new LEU fuel. However, NNSA and national laboratory officials acknowledged that the fuel development schedule is optimistic and that further technical setbacks would likely delay DOE’s research reactor conversion plans. NNSA estimates that an additional $46 million will be needed to actually convert reactors once the fuel is available. This estimate is uncertain. If any further technical difficulties are experienced in the process of developing the new fuel, additional funding will be required for further fuel improvements, and the estimated conversion date will not be met. Table 3 outlines the schedule for converting the five research reactors that cannot convert with currently available fuel. The NRC-licensed nuclear research reactors located throughout the United States play an important role in education and basic scientific research. However, because most of these reactors are located on university campuses, they face unique challenges in both remaining accessible for educational purposes and providing enough security to protect neighboring communities from the potentially significant impacts of a terrorist attack. Understanding the consequences of a terrorist attack on these research reactors is critical to determining the level of security needed to protect them. To understand the consequences of an attack, NRC conducted a security assessment of its licensed reactors and concluded that the consequences would be minimal—having almost no effect on nearby areas. However, NRC’s security assessment may underestimate the potential consequences of an attack because it used assumptions and analyses about reactor security and terrorist capabilities that we believe are questionable.
Additionally, NRC’s conclusions are not supported by the findings of SNL, an INL reactor vulnerability expert, and a DHS expert. SNL found that a group of well-trained terrorists could gain access to a number of NRC-licensed research reactors. Moreover, INL and DHS experts believe that it is possible that a meaningful portion of a research reactor could be damaged in an attack. Such an attack could result in a radioactive release that is greater than NRC estimates in its assessment. Without an analysis that better reflects the full range of expert opinion on the security of reactors and the capabilities of potential terrorist forces, NRC will not have fully considered the risks posed by research reactors. NRC will also lack assurance that it has established security and emergency response plan requirements commensurate with the risks posed by attacks on its licensed research reactors. To better understand and prepare for the potential consequences of a terrorist attack on NRC-licensed research reactors, we recommended in our October 2007 classified report that the Chairman of NRC reassess the consequences of terrorist attacks on NRC-licensed research reactors using assumptions that better reflect a fuller range of outside expert opinion on the security of reactors and the capabilities of potential terrorist forces. If NRC finds that the consequences of an attack on a research reactor are more severe than previously estimated, we recommended that the Chairman of NRC take the following three actions: ensure that the security requirements for research reactors are commensurate with the consequences of attacks, reexamine emergency response requirements to address whether evacuation plans should be included, and require that first responders to alarms at research reactors be armed. We provided DOE, NNSA, and NRC with draft copies of our classified report for their review and comment. 
As discussed in our classified report, NNSA, whose comments also reflected DOE’s views, generally agreed with the report and provided minor technical comments, which we incorporated as appropriate in this unclassified report as well. NRC did not agree with the report and stated that the report provides an unbalanced assessment of its effort to enhance security at research reactors since September 11, 2001. NRC summarized its views in a separate unclassified letter, which we have included in appendix I, along with our comments. NRC criticized our report in four areas. First, NRC stated that the draft report misrepresented the effort it has made following September 11, 2001, to assess and enhance the security of research reactors; it also asserted that we compared security requirements for NRC-licensed research reactors with DOE-operated reactors and that the comparison is incomplete and inaccurate. Second, NRC stated that we misrepresented its use of the SNL security assessment and that we incorrectly stated that NRC had dismissed the findings in SNL’s assessment. Third, NRC asserted that our report misrepresented or excluded key facts. Finally, NRC believes that our assumptions concerning terrorist attack scenarios lack a sound technical basis. First, we disagree with NRC’s assertion that our report misrepresents the Commission’s efforts since September 11, 2001, to assess and enhance the security of research reactors. We accurately describe NRC’s active oversight actions, including routine inspections for safety and security. Furthermore, we give NRC credit for working with research reactor licensees to make, and to verify, many security improvements that NRC identified as necessary. We also discuss the many security features and improvements at NRC-licensed research reactors that we visited and note that several of the licensees have made security improvements that exceed NRC’s security requirements. 
Furthermore, contrary to NRC’s comments, our report does not compare security requirements for NRC-licensed and DOE-operated research reactors or actual security conditions at the reactors. Rather, our report discusses our findings on security requirements and their implementation at NRC-licensed and DOE-operated research reactors. Second, we disagree with NRC’s assertion that our report misrepresents NRC’s use of the SNL security assessment and that NRC dismissed SNL’s security assessment. Our report did not state that NRC “dismissed” the security assessment; instead, it accurately states that NRC “disagreed” with SNL about the security of research reactors. Furthermore, NRC itself has reiterated this disagreement with the SNL analysis in writing on several occasions. Specifically, when NRC provided us with copies of SNL’s security assessment, it also provided a disclaimer stating that NRC “does not support many of the assumptions and/or information contained in these reports and…the reports cannot be used independently to develop any conclusions regarding the security or protective measures for the facilities contained in the reports.” In addition, a 2005 statement by an NRC Commissioner concerning SNL’s work further supports our point that NRC disagreed with the SNL analysis. This Commissioner states, “because the Sandia security assessment reports contain scenarios and assumptions that are not supported by the Commission, the reports should not be released to anyone outside the agency nor should they be shared with licensees or stakeholders.” Continuing, this Commissioner states that SNL’s security reports “if taken out of context, could prove to be an enormous burden on NRC and our licensees and could result in a tremendous amount of time spent explaining why we think the Sandia analyses are deeply flawed.” Third, we disagree with NRC’s assertion that our report misrepresents or excludes key facts. 
In particular, NRC states that INL and SNL refute our characterization of key facts gathered from INL, federal agencies, and SNL to support our recommendations. With regard to INL, we did receive a letter from INL in June 2007 requesting that we not include or refer in any fashion to any INL technical judgments contained in the INL report. Later that month, we spoke with INL management about the reason for this request. As we state in our report, according to INL’s Deputy Associate Laboratory Director for National and Homeland Security Directorate, INL believes that a more comprehensive analysis of the vulnerability and consequences of attacks on research reactors is warranted. Nonetheless, this official stated that the INL analysis was technically accurate and INL’s vulnerability expert had done good work in preparing it. As a result of this discussion, we deleted from the report many of the specific details of this analysis, such as the specific estimates of radiological consequences, and instead provided only a short summary of the key findings of the analysis. Our report retains a statement from the INL analysis that a terrorist attack could produce “significant consequences” and have “high socio-economic impact” because INL officials emphasized this point during communications with us after we received INL’s June 2007 letter. Furthermore, in its comments, NRC states that INL requested that we exclude from our report references to information we obtained from verbal communications with INL experts. INL never asked us to exclude discussions we had during our visit to INL and subsequent discussions with INL officials. INL would have no basis to make such a request because representatives of INL management arranged our meetings with INL experts to gather the information and data needed to complete our work. 
With respect to SNL, in neither of two sets of written comments did SNL dispute our primary conclusion regarding its work for NRC—that some NRC-licensed research reactors may not be prepared for certain types of terrorist attacks—nor did SNL disagree with our main report recommendation. We received initial comments from SNL in July 2007 on an early version of our classified draft report. At that time, we revised our draft to acknowledge one of SNL’s key points—namely, that damaging a research reactor is a difficult and sophisticated task. However, we did not include further details of these initial comments because they were inconsistent with the information SNL had provided during extensive discussions over 2 days in November 2006. For example, in its July 2007 written comments, SNL provided information that demonstrated why this task is so difficult. However, during discussions with SNL’s expert, he noted that damaging a reactor was possible and provided us with very detailed steps of how to do so. These steps addressed many of the very limitations discussed in the July 2007 comments from SNL. Furthermore, a key finding of our report is that NRC disagreed with the SNL finding that some NRC-licensed research reactors may not be prepared for certain types of terrorist attacks. In its July 2007 comments, SNL did not address our characterization of the work it did for NRC. Finally, in subsequent comments provided in September 2007 as part of DOE’s technical comments, SNL expanded upon its earlier comments regarding the difficulty of sabotaging a research reactor, which we had already acknowledged in the report. In discussing this point, SNL stated that further study was needed on the extent to which terrorists could damage a research reactor. 
Regardless of the details of the work performed by INL and SNL, which we believe raise key concerns, one thing remains clear: there is a need for further study to better understand the risks and consequences of an attack on a research reactor by well-trained terrorists. Finally, NRC asserted that our assumptions regarding terrorist attack scenarios lack a sound technical basis. We disagree. Specifically, we note the following: The findings in our report do not rely on assumptions but, instead, are based on the evidence we collected from experts at NRC, DOE, INL, SNL, DHS, and other sources. This evidence demonstrates that there is uncertainty about some aspects of NRC’s security assessment. However, NRC’s comments suggest that no such uncertainty exists, even though in some cases NRC used assumptions in its security assessment that it had difficulty defending. For example, NRC officials did not fully consider an alternative attack scenario that could be more damaging if carried out successfully because, according to NRC officials, the supervisor of the staff doing the assessment was an engineer who instructed the staff that such scenarios were unlikely, if not impossible. During discussions on this point, an NRC official acknowledged that if the alternative attack scenario had been fully assessed, NRC’s security assessment might have demonstrated more significant consequences. NRC states that we incorrectly assumed that terrorists could use certain tactics in attacking research reactors since there is a lack of intelligence information that terrorists have demonstrated these capabilities. We disagree. The events of September 11, 2001, and the threats faced by our armed forces in Iraq demonstrate that terrorists are capable of innovating how they conduct attacks. 
Consequently, we believe that, in conducting its security assessment, NRC should have considered a fuller range of threats, including both the threats that have occurred and the possibility of emerging threats. NRC also disagreed with our characterization of (1) what portion of a reactor could be damaged in a terrorist attack and (2) the extent of the radiation released from such an attack. However, our evidence on these points was provided by experts at INL and DHS. As previously discussed, according to an INL vulnerability expert, a well-executed terrorist attack could damage a significant portion of a research reactor and release a larger amount of radioactivity into the neighboring communities than NRC estimates. On this point, INL’s Deputy Associate Laboratory Director for National and Homeland Security Directorate told us that additional analysis and study are warranted in order to gain a more comprehensive understanding of both how much of a reactor could be damaged in an attack and what the resulting radiological consequences would be. As agreed with your office, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Energy; the Administrator of NNSA; the Chairman of NRC; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
The following are GAO comments on the Nuclear Regulatory Commission’s (NRC) letter dated December 17, 2007. 1. We disagree. We accurately describe NRC’s oversight actions taken since September 2001, including its process of performing routine inspections for safety and security. Furthermore, we give NRC credit for working with research reactor licensees to make, and to verify, many security improvements that NRC identified as necessary. We also discuss the many security features and improvements at NRC-licensed research reactors that we visited, including security improvements that exceed NRC’s security requirements. 2. Our report does not misrepresent NRC’s use of Sandia National Laboratories’ (SNL) security assessment and did not state that NRC “dismissed” the security assessment. Instead, our report accurately states that NRC “disagreed” with SNL about the security of research reactors. On this point, NRC has reiterated its disagreement with the SNL analysis in writing several times. Specifically, when NRC provided us with copies of SNL’s security assessment, it also provided a disclaimer stating that NRC “does not support many of the assumptions and/or information contained in these reports and…the reports cannot be used independently to develop any conclusions regarding the security or protective measures for the facilities contained in the reports.” Furthermore, a 2005 statement from an NRC Commissioner concerning SNL’s work further supports our point that NRC disagreed with the SNL analysis. 
According to this Commissioner, “because the Sandia security assessment reports contain scenarios and assumptions that are not supported by the Commission, the reports should not be released to anyone outside the agency nor should they be shared with licensees or stakeholders.” He further states that SNL’s security reports “if taken out of context, could prove to be an enormous burden on NRC and our licensees and could result in a tremendous amount of time spent explaining why we think the Sandia analyses are deeply flawed.” 3. Contrary to NRC’s comments, our report does not compare security requirements for NRC-licensed and Department of Energy (DOE)-operated research reactors or actual security conditions at the reactors. In fact, we reported on DOE and NRC security issues in separate sections of the report. We did, however, compare one assumption regarding how each agency considered the role of insiders who may provide assistance to an attacking force. In our view, this was an important comparison to make because, in its assessment, NRC assumed that insiders with access to the reactor would only participate to a limited degree in an attack on a reactor. However, in similar security assessments for DOE facilities, DOE assumed that insiders would fully participate in an attack, and it has designed its defenses on the assumption of full participation. When we discussed this point with NRC officials, they acknowledged that if NRC’s assessment had assumed fully participating insiders, then the results of its assessment might have turned out differently. 4. Our report did not misrepresent or exclude key facts provided to us by Idaho National Laboratory (INL) and SNL. With regard to INL, we did receive a letter from INL in June 2007 requesting that we not include or refer in any fashion to any INL technical judgments contained in the INL report, and we subsequently spoke with INL management about the reason for this request. 
As our report states, according to INL’s Deputy Associate Laboratory Director for National and Homeland Security Directorate, INL believes that a more comprehensive analysis of the vulnerability and consequences of attacks on research reactors is warranted. Nonetheless, this official stated that the INL analysis was technically accurate and INL’s vulnerability expert had done good work in preparing it. As a result of this discussion, we deleted from the report many of the specific details of this analysis, such as the specific estimates of radiological consequences, and instead provided only a short summary of the key findings in the analysis. As we pointed out in our report, the INL analysis stated that a terrorist attack could produce “significant consequences” and have “high socio-economic impact.” We retained this statement because INL officials emphasized this point during communications with us after we received INL’s June 2007 letter. Furthermore, although NRC states that INL asked us to exclude references to information we obtained from verbal communications with INL experts, INL never made such a request to us. INL would have no basis to make such a request because representatives of INL management arranged our meetings with INL experts to gather the information and data needed to complete our work. With respect to SNL, in neither of two sets of written comments did SNL dispute our primary conclusion regarding its work for NRC—that some NRC-licensed research reactors may not be prepared for certain types of terrorist attacks—nor did SNL disagree with our main report recommendation. We received initial comments from SNL in July 2007 on an early version of our classified draft report and revised our draft to acknowledge one of SNL’s key points—namely, that damaging a research reactor is a difficult and sophisticated task. 
However, we did not include further details of the SNL comments because they were inconsistent with the information we received during extensive discussions with SNL experts over 2 days in November 2006. For example, in its July 2007 written comments, SNL provided information that demonstrated why this task is so difficult. However, during discussions with SNL’s expert, he noted that damaging a reactor was possible and provided us with very detailed steps of how to do so. These steps addressed many of the very limitations discussed in the July 2007 comments from SNL. Furthermore, as we also reported, NRC disagreed with the SNL finding that some NRC-licensed research reactors may not be prepared for certain types of terrorist attacks. In its July 2007 comments, SNL did not address our characterization of the work it did for NRC. Finally, in subsequent comments provided in September 2007 as part of DOE’s technical comments, SNL provided more detailed information on the difficulty of sabotaging a research reactor. Our report includes SNL’s view that attacking a research reactor would be a difficult task that would likely require specific knowledge of reactors and sabotage techniques. Nonetheless, SNL’s comments also acknowledge the need for further study on the extent to which terrorists could damage a research reactor. Regardless of the details of the work performed by INL and SNL, which we believe raise key concerns, one thing remains clear: there is a need for further study to better understand the risks and consequences of an attack on a research reactor by well-trained terrorists. 5. We disagree with NRC’s assertion that our assumptions regarding terrorist attack scenarios lack a sound technical basis. Specifically, we note the following: The findings in our report do not rely on assumptions but instead are based on the evidence we collected from experts at NRC, DOE, INL, SNL, DHS, and other sources. 
This evidence demonstrates uncertainty about some aspects of NRC’s security assessment. In contrast, NRC’s comments suggest that no such uncertainty exists, even though in some cases NRC used assumptions in its security assessment that it had difficulty defending. For example, NRC officials did not fully consider an alternative attack scenario that could be more damaging if carried out successfully because, according to NRC officials, the supervisor of the staff doing the assessment instructed the staff that such scenarios were unlikely, if not impossible. An NRC official acknowledged that if the alternative attack scenario had been fully assessed, NRC’s security assessment might have demonstrated more significant consequences. We disagree with NRC’s statement that we incorrectly assumed that terrorists could use certain tactics in attacking research reactors, since there is a lack of intelligence information that terrorists have demonstrated these capabilities. NRC’s security assessment did not address certain tactics that were raised as a concern in its own intelligence documents. Furthermore, as the events of September 11, 2001, and the threats faced by our armed forces in Iraq have shown, terrorists are capable of innovating how they conduct attacks. Consequently, we believe that, in conducting its security assessment, NRC should have considered a fuller range of threats, including both the threats that have occurred and the possibility of emerging threats. We stand by the evidence provided by INL and DHS experts regarding the portion of a reactor that could be damaged in a terrorist attack and the extent of the radiation that could be released from such an attack. As previously discussed, according to an INL vulnerability expert, a well-executed terrorist attack could damage a significant portion of a research reactor and lead to the release of a larger amount of radioactivity into the neighboring communities than NRC estimates. 
On this point, INL’s Deputy Associate Laboratory Director for National and Homeland Security Directorate told us that more analysis and study are warranted to gain a more comprehensive understanding of both how much of a reactor could be damaged in an attack and what the resulting radiological consequences would be. 6. This comment refers to a classified report Los Alamos National Laboratory (LANL) issued in 1989. That report discussed the potential for and limitations of a certain type of terrorist attack on research reactors that is discussed more fully in our classified report. The scenario addressed in the LANL report was similar to the type of attack identified in the INL analysis. (Because the LANL report is classified, its details are discussed only in our classified report and are not included here.) However, we note that the LANL report was completed more than 15 years ago at a time when the United States faced different and less severe potential threats. In our view, the LANL study, when combined with the views of INL and DHS experts, demonstrates that there is some uncertainty within the community of reactor experts on the consequences of certain types of attacks on research reactors. This uncertainty provides the basis for our recommendation that NRC reconsider its security assessment. In addition to the contact named above, John Delicath, Doreen S. Feldman, Eugene Gray, Keith Rhodes, Ray Rodriguez, Peter Ruedel, Rebecca Shea, Carol Herrnstadt Shulman, Ned Woodward, and Franklyn Yao made key contributions to this report.

There are 37 research reactors in the United States, mostly located on college campuses. Of these, 33 reactors are licensed and regulated by the Nuclear Regulatory Commission (NRC). Four are operated by the Department of Energy (DOE) and are located at three national laboratories. 
Although less powerful than commercial nuclear power reactors, research reactors may still be attractive targets for terrorists. As requested, GAO examined the (1) basis on which DOE and NRC established the security and emergency response requirements for DOE and NRC-licensed research reactors and (2) progress that the National Nuclear Security Administration (NNSA) has made in converting U.S. research reactors that use highly enriched uranium (HEU) to low enriched uranium (LEU) fuel. This report summarizes the findings of GAO's classified report on the security of research reactors (GAO-08-156C). DOE developed the security and emergency response requirements for its research reactors using its Design Basis Threat--a process that establishes a baseline threat for which minimum security measures should be developed. These research reactors benefit from the greater security required for the national laboratories where they are located, which store weapons-usable nuclear materials. DOE also has concluded that the consequences of an attack at some of its research reactors could be severe, causing radioactivity to be dispersed over many square miles and requiring the evacuation of nearby areas. As a result, all facilities where DOE reactors are located have extensive plans and procedures for responding to security incidents. NRC based its security and emergency response requirements largely on the regulations it had in place before September 2001. NRC decided that the security assessment it conducted between 2003 and 2006 showed that these requirements were sufficient. While it was conducting this assessment, NRC worked with licensees to improve security when weaknesses were detected. However, GAO found that NRC's assessment contains questionable assumptions that create uncertainty about whether the assessment reflects the full range of security risks and potential consequences of attacks on research reactors. 
For example, Sandia National Laboratories (SNL)--a contractor NRC used to assist in performing its assessment--found that some NRC-licensed research reactors may not be prepared for certain types of attacks. However, NRC disagreed with SNL's finding. In 2006, NRC concluded that the consequences of attacks would result in minimal radiological exposure to the public. In addition, NRC assumed that terrorists would use certain tactics in attacking a reactor but did not fully consider alternative attack scenarios that could be more damaging. Finally, NRC assumed that a small part of a reactor could be damaged in an attack, resulting in the release of only a small amount of radioactivity. However, according to experts at Idaho National Laboratory and the Department of Homeland Security, it is possible that a larger part of a reactor could be damaged, which could result in the release of larger amounts of radioactivity. NNSA has made progress in changing from HEU to LEU fuel in U.S. research reactors but may face difficulty in converting some of the remaining research reactors. Since 1978, NNSA has converted eight currently operating U.S. research reactors, including two in 2006. In addition, NNSA plans to convert 10 more U.S. research reactors by September 2014--five of which are scheduled for conversion by 2009. However, NNSA faces difficulties in converting the remaining five reactors because these reactors cannot operate with the currently available LEU fuel. NNSA is now developing a new LEU fuel that will allow the remaining five reactors to operate. However, according to NNSA, developing this fuel has been problematic, as early efforts experienced failures during testing. NNSA officials acknowledged that further setbacks are likely to delay plans to convert these research reactors.
State and local governments generally have the primary responsibility for disaster recovery while the federal government provides support when requested. Because there are many parties involved in this process—including all levels of government as well as victims and businesses within the affected communities—effective collaboration is a key factor for successful recovery. In addition, collaboration among recovery stakeholders can continue for an extended period of time. Short-term recovery is immediate and an extension of the response phase in which basic services are restored. Long-term recovery can include some of these short-term activities, but typically continues them for a number of months or years, depending on the severity and extent of the damage sustained. It also involves restoration of both individuals and the community, including the redevelopment of damaged areas. To provide recovery assistance after a disaster, many federal agencies and program components are called upon to administer disaster supplemental programs and funding, reprogram funds, or expedite normal procedures. For example, grants, loans, loan guarantees, temporary housing, and counseling are among the forms of disaster assistance available from federal agencies including FEMA; the departments of Agriculture, Commerce, HUD, Treasury, and Transportation; and the Small Business Administration (SBA). Some of these federal programs provide financial resources to state and local governments following disasters, while others provide technical assistance. 
For example, FEMA’s Public Assistance grant program provides funding to repair or replace public infrastructure; HUD’s Community Development Block Grant (CDBG) program provides formula grants for long-term recovery needs, such as rehabilitating and building housing; the Economic Development Administration’s (EDA) economic adjustment grant responds to the short- and long-term effects of severe economic dislocation events on communities; and DHS’s Flood Insurance Program enables individuals to purchase insurance against losses from physical damage from floods. Other agencies directly carry out rebuilding or recovery projects, such as the reconstruction of levees by the U.S. Army Corps of Engineers and the repair of federal roads by the Federal Highway Administration. Federal recovery assistance is also provided directly to disaster victims. For example, FEMA’s Individuals and Households Program provides housing, financial assistance, and other direct services while the Internal Revenue Service provides information about how to claim casualty loss deductions. The federal government also provides technical assistance for communities to engage in long-term community recovery activities, through the Emergency Support Function #14 (ESF #14), as part of the National Response Framework. ESF #14 coordinates federal and state long-term community recovery support and helps communities plan for and identify the necessary resources for recovery. Developed shortly before the 2005 Gulf Coast hurricanes, ESF #14 was not in place at the time of the five past disasters we studied. ESF #14 and FEMA’s Long-Term Community Recovery Branch in its Disaster Assistance Directorate, which supports this annex, provide assistance in coordinating federal, state, and local recovery efforts and developing community recovery plans. The Long-Term Community Recovery Branch also works with other federal agencies to help identify program gaps and the potential need for flexibilities and new authorities during the recovery process. 
Our previous work defines collaboration broadly as any joint activity that is intended to provide more public value than could be produced when the organizations act alone. Because of the large number and wide variety of stakeholders involved in the recovery from a catastrophic event, collaboration is a critical element of this process. We have previously reported that agencies can enhance and sustain their collaborative efforts by engaging in eight practices: defining and articulating a common outcome; establishing mutually reinforcing or joint strategies; identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; developing mechanisms to monitor, evaluate, and report on results; reinforcing agency accountability for collaborative efforts through agency plans and reports; and reinforcing individual accountability for collaborative efforts through performance management systems. Effective collaboration among recovery stakeholders can play a key role in facilitating disaster recovery. Because the recovery process requires partnerships among representatives from all levels of government as well as nongovernmental groups, effective collaboration is critical. We have previously identified a number of practices that can enhance and sustain collaborative efforts, which would help to facilitate disaster recovery. We found four of these collaborative practices in the past disasters we reviewed. Specifically, governments (1) developed and communicated common recovery goals; (2) leveraged resources to facilitate recovery; (3) used recovery plans to agree on roles and responsibilities; and (4) evaluated and reported on progress made toward recovery. To overcome significant differences in missions, cultures, and established ways of doing business, collaborating groups must have a clear and compelling rationale for working together. 
We have previously reported that the compelling rationale for collaboration can be imposed externally, such as through legislation, or can come from the understanding that there are benefits to working together. In either case, collaborative efforts require staff working across organizational lines to define and articulate a common outcome or purpose they are seeking to achieve that is consistent with their respective organizational goals and missions. In our September 2008 report on disaster recovery, we discussed the importance of recovery plans and how clearly identified goals in such plans can provide direction and specific objectives for communities to focus on. Building on this, we describe two examples of how stakeholders involved in the recovery process following the Kobe earthquake in Japan and the Grand Forks/Red River flood in Grand Forks, North Dakota, worked collectively to define and articulate common outcomes. A month after the 1995 Kobe earthquake, the national Japanese government formed a “reconstruction committee” to organize recovery efforts. The Japanese government created this body through national legislation that required the participation of numerous national, prefectural, and local agencies as well as nongovernmental organizations, such as the Kobe Chamber of Commerce and Industry. The Prime Minister personally managed the committee, and the Chief Cabinet Secretary and Minister of the National Land Agency served as deputy managers. The reconstruction committee also included representation from other high-ranking government officials—including cabinet ministers, the governor of Hyogo prefecture, and the mayor of the city of Kobe—as well as participants from academia. 
According to an official who participated in this committee, the involvement of these prominent leaders not only encouraged stakeholders in the reconstruction committee to collaborate and come to agreement on recovery goals but also brought national attention to recovery issues. Working together through this committee, these officials and stakeholders collaborated to create a national plan of action for recovery. This plan included broad proposals that provided insight into how the national government would assist in recovery, such as recommending that a long-term recovery plan be developed quickly and making housing reconstruction, debris removal, port reconstruction, and job creation a national priority. It also included more specific details to guide Hyogo prefecture’s and the city of Kobe’s recovery, such as promptly demolishing unsound structures and using excess concrete from the earthquake rubble for construction and repairs in the port area. In addition to providing an action plan, this committee also reviewed Hyogo prefecture’s and the city of Kobe’s recovery plans to help localities align their recovery proposals with the funding priorities of the national government. According to an evaluation of the recovery conducted by the city as well as outside recovery experts, the specific feedback provided by the reconstruction committee, along with the recovery goals previously clarified by the national government, helped local officials come to consensus on their recovery goals. Within 6 months of the earthquake, Hyogo prefecture and the city of Kobe completed recovery plans, which included specific recovery goals for their regions, such as rebuilding damaged housing units in 3 years and completing physical recovery in 10 years. According to this evaluation, the delineation of these goals at the local level played a critical role in helping to coordinate the wide range of participants involved in implementing recovery projects. 
After the Grand Forks/Red River flood in 1997, federal, state, and local officials worked together to define common goals when planning for the recovery of Grand Forks, North Dakota. Technical consultants, funded by a HUD grant, brought together federal and city officials as well as members of the community to discuss Grand Forks’s rebuilding priorities. According to a local official, because the city had no experience with developing common goals prior to the flood, this external facilitation helped the Grand Forks community and city officials come to agreement on a set of common recovery goals. The recovery goals resulting from these meetings were included in a comprehensive recovery plan for Grand Forks. A subsequent city evaluation found that the process of specifying goals within the recovery plan—which identified five broad goals and a number of supporting objectives and tasks to achieve those goals—helped the city to conceive and formulate projects in coordination with the city council and representatives from state and local governments. We have previously reported that effective collaboration requires identifying the human and financial resources needed to initiate or sustain the collaborative effort. In doing so, collaborating groups can bring different levels of resources and capacities to the task at hand. In our September 2008 report, we discussed the importance of helping state and local governments take advantage of all available disaster assistance by enhancing their financial and technical capacity when needed. Following the Kobe and Northridge earthquakes, we found examples of how governments leveraged the knowledge and expertise of diverse stakeholders to produce effective collaboration and, in turn, facilitate the recovery process. 
In the wake of the 1995 Kobe earthquake, the Japanese government created a formal organization through which human capital resources from all levels of government were leveraged to plan for and implement recovery strategies. A committee composed of high-ranking officials—including members of the Japanese House of Representatives and leaders of affected jurisdictions and their staff—developed intergovernmental recovery strategies. In addition to those high-ranking officials, the committee also included working-level staff from national ministries to provide expertise for developing specific details to be included in the recovery plan. For example, staff from the Ministry of Transportation brought expertise on infrastructure replacement, while those from the Kobe Chamber of Commerce and Industry contributed knowledge regarding economic recovery matters. According to a Japanese official involved in the recovery, this committee combined the political know-how of top-level officials with the interdisciplinary expertise of line-level bureaucrats to generate numerous recovery proposals, which laid a foundation for the national government’s approach to recovery. The Japanese government also leveraged human capital expertise through this committee to facilitate the implementation of recovery strategies. Upon the approval of certain recovery policies, working staff associated with the committee returned to their respective organizations to guide their home departments on how best to implement the strategies. A Japanese official involved in the committee said that this collaboration helped to ensure that disparate ministries understood and properly implemented the recovery strategies they had helped to develop. After the 1994 Northridge earthquake, the city of Los Angeles, California, also leveraged human capital resources to accelerate the rebuilding of its freeway system. 
Using the technical expertise of staff from the Federal Highway Administration (FHWA) and the California Department of Transportation (CalTrans), the city of Los Angeles developed an expedited contracting process. To review construction proposals more efficiently, FHWA and CalTrans staff collaborated to review documents, discuss needed changes, and then approve projects together in one location. According to CalTrans officials, state and federal offices normally conduct separate reviews. This joint process helped to expedite the approval of projects while still meeting oversight requirements for both levels of government. Under standard contracting procedures, the contracting process could take 26 to 40 weeks to complete. However, this collaborative, co-located process enabled state highway officials to advertise and award construction contracts in just 3 to 5 days. By leveraging the knowledge and resources of state and federal staff in this way, Los Angeles successfully restored its highways within a few months of the Northridge earthquake. In addition to leveraging human capital expertise, Los Angeles found ways to take advantage of resources from different federal programs to facilitate housing recovery for certain disaster victims. The city faced challenges in helping owners of housing units that had suffered extensive damage in the earthquake. When Los Angeles learned that some of these dwellings were not eligible for SBA disaster assistance because they had negative cash flows, the city identified resources available from a HUD program to help these property owners. Using these funds, the city allocated $322 million to an Earthquake Supplemental Disaster Relief fund, which assisted property owners who were declined by SBA. 
To obtain information on owners who might benefit from this program, the city entered into a cooperative agreement with SBA to obtain direct referrals of individuals who were denied loans so that the city could inform them of this additional source of assistance. A city evaluation of this program found that Los Angeles received over 5,000 referrals, which represented more than 22,000 housing units. Collaborating organizations can work together to define and agree on their respective roles and responsibilities. In doing so, they can collectively agree on who will do what, organize joint and individual efforts, and facilitate decision making. One way to delineate roles and responsibilities for disasters is through planning. For the emergency response phase, the National Response Framework sets out the roles and responsibilities of key partners at the local, tribal, state, and federal levels. Responsibilities for recovery stakeholders are detailed in ESF #14, the Long-Term Recovery Annex. The annex mostly addresses the responsibilities of federal agencies involved in recovery. Because state and local governments play a lead role in disaster recovery, it is also important for their roles and responsibilities to be clearly delineated. After past disasters, this information has been delineated through long-term community recovery plans. Communities can develop such plans either before or after a disaster occurs. Post-disaster recovery plans typically include detailed projects and approaches to rebuild a community based on the damage and impacts of the specific disasters. Some communities have supplemented post-disaster plans by conducting planning efforts prior to a disaster. Pre-disaster planning does not involve actually developing rebuilding programs in advance of a disaster because the patterns of damage from natural disasters are impossible to predict with sufficient accuracy to support detailed pre-planning. 
However, these plans can be helpful in other ways that foster collaboration, specifically in defining the roles and responsibilities of recovery stakeholders prior to a disaster. We have previously reported how effective recovery plans identify specific roles and responsibilities among various stakeholders. While these plans are often developed after a disaster takes place, we have identified some instances where this information was clarified beforehand. Los Angeles’s Recovery and Reconstruction Plan clearly identified the roles and responsibilities of key officials involved in recovery. In the aftermath of the Northridge earthquake in southern California, the city revised the plan for the purposes of recovery from that event. Specifically, the plan identified which city departments have responsibility for implementing pre-determined activities before and after a disaster in several functional categories, including residential, commercial, and industrial rehabilitation as well as economic recovery. An evaluation of the plan funded by the National Science Foundation found that the assignment of general responsibilities to the departments was useful because it helped the various components of city government to understand their post-disaster roles and responsibilities. Further, the process of developing the plan also improved collaboration among stakeholders. Specifically, representatives from many departments—including public safety, planning, public works, building, and community redevelopment—met several times to develop and revise the plan. A good plan is not simply a paper-driven exercise, but rather the result of a dynamic and inclusive process wherein key stakeholders are consulted and involved in the identification of priorities and the formation of strategies. Collaboration among recovery stakeholders was further enhanced through long-term recovery planning exercises held by the city of Los Angeles. 
In these exercises, police and fire officials engaged in role playing in which they assumed the responsibilities of recovery officials. For example, a public safety officer played the role of a building inspector responsible for issuing building permits after an earthquake. A city official at the time of the earthquake told us that the process of developing the plan and conducting exercises was an important part of building relationships among stakeholders, which facilitated collaboration among city officials after the Northridge earthquake. According to a federally funded evaluation of this plan, the contacts established during the planning process facilitated the recovery after the Northridge earthquake. Another city official stated that a positive outcome of the planning effort was that participants knew of others who worked on similar issues with whom they could initiate conversations. In addition, the process of preparing and testing the plan educated city staff on their post-disaster roles and responsibilities. More recently, two other communities have taken action to develop recovery plans prior to a disaster that identify roles and responsibilities for recovery. In the San Francisco Bay Area, state and local governments used pre-disaster planning to reinforce a regional approach to recovery as well as to assign regional roles and responsibilities for recovery. Learning from past experiences with natural disasters in California, including the Loma Prieta earthquake, the Bay Area recognized the value of planning for recovery in anticipation of future disasters. Toward that end, Bay Area officials initiated a regional disaster response planning effort in 2004, culminating with the Regional Emergency Coordination Plan in March 2008, which included a subsidiary plan focused specifically on recovery. 
Specifically for recovery, the San Francisco Bay Area Regional Emergency Coordination Plan summarizes in a table the organizations involved at each level of government and the primary role of each. For example, the table specifies that local governments will resume government functions and request state and federal assistance, that state agencies will implement state-funded recovery programs, and that regional infrastructure owners will initiate planning for and implementation of permanent repairs. We have previously reported on the challenges that state and local jurisdictions sometimes face in understanding the extent to which the federal government will pay for disaster-related costs. Pre-disaster recovery plans that clearly identify the roles and responsibilities of various stakeholders may prove useful in clarifying the specific types of costs federal programs are likely to cover, as well as some of the requirements of these programs, before a disaster strikes. Partly as a result of experiences following Hurricane Andrew, Florida’s Palm Beach County developed the Palm Beach Countywide Post-Disaster Redevelopment Plan, which guides decision making and action during the disaster period and details actions that can be taken before a disaster strikes to speed the recovery process. The plan delineates roles and responsibilities for recovery by creating working groups that are responsible for implementing different sections of the plan, including infrastructure, economic development, and government operations. Each working group is assigned several issues to cover along with a chairperson to spearhead those activities for the county. Additionally, city departments and agencies are represented in each of these working groups. As an outgrowth of this plan, a Business and Industry program was created that formally integrated business interests into the recovery process. 
Additionally, the program also created a private-public partnership comprising local, state, regional, and national businesses as well as governmental and nongovernmental organizations. According to a Palm Beach County official, partners in this program are fully engaged in the development and implementation of recovery initiatives. These collaboration efforts have resulted in improved relationships among the governmental, nongovernmental, and business entities involved in the program. Post-disaster recovery plans can also provide a vehicle to designate roles and responsibilities for recovery, among other things. We have previously reported that well-crafted post-disaster recovery plans can clarify roles and responsibilities and help jurisdictions make progress with recovery. For example, the city of Grand Forks’s recovery plan developed in the wake of the 1997 Grand Forks/Red River flood clearly identified which personnel—drawn from city, state, and federal agencies—would be needed to carry out each task. Specifically, the plan called for collaboration of staff from the city’s urban development and engineering/building inspection departments, FEMA, and the U.S. Army Corps of Engineers to create an inventory of substantially damaged buildings in the downtown area. By clarifying the roles and responsibilities of those who would be involved in accomplishing specific tasks, the plan provided detailed information to facilitate its implementation. Organizations engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. Reporting on these activities can help decision makers, clients, and stakeholders obtain feedback for improving both policy and operational effectiveness. We have previously reported that effective recovery plans identify clear goals that can provide governments with a basis for subsequent evaluations of the recovery progress. 
Next, we describe how local jurisdictions impacted by the Kobe earthquake established a process through which government officials, community members, and recovery experts worked together to assess recovery progress and recommend improvements. Hyogo prefecture and the city of Kobe established a system of periodic recovery assessments in the wake of the 1995 Kobe earthquake in Japan. Both governments designed a two-phase approach to evaluating the progress they had made toward recovery, the first phase taking place about 5 years after the earthquake and the second about 10 years afterward. This design allowed for both a short- and long-term assessment of the recovery. Although the Hyogo and Kobe governments funded these evaluations, neither prefecture nor city employees were directly involved in conducting the assessments; rather, both governments used external staff to perform the reviews. Hyogo prefecture invited domestic and international disaster recovery experts to serve on its evaluation panels, while the city of Kobe staffed its reviews with members of local community groups. These evaluations focused on the goals established in the recovery plans approved by the national government 6 months after the earthquake. They enabled policy makers to measure the progress made by various stakeholders in achieving recovery goals, identify needed changes to existing policies, and draw lessons for future disasters. The panels examined six broad recovery topics—including health, industry, employment, and urban development—which resulted in many recommendations to improve recovery from the Kobe earthquake. For example, as a result of its 10-year evaluation, Hyogo prefecture gained insight into the unintended consequences of how it relocated elderly earthquake victims, which subsequently led to a change in policy. 
After the earthquake, the prefecture gave priority to the relocation of elderly victims and grouped them together in special care residences located outside the city. While this policy ensured that this vulnerable population received housing quickly, it also had the unintended effect of isolating the relocated seniors, who were removed from their communities. In fact, the verification committee attributed the untimely deaths of some seniors to this housing arrangement. After learning of this finding, the prefecture built new types of residential housing that offer comprehensive lifestyle support for seniors. In addition, for future disasters the prefecture plans to develop a system to track displaced populations as they move from temporary to permanent housing to help maintain better contact with victims. Recovery experiences from past catastrophes—including good collaboration practices—can offer lessons for such events in the future. FEMA has taken some actions to encourage recovery stakeholders to collaborate by sharing lessons and experiences related to recovery. However, while FEMA has specific mechanisms dedicated to sharing such information for the other phases of a disaster, it does not have one for the recovery phase. FEMA has taken steps to support collaboration through planning and sharing recovery lessons. FEMA has assisted state and local governments in developing post-disaster recovery plans in various ways, which in turn can help facilitate collaboration among stakeholders. First, FEMA, along with other federal agencies such as HUD and EDA, provided technical assistance for post-disaster recovery plans for several of the disasters we reviewed. Second, FEMA developed guidance for conducting the long-term recovery planning process. More specifically, the agency created a Long-Term Community Recovery Self Help Guide that offers communities step-by-step guidance for implementing a recovery program and planning process. 
Third, FEMA created the Long-Term Recovery Assessment Tool to help communities analyze the impacts of a disaster while taking into consideration the local government’s capacity to assist in promoting its own long-term recovery. The assessment tool helps federal and other decision makers identify the type and level of supplemental long-term community recovery assistance that may be needed for full recovery from a disaster. The tool also includes processes and procedures for assessing long-term recovery needs, community evaluation protocols, standard planning templates, staffing strategies, and timetables for various levels of effort. FEMA has also taken actions to encourage collaboration among state and local officials to share experiences and expertise related to disaster recovery. For example, FEMA’s Long-Term Community Recovery Branch, working through ESF #14, hosted a teleconference linking officials in Florida, Mississippi, Colorado, and Iowa with experience recovering from previous disasters to provide information to officials in Texas recovering from Hurricane Ike. In this way, officials with direct experience in the recovery process were able to share good practices related to recovery planning, disaster funds administration, and coordinating regional efforts with the participants from Texas. According to FEMA officials, this collaboration helped the Texas officials identify recovery projects and develop a community recovery plan. In addition, FEMA is considering ways to further facilitate the sharing of lessons learned for disaster recovery, including creating a peer-to-peer mentoring program in which experienced local officials can provide technical assistance, advice, and support to communities impacted by the 2005 Gulf Coast hurricanes. However, these officials told us that this idea is still at an early stage and additional specifics are not yet available. FEMA’s information sharing Web sites do not include a focus on recovery. 
FEMA has systematic approaches for sharing lessons regarding three of the four phases of a disaster—preparedness, response, and mitigation; however, as of June 2009, the agency does not have an information sharing system focused on recovery. Officials involved in the preparedness and response phases of a disaster can share lessons through FEMA’s Lessons Learned Information Sharing (LLIS) Web site. LLIS is a national online network of lessons learned and best practices for the emergency preparedness, response, and homeland security communities. Online since April 2004, LLIS provides users access to over 12,000 documents including state and local plans, after-action reports, best practices, and lessons learned that are culled from real-world experiences and exercises. Because the Web site includes some sensitive information, its registration process is limited to domestic users with “a need to know.” Information on LLIS is organized through a number of “featured topics,” such as critical infrastructure, exercise planning and program management, and wildland fires. Materials are also organized by numerous “disciplines” that have an emergency management focus, such as emergency communications, mass care and human services, mortuary services, and search and rescue. While LLIS does contain materials relating to recovery, the issue is neither a featured topic nor a discipline, making it challenging to access recovery-related information in an easy or intuitive way. Additionally, the message boards that allow LLIS users to discuss a variety of homeland security topics are rarely used to exchange information about recovery. For example, while almost 600 messages had been exchanged on forums discussing preparedness and response issues as of June 2009, only two messages had been posted to the message board focused on disaster recovery (and they were both from the same individual). 
A FEMA official with responsibility for LLIS told us that there is an increasing recognition that recovery is an underserved area of disaster management, and the agency can see benefits of potentially including more information about recovery in LLIS. To share lessons related to the mitigation phase, FEMA has created a searchable online portfolio of case studies and best practices submitted by individuals and communities describing the measures they have taken to reduce the loss of life or property from future disasters. Communities that have taken creative steps in implementing good mitigation practices can submit those stories to FEMA, where officials review them and may include them in the online best practices portfolio. In contrast to the information sharing mechanisms it has in place for the preparedness, response, and mitigation phases of a disaster, FEMA does not have a comparable approach for sharing lessons focused on recovery. Recovery lessons from specific disasters are sometimes available through the FEMA Web site under the listings for specific disasters, although the amount and nature of recovery information available this way varies greatly. In addition, the Long-Term Community Recovery and ESF #14 Web site contains considerable information on recovery, but this site is mostly dedicated to providing technical guidance for planning and does not permit recovery officials to share lessons or learn about recovery best practices. FEMA officials told us that they plan to develop a document compiling community-based best practices for disaster recovery, but they do not know when it will be available. Perhaps more useful than the sharing of reports and other written accounts of recovery lessons and experiences is the ability to directly network with other recovery officials who can answer questions and relate insights first-hand. In the course of our work, we learned of instances where this type of personal connection was particularly valuable. 
For example, a Watsonville, California, official told us that immediately after the Loma Prieta earthquake he contacted a local official in southern California to solicit guidance, because he had read that the official had experienced an earthquake a few years earlier. The official from southern California agreed to help and traveled to Watsonville the next day to share his experiences and provide insights on potential recovery strategies. In another example, when Hurricane Katrina hit the Gulf Coast in 2005, officials from Grand Forks, North Dakota, offered to help city leaders in Biloxi, Mississippi, based on their experiences with the 1997 Grand Forks/Red River flood. Through one-on-one exchanges like these, state and local officials involved in recovery can obtain tailored advice from individuals who have addressed similar challenges themselves. For emergency managers involved in the disaster preparedness and response phases, FEMA’s LLIS Web site has a network-building feature that can be used to foster this type of exchange. LLIS provides its users with access to a directory of other registered users that can be searched by a number of variables, including name, affiliation, and emergency management function (primarily disciplines such as mass care and human services or public health). The online directory mostly consists of officials and researchers involved in various aspects of emergency management. Such a directory, or one similar to it, might be very useful to recovery officials seeking to network with, and learn from, others with experience or expertise in disaster recovery. Collaboration is essential for an effective partnership among the wide range of participants involved in the disaster recovery process. While effective collaboration has helped to facilitate recovery in past disasters, experiences from the 2005 Gulf Coast hurricanes reveal that more can be done in this area. 
Specifically, we have identified a number of practices used during past disasters that can offer insights for effective collaboration: developing and communicating common goals to guide recovery; leveraging resources to facilitate recovery; using recovery plans to agree on roles and responsibilities; and monitoring, evaluating, and reporting on progress made toward recovery. While there is no single right way for jurisdictions to manage recovery, nor a recipe of techniques that fits all situations, the examples we describe in this report—which were tailored to the specific needs and conditions of those particular disasters—may provide insights into improving collaboration among the many stakeholders involved in the ongoing recovery efforts in the Gulf Coast as well as for future catastrophic events. Recovery stakeholders share responsibility for fostering collaboration during disaster recovery. State and local governments have taken the lead in defining roles and responsibilities within pre- and post-disaster recovery plans, a step that has helped to facilitate the recovery process. The federal government has also played an important role in fostering collaboration for recovery. For example, FEMA has supported post-disaster planning efforts and hosted teleconferences between experienced officials and those currently engaged in the recovery process. However, the agency can take additional steps to share information focused on recovery so that it is captured and preserved for the future. In the absence of a mechanism for compiling and disseminating recovery information, valuable expertise from officials who have first-hand recovery knowledge may be lost. 
To improve the ability of the federal government to capture and disseminate recovery information, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to establish a mechanism for sharing information and best practices focused on disaster recovery, including practices that promote effective collaboration such as those discussed in this report. Options for doing this could include (1) creating an approach, similar to the LLIS Web site or the mitigation best practices portfolio, through which disaster recovery lessons can be compiled and shared, and personal networks among interested recovery officials encouraged; and/or (2) modifying the LLIS Web site to add a focus on recovery by taking steps such as including more recovery documents, creating a recovery topic area within LLIS, and creating an online directory for recovery officials to encourage networking and facilitate further sharing of recovery experiences. On June 19, 2009, we provided a draft of this report to the Secretary of Homeland Security for comment. We received written comments on July 22, 2009. In its written comments, which are reprinted in appendix VII, DHS concurred with our recommendation. In addition, the department provided technical clarifications that we incorporated where appropriate. We also provided drafts of relevant sections to public officials, nongovernmental stakeholders, and recovery experts involved in or knowledgeable of the specific examples cited in this report and incorporated their comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies of this report to the Secretaries of Homeland Security and Housing and Urban Development, the FEMA Administrator, the Assistant Secretary of Commerce for Economic Development, and the state and local officials we contacted for this review. 
In addition, the report will be available on our Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6806 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. To identify recovery lessons from past experiences, we selected 5 catastrophic disasters to review: the 1989 Loma Prieta earthquake, Hurricane Andrew in 1992, the 1994 Northridge earthquake, the 1995 Great Hanshin-Awaji (Kobe) earthquake, and the 1997 Grand Forks/Red River flood (see fig. 1). The Federal Emergency Management Agency (FEMA) defines a "catastrophic" event as one where the related federal costs reach or exceed $500 million. Under this definition, all the disasters selected for this review qualify as catastrophic. We chose these disasters because they had devastating communitywide or regional impact and occurred in urban areas of developed nations. Additionally, these disasters occurred far enough in the past that we could observe the long-term recovery process, occurred recently enough so that key officials and supporting documentation are still available, and represent different types of natural disasters. We interviewed officials from national, state, and local governments and nongovernmental organizations, as well as academic experts, involved in or knowledgeable of the recovery following each of our selected disasters. We also obtained and reviewed legislation, ordinances, policies, and program documents that described steps that were taken to facilitate long-term recovery following each of these disasters as well as the disaster recovery literature. In some instances, our review was limited by the availability of historic documents and the accessibility of key officials engaged in recovery from past disasters.
To better understand the federal government’s role in recovery from these disasters, we interviewed officials at the Department of Homeland Security, FEMA, the Economic Development Administration in the Department of Commerce, and the Department of Housing and Urban Development. We visited the key communities impacted by four of the five disasters in our study to meet officials involved in the recovery effort and examine current conditions. Although we did not visit communities affected by the 1997 Grand Forks/Red River flood, we were able to gather the necessary information through telephone interviews with key officials involved in the recovery as well as recovery experts knowledgeable about the disaster. The scope of our work did not include independent evaluation or verification regarding the extent to which the communities’ recovery efforts were successful, and the practices we discuss in this report only represent a selection of the many recovery actions taken after these disasters. To identify examples of good collaboration among recovery stakeholders, we applied eight key practices we have reported on in prior work that enhance and sustain collaboration: (1) define and articulate a common outcome; (2) establish mutually reinforcing or joint strategies; (3) identify and address needs by leveraging resources; (4) agree on roles and responsibilities; (5) establish compatible policies, procedures, and other means to operate across agency boundaries; (6) develop mechanisms to monitor, evaluate, and report on results; (7) reinforce agency accountability for collaborative efforts through agency plans and reports; and (8) reinforce individual accountability for collaborative efforts through performance management systems. We used this framework to assess the ways in which recovery stakeholders collaborated in the five disasters included in our review.
While we found examples related to four of these good collaborative practices, others that enhance coordination may also exist. To understand how FEMA supports collaboration among recovery stakeholders and the extent to which it facilitates the sharing of lessons and experiences from past recovery efforts, we interviewed officials from FEMA’s Long-Term Community Recovery Branch and staff responsible for managing the agency’s Lessons Learned Information Sharing (LLIS) Web site. We also obtained access to LLIS and FEMA’s online mitigation best practices portfolio, after which we reviewed the content and operations of those systems. The Loma Prieta earthquake, which occurred in the Santa Cruz mountains in 1989, severely impacted four cities in northern California. San Francisco experienced damage to several areas, including its Embarcadero freeway, Marina district, and City Hall. In Oakland, the earthquake caused the collapse of the Cypress Expressway as well as damage to other infrastructure and low-income housing. The cities of Santa Cruz and Watsonville, both located near the earthquake epicenter, suffered devastating destruction to their downtown districts (see fig. 2). The federal government provided significant funding to the affected areas to facilitate their recovery from the 1989 Loma Prieta earthquake. Some examples of federal assistance for recovery are shown in figure 3. The areas most impacted by the earthquake were Oakland, San Francisco, and other cities in Santa Cruz County, including Santa Cruz and Watsonville. Key aspects of disaster recovery include planning, housing, economic development, and infrastructure. The following presents an overview of selected recovery efforts after the Loma Prieta earthquake in each of these areas. However, it does not provide a comprehensive account of recovery actions taken.
Two months following the earthquake, the Santa Cruz City Council appointed a citizen group to develop an overall plan to rebuild the devastated downtown area. Santa Cruz faced the challenge of reaching consensus on decisions regarding recovery. As a result of growing tension between citizens and local officials, the city took steps to facilitate decision making during the recovery from the Loma Prieta earthquake. To do so, the city devised a formal structure that incorporated time frames to help different community groups reach consensus on a unified recovery plan. It created Vision Santa Cruz, a 36-member citizen advisory body that included wide representation from neighborhood and community groups, business, finance, labor, and nonprofit organizations. To facilitate decision making among these groups, according to a former Santa Cruz official, time limits were instituted so that if Vision Santa Cruz did not agree on a plan by a certain date, city officials would finalize the plan without the group’s consensus. Further, once consensus was reached on an issue, it could not be opened for discussion again. Although faced with the challenge of uniting political groups with differing views in the community, Vision Santa Cruz succeeded in bringing the community together by forging a compromise among different stakeholders for recovery. Vision Santa Cruz completed the Downtown Recovery Plan in September 1991, which provided the policies, standards, and guidelines to direct the downtown rebuilding. The Downtown Recovery Plan provided guidance for building form, character, and height; housing; accessibility; open space and streetscape; circulation; and parking. According to a former Santa Cruz official, the Downtown Recovery Plan took into account the needs of the retail community by redesigning the business center.
For example, the plan proposed new design guidelines that made buildings more suitable for retail purposes, such as requiring large ground floor windows to ensure that stores received more lighting. Specifically, the main street was designed to accommodate both pedestrians and low-speed traffic (as opposed to being pedestrian-only), preserve on-street parking, and widen sidewalks. This plan is still in use today to guide development projects in downtown Santa Cruz. Watsonville relied on planning assistance offered by the Urban Land Institute to create a redevelopment plan. The city followed the plan it developed to rebuild and revitalize its downtown with a specific focus on implementation and an ancillary focus on development potential, planning, urban design, development goals, and marketing strategies. Watsonville took the opportunity to make improvements to address the changing demographics of many blocks that became empty as a result of the earthquake. However, not all aspects of the plan were successfully implemented. For example, one of the plan’s goals was to support and open a department store. However, because the store’s upscale retail marketing did not fit with the changing demographics, sales dropped, and the store closed within one year. The Urban Land Institute also offered planning assistance to the city of Santa Cruz; however, the final plan focused heavily on housing, which was not the direction in which the city was interested. Therefore, the city of Santa Cruz did not implement the plan. According to a former Santa Cruz official, a key finding of the Urban Land Institute was the need to establish a decision-making process to overcome the differing political and business interests in the community. Toward that end, the city established Vision Santa Cruz, which helped to facilitate the planning process for the city’s downtown recovery.
Approximately 850 housing units in Watsonville (almost 10 percent of the city’s housing stock) were severely damaged or destroyed after the Loma Prieta earthquake. According to a report funded by the Federal Emergency Management Agency, Watsonville planners drafted a rebuilding ordinance within the first four days after the earthquake that suspended the limits on rebuilding nonconforming construction. The ordinance also streamlined the permitting process. Santa Cruz County, which includes the city of Watsonville, passed a temporary one-half-cent sales tax increase for 6 years, called Measure E. The proceeds were targeted to damaged areas within the county based on an allocation approved by voters. Watsonville received approximately $15 million through Measure E, which helped to repair the damaged housing. Further, Watsonville also used portions of existing Department of Housing and Urban Development Community Development Block Grant funds that it received prior to the earthquake (and not part of a supplemental or special disaster appropriation) to repair and replace damaged housing units. Within 1 year of the earthquake, almost 50 percent of the damaged housing units in Watsonville were repaired or replaced. The Loma Prieta earthquake resulted in the loss of many single-room occupancy units in the cities of Oakland and Santa Cruz. Oakland experienced destruction or severe damage of 1,300 single-room occupancy units, which provided housing to many minority and elderly residents. Oakland financed the replacement of single-room occupancy units through California’s Disaster Assistance Program. In Santa Cruz, single-room occupancy units were built in several new buildings. According to a subsequent evaluation of the earthquake, these buildings represented an overall improvement in the housing stock. Hurricane Andrew made landfall over southern Miami-Dade County in Florida as a category 5 hurricane, severely impacting several cities in southern Florida, including Homestead and Florida City (see fig.
5). As a result of the hurricane, the city of Homestead suffered a 31 percent decline in its population, a 60 percent loss of aggregate residential property value, and a 29 percent loss of its average commercial real estate value. Additionally, the Department of Defense’s decision to scale down the presence of the Homestead Air Force Base contributed to the loss of thousands of jobs. In Florida City, located near Homestead, Hurricane Andrew damaged every building, reducing residential property value by 78 percent and average commercial real estate value by 32 percent. The federal government provided significant funding to the affected area to facilitate its recovery from Hurricane Andrew in 1992. Some examples of federal assistance for recovery are shown in figure 6. Key aspects of disaster recovery include planning, housing, economic development, and infrastructure. The following presents an overview of selected recovery efforts after Hurricane Andrew in each of these areas. However, it does not provide a comprehensive account of recovery actions taken. To help plan for the recovery from Hurricane Andrew, community leaders created a nonprofit organization called We Will Rebuild. The organization was led by the publisher of the Miami Herald as well as other political, business, and civic leaders in Miami-Dade County. A key role that We Will Rebuild played was to coordinate the distribution of nearly $28 million of private and public funds. We Will Rebuild worked to devise recovery strategies through 29 committees that focused on different issue areas, including agriculture, business and economic development, housing, social services, as well as families and children. Committee members developed plans to achieve goals within those areas and in some instances implemented those strategies directly.
For example, to achieve the goal of preventing the complete closure of the Homestead Air Force Base, one committee successfully advocated for the base to be changed into a combined civil and military facility. We Will Rebuild also funded planning meetings, coordinated through local universities, which brought together as many as 300 professionals over a 3-week period to create solutions for rebuilding Miami-Dade County. Teams composed of architects, engineers, and planners, as well as others from the public and private sectors, presented proposals for how to rebuild local communities and neighborhoods. Eventually these meetings produced 16 projects focused on many issues, such as site-specific neighborhood revitalization plans addressing urban planning, transportation, historic preservation, and natural resources for 28 communities in the county. Many of these plans served as the basis for the redevelopment of neighborhoods and future regional developments related to water management, transportation development, and the preservation of buildings and open space. Hurricane Andrew caused devastating housing damage in Miami-Dade County, resulting in the destruction of over 25,000 homes and major damage to over 37,000 homes. According to a city official, Florida City worked with the Department of Housing and Urban Development to create a program that provided second mortgages for homeowners to repair damaged housing. Today, the population of Florida City is approximately 10,000, a significant increase from the 3,000 still remaining in the wake of the hurricane. Over 8,000 Homestead residents, or 31 percent of its pre-hurricane population, left the city, leaving many abandoned properties that created challenges for the city to redevelop some areas. Some residential communities that suffered significant damage from the hurricane were eventually rebuilt.
For example, the Naranja Lakes development, a private condominium community of thousands of residents, was razed and is being rebuilt with a mix of condominium and single family homes. To rebuild the economy in the wake of Hurricane Andrew, a number of economic development organizations in Miami-Dade County worked to revitalize affected communities. One of these groups, the Economic Development Council, was founded by local business leaders in response to Hurricane Andrew to represent the economic development interests of the unincorporated portions of Miami-Dade County. The group led efforts to beautify a major roadway and commercial center through the area. The Economic Development Council hoped that such improvement projects would attract residents who had moved away after the hurricane to return to the county once more. The mission of the Vision Council, another organization that promoted economic redevelopment after Hurricane Andrew, was to attract new businesses to Miami-Dade County. The council serves Homestead, Florida City, and other parts of southern Miami-Dade County. The Vision Council experienced mixed success in its economic development efforts. For example, the Vision Council assisted Homestead’s efforts to market and build a 270-acre commerce park, called the Park of Commerce. The Vision Council also supported the creation of the Homestead-Miami Speedway, which sponsors professional car racing events year-round. However, not all of its projects have been successful. For example, we observed that the Park of Commerce, which the Vision Council supported, was mostly vacant today. A Vision Council official explained that factors such as the perceived risk of another storm in the area, high insurance rates, and population decline have deterred some businesses from relocating to Miami-Dade County. Hurricane Andrew destroyed much of the public and transportation infrastructure in southern Florida.
For example, Florida City lost every public building to Hurricane Andrew, according to a city official. The city rebuilt the government center complex, which included the city hall, jail, and police station. Because the city’s public funds were insufficient to rebuild the new government center complex, it used funding from the Economic Development Administration (EDA) in the Department of Commerce and the state to complete the center. Homestead and Florida City sustained damage to their water systems. In Homestead, EDA provided $7.7 million partly for the construction of water and sewer lines, which extended these services to the Homestead-Miami Speedway and Park of Commerce facilities. Without those infrastructure enhancements, these projects could not have been developed. Florida City also sustained extensive damage to its water delivery system. EDA provided almost $5 million for the repair, replacement, and expansion of the city’s water system (see fig. 7). As a result of the water system’s expansion, the State Farmers Market was also restored, creating almost 400 jobs. The federal government provided significant funding to the affected area to facilitate its recovery from the 1994 Northridge earthquake. Some examples of federal assistance for recovery are shown in figure 9. Key aspects of disaster recovery include planning, housing, economic development, and infrastructure. The following presents an overview of selected recovery efforts after the Northridge earthquake in each of these areas. However, it does not provide a comprehensive account of recovery actions taken. In 1987, prior to the Northridge earthquake, the city of Los Angeles developed a Recovery and Reconstruction Plan in preparation for a future disaster. In the aftermath of the Northridge earthquake, the city adapted this plan to guide its recovery efforts.
According to an evaluation funded by the National Science Foundation, the plan contributed to fostering good working relations between city officials and other stakeholders. The process of developing the plan itself strengthened relationships between city departments and agencies, which in turn helped to facilitate collaboration during the recovery process. City departments also implemented several strategies outlined in the plan, such as developing loan programs for businesses unable to receive Small Business Administration loans, establishing an interdepartmental group to adapt the recovery plan for Northridge, streamlining permit processing, establishing mutual aid agreements, and forming reconstruction task forces. After the earthquake, the city of Los Angeles designated 17 areas that suffered extensive damage as “ghost towns,” to receive priority attention. The vacating of 7,500 housing units in those areas attracted criminal trespassers such as drug dealers, prostitution rings, and squatters. In turn, those activities increased burglaries in surrounding neighborhoods and resulted in local businesses losing their customer base. The city’s housing department collaborated with the police, public works, and building departments to create a special work unit to focus on security and offer refinancing to help property owners in the ghost towns rebuild. A subsequent evaluation of the housing reconstruction after Northridge found that this program contributed to the successful rebuilding of those areas and helped to stabilize surrounding neighborhoods. According to Los Angeles officials, the city prioritized the restoration of its highway infrastructure to restore the region’s transportation networks. To maintain partial traffic flows immediately after the earthquake, the city established alternative detour routes for the highways.
The earthquake resulted in damage at 480 locations on federal, state, and local roads throughout the Los Angeles area and forced the closure of four major highway corridors that, together, carried over 780,000 vehicles per day before the earthquake. This caused significant disruption to commuting patterns as well as the transportation of freight. To expedite the completion of highway rebuilding projects, the California Department of Transportation (CalTrans) included financial incentives in each major restoration or repair contract. Under this approach, bonuses were available to each contractor who completed projects early. CalTrans calculated bonuses based on an analysis of the economic cost incurred by the region as a result of the disruption to traffic and associated delays. As a result of this approach, bonuses were awarded to 9 out of the 10 eligible contractors. According to a CalTrans official, these incentives allowed the city to restore these freeways within a few months after the earthquake (see fig. 10). The Federal Highway Administration also granted other measures of flexibility within its regulations to facilitate infrastructure recovery. For example, the agency granted exemptions from certain regulations, such as allowing the California Department of Transportation to proceed without conducting environmental impact statements as required under the National Environmental Policy Act. On January 17, 1995, a magnitude 7.3 earthquake caused significant damage to the Japanese city of Kobe in Hyogo prefecture. As a result of the earthquake, the affected areas sustained heavy damage and many casualties. For example, over 6,000 people were killed and 40,000 injured. In addition to destroying over 400,000 homes and buildings, the earthquake caused extensive damage to roads, railroads, highways, and subway stations (see fig. 11).
The port of Kobe, Japan’s leading container shipping port at the time, also experienced heavy damage to almost all container berths. The Japanese government provided significant funding to facilitate recovery from the 1995 Kobe earthquake. Some examples of national government assistance for recovery are shown in figure 12. Recovery after the Kobe earthquake was generally a top-down process of post-disaster planning and financing. The government prioritized the rapid rebuilding of infrastructure and economic stabilization and later focused on housing and social recovery. The physical reconstruction process took less than 3 years to complete. Specifically, the city of Kobe designated 24 areas to prioritize for rebuilding, using national government funds to widen roads, add parks and open spaces, and construct other public facilities. For the first 3 to 4 years after the earthquake, the focus was mainly on physical reconstruction. In subsequent years, the government shifted its focus to community development, economic development, and the restoration of communities. Key aspects of disaster recovery include planning, housing, economic development, and infrastructure. The following presents an overview of selected recovery efforts after the Kobe earthquake in each of these areas. However, it does not provide a comprehensive account of recovery actions taken. Immediately following the earthquake, Japan’s national government implemented a 2-month moratorium during which it did not approve any building permits so that local governments could finalize planning before the rebuilding process began. Hyogo prefecture and the city of Kobe adopted complementary recovery plans within 2 months of the earthquake—in March 1995—which prioritized projects that replaced infrastructure as well as others that would help stabilize the economy and attract new businesses.
After the earthquake, there was a relatively short amount of time to submit proposals for the national budget in order to be considered for the coming year. To ensure that they could take advantage of national government funding as soon as possible, the city of Kobe and Hyogo prefecture completed their recovery plans promptly. Facing this deadline, local officials devised a two-phase strategy to develop a plan that could quickly identify broad recovery goals to provide a basis for budget requests to meet the national budget deadline. After that initial planning phase, the governments then collaborated with residents to develop detailed plans for specific communities. The national government funded a 3-year emergency housing plan to build about 72,000 permanent new housing units throughout Kobe and Hyogo to replace an estimated 82,000 houses lost due to the earthquake. They planned to provide around 8,200 through the private rental market and more than half through public housing agencies. This overall target level was achieved by March 1998, when more than 120,000 new housing units were constructed. The accumulated number of new housing units built by 2005 was estimated to be over 222,000. Almost all the replacement housing was provided in multi-rise condominium structures, with approximately 56 percent of those available as public housing units. Both the public and private sector used a variety of strategies, aimed at many levels of the population, to ensure that replacement housing would be built. The result was that more housing was built than had been lost. According to an expert on Japan’s recovery, the city of Kobe has recently experienced challenges in attracting new occupants—especially younger families—to move into these units. The Kobe earthquake had lasting impacts on several industries, including the port and small businesses. The port of Kobe, Japan’s leading container shipping port, sustained heavy damage to almost all container berths.
Port repairs took almost 1 year to complete, during which time the port disruption was estimated to be an amount equivalent to the income of 40,000 workers. The city of Kobe completed its port restoration by March 1997 (see fig. 13). However, port activity stalled at around 80 percent of pre-earthquake levels. In October 1998, exports from the port of Kobe to Asian countries declined by 24.3 percent from the previous year. The negative impact of damage to the local economy and regional exports, in addition to the relocation of many container cargos to other ports during the port of Kobe’s closure, contributed to its decline. Further, changing trends in the international trade industry introduced increased competition from other Asian ports. As a result of these factors, the port of Kobe has not fully recovered. For example, while the port ranked 6th in the world for volume in 1994, it dropped to the 24th position in 1995 and the 33rd in 2006. Local industries, such as chemical and steel manufacturers as well as small businesses, were also affected by the earthquake. Chemical and steel manufacturers did not operate for several months after the event. Large companies, such as Kobe Steel and Mitsubishi Industries, were able to map out effective recovery strategies and were less affected by the earthquake. On the other hand, smaller industries, also severely damaged by the earthquake, were unable to recover as easily as the larger manufacturers. For example, smaller businesses such as shoe manufacturers, sake breweries, and roof-tile makers never fully recovered from the earthquake. The city of Kobe took several actions to stimulate the local economy with mixed success. For example, Kobe established support systems for affected businesses such as special no-interest loans and subsidies for the construction of temporary stores and factories.
The city also created the “Luminaire,” a festival of illuminated lights, which began in the winter following the earthquake to boost the morale of local residents and to attract tourists. In 2003, the event drew 28.1 million visitors, which increased the number of tourists by 115 percent from pre-earthquake levels. However, not all of its efforts were successful. For example, the city did not have enough funds to help all the small businesses that needed financial assistance. Some small businesses could not recover from the earthquake and closed. Recognizing the need to diversify Kobe’s economy from its traditional port and manufacturing businesses, the city took steps to attract and develop several new industries. Kobe recognized that it could benefit from new infrastructure projects to change the industrial base of the city. Soon after the disaster occurred, the city conducted a study to assess economic conditions in Kobe. This study showed that although some of the economic challenges the city faced were a result of the earthquake, a more fundamental problem was Kobe’s continued reliance on “old economy” industries, such as shipbuilding, steel, and shoe manufacturing. With this information, the city, in coordination with Hyogo prefecture, targeted new industries—such as medical, pharmaceutical, robotics, and information technology companies—to establish businesses in the region. To attract companies from these targeted industries, the city of Kobe and Hyogo prefecture offered loans, subsidies, tax incentives, and inexpensive office space. Further, these jurisdictions proposed reductions in existing government regulations for the medical and information technology sectors. These plans allowed foreign researchers to work in Kobe without overly rigorous visa regulations.
Additionally, the city sought to remove regulations that prevented foreign firms from participating in the medical industry and thereby encourage the entry of foreign researchers and business persons. Overall, Kobe and Hyogo achieved success in diversifying the region’s economy. About 10 years after the earthquake, over 285 new companies had moved to the city, 40 of which were foreign firms. Additionally, six public facilities—including centers for business, developmental biology, and health care—had relocated to the city as well. The 1997 Grand Forks/Red River flood devastated Grand Forks, North Dakota, and East Grand Forks, Minnesota. Following a season of record snowfall, the Red River flooded up to 2,200 square miles in these states, an area about twice the size of Rhode Island. The flood damaged 83 percent of affected homes and impacted all downtown businesses. In East Grand Forks, 99 percent of homes and businesses were damaged (see fig. 15). The federal government provided significant funding to the affected area to facilitate its recovery from the 1997 Grand Forks/Red River flood. Some examples of federal assistance for recovery are shown in figure 16. Key aspects of disaster recovery include planning, housing, economic development, and infrastructure. The following presents an overview of selected recovery efforts after the Grand Forks/Red River flood in each of these areas. However, it does not provide a comprehensive account of recovery actions taken. In the wake of the flood, the mayor of Grand Forks asked directors from the city’s urban development, public works, and finance departments to collaborate in order to contribute their respective expertise to help the city create a recovery plan. The mayor delegated much of her authority to these civil servants, known as the Tri-Chairs, allowing them to set priorities for recovery, submit action steps for approval, and collectively manage the city’s recovery resources.
The Tri-Chairs, the City Council, representatives from the federal Department of Housing and Urban Development (HUD), and other city staff collaborated to create a detailed flood recovery action plan for the city that identified (1) broad recovery goals, (2) roles and responsibilities associated with specific projects, and (3) potential sources for funding those activities. Specifically, the plan identified five broad recovery goals covering areas such as housing and community redevelopment, business redevelopment, and infrastructure rehabilitation. The plan detailed a number of supporting objectives and tasks to be implemented in order to achieve the stated goals. Additionally, the plan identified a target completion date for each task. Consultants who provided technical assistance on the planning process were hired using HUD’s Community Development Block Grant (CDBG) funds. A key role that these federally funded consultants played was to maintain good communications and coordination between the city and HUD. The consultants facilitated communications by scheduling and publicizing meetings, providing workspace, and convening weekly conference calls. According to a subsequent evaluation of the consultants’ efforts, these activities helped to build a team mentality among stakeholders by encouraging the sharing of information and common problem-solving. An important result of this communication was the completion of Grand Forks’ recovery plan. A city evaluation of the recovery plan found that the process of specifying goals and identifying funding sources allowed the city to conceive and formulate projects in collaboration with the city council and representatives from state and local governments. This helped Grand Forks meet its recovery needs as well as adhere to federal and state disaster assistance funding laws and regulations. The cities of Grand Forks and East Grand Forks took measures to buy out housing located in the 100-year floodplain of the Red River.
Grand Forks officials developed a buy-out program that purchased nearly 800 homes, which was about 10 percent of the city’s housing stock at the time. To determine the value of properties in the buy-out program, the city created teams to assess each home and based the value on the pre-flood price of the home, minus insurance payments received. According to a Grand Forks official, the city also changed existing land-use ordinances to prevent future building in the 100-year flood zone. In East Grand Forks, where nearly 99 percent of the homes were damaged by the flood, officials also established a buy-out program for approximately 400 homes located in the 100-year floodplain. The city used local realtors to determine property values, and the city provided a 7 to 10 percent premium above the house value to account for rebuilding costs. East Grand Forks used funding from the U.S. Army Corps of Engineers and the Federal Emergency Management Agency’s (FEMA) Hazard Mitigation Grant Program to support its buy-out program. To focus on issues of economic recovery, the mayor of Grand Forks formed a task force on business redevelopment composed of 15 prominent business leaders to address issues such as getting access to funding for business recovery and increasing opportunities for business development and growth. The business redevelopment task force comprised seven committees that met regularly to discuss these issues. Grand Forks created several business redevelopment programs using federal funding. For example, using almost $2 million from HUD’s CDBG program and over $5 million from the Economic Development Administration, the city constructed Noah’s Ark, a large industrial building developed to provide temporary office space to any displaced small business in the Grand Forks region. According to a Grand Forks official, the Noah’s Ark building was converted into an Amazon.com call center in 1999.
The city also developed several projects that incorporated mitigation techniques so those structures would be better prepared for a future flood. For example, the city changed the design of a convention center by raising its main event arena space above ground level to mitigate against future flooding. The University of North Dakota also incorporated disaster-resistant features into its construction of a new $100 million hockey arena to protect it from blizzards, floods, and wind. According to East Grand Forks officials, the city’s business community relied upon the University of Minnesota’s School of Architecture to develop a strategy for economic recovery. As part of its economic redevelopment after the flood, East Grand Forks entered into an agreement with a major outdoor retailer to build a $7 million store if it were to employ local residents in the store. Since its opening, the retailer has thrived in East Grand Forks and is one of the fastest-growing stores in this nationwide chain. Stanley J. Czerwinski, Director, Strategic Issues Team, (202) 512-6808 or [email protected]. In addition to the contact named above, Peter Del Toro, Assistant Director, and Shirley Hwang were the major contributors to this report. Additionally, Patrick Breiding, Keya Chateauneuf, Christopher Harm, Donna Miller, and Diana Zinkl also made key contributions.

In the wake of the 2005 Gulf Coast Hurricanes, coordination and collaboration challenges created obstacles during the government's response and recovery efforts. Because of the many stakeholders involved in recovery, including all levels of government, it is critical to build collaborative relationships. Building on GAO's September 2008 report, which provided several key recovery practices from past catastrophic disasters, this report presents examples of how federal, state, and local governments have effectively collaborated in the past.
GAO reviewed five catastrophic disasters--the Loma Prieta earthquake (California, 1989), Hurricane Andrew (Florida, 1992), the Northridge earthquake (California, 1994), the Kobe earthquake (Japan, 1995), and the Grand Forks/Red River flood (North Dakota and Minnesota, 1997)--to identify recovery lessons. GAO interviewed officials involved in the recovery from these disasters and experts on disaster recovery. GAO also reviewed relevant legislation, policies, and the disaster recovery literature. Effective collaboration among stakeholders can play a key role in facilitating long-term recovery after a catastrophic event. Toward that end, GAO has identified four collaborative practices that may help communities rebuild from the Gulf Coast hurricanes as well as future catastrophic events: (1) Develop and communicate common goals to guide recovery. Defining common recovery goals can enhance collaboration by helping stakeholders overcome differences in missions and cultures. After the Grand Forks/Red River flood, federally-funded consultants convened various stakeholders to develop recovery goals and priorities for the city of Grand Forks. The city used these goals as a basis to create a detailed recovery action plan that helped it to implement its recovery goals. (2) Leverage resources to facilitate recovery. Collaborating groups bring different resources and capacities to the task at hand. After the Northridge earthquake, officials from the Federal Highway Administration and California's state transportation agency worked together to review highway rebuilding contracts, discuss changes, and then approve projects all in one location. This co-located, collaborative approach enabled the awarding of rebuilding contracts in 3 to 5 days--instead of the 26 to 40 weeks it could take using normal contracting procedures. This helped to restore damaged highways within a few months of the earthquake. (3) Use recovery plans to agree on roles and responsibilities. 
Organizations can collectively agree on who will do what by identifying roles and responsibilities in recovery plans developed either before or after a disaster takes place. Learning from their experiences with the Loma Prieta earthquake, San Francisco Bay Area officials created a plan that clearly identifies roles for all participants in order to facilitate regional recovery in the event of a future disaster. (4) Monitor, evaluate, and report on progress made toward recovery. After the 1995 earthquake, the city of Kobe and the surrounding region established processes to assess and report on recovery progress. These jurisdictions required periodic external reviews over 10 years on the progress made toward achieving recovery goals. As a result of one of these reviews, the city of Kobe gained insight into unintended consequences of how it relocated elderly earthquake victims, which subsequently led to a change in policy. Past recovery experiences--including practices that promote effective collaboration--offer potentially valuable lessons for future catastrophic events. FEMA has taken some steps to facilitate the sharing of such experiences among communities involved in disaster recovery. However, the agency can do more to build on and systematize the sharing of this information so that recovery lessons are better captured and disseminated for use in the future.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) made sweeping changes to national welfare policy, creating Temporary Assistance for Needy Families (TANF) and ending the federal entitlement to assistance for eligible needy families with children under Aid to Families with Dependent Children (AFDC). The Department of Health and Human Services (HHS) administers the TANF block grant program, which provides states with up to $16.5 billion each year through fiscal year 2002. TANF was designed to help needy families reduce their dependence on welfare and move toward economic independence. The law also greatly increased the discretion states have in the design and operation of their welfare programs, allowing states to determine forms of aid and the categories of families eligible for aid. TANF establishes time limits and work requirements for adults receiving aid and requires states to sustain 75 to 80 percent of their historic level of welfare spending through a maintenance-of-effort (MOE) requirement. In addition, TANF gives states funding flexibility, which allows states to exclude some families from federal time limits and work requirements. TANF establishes a 60 month time limit for families receiving aid. States have the option of establishing shorter time limits for families in their state. A state that does not comply with the TANF time limit can be penalized by a 5 percent reduction in its block grant. While the intent of TANF is to provide temporary, time-limited aid, federal time limits do not apply to all forms of aid or to all families receiving aid. First, states are only to count toward the 60 month time limit any month in which an individual receives a service or benefit considered “assistance,” which is defined in the TANF regulations as cash or other forms of benefits designed to meet a family’s ongoing basic needs. Second, time limits do not apply to the following types of cases: 1. Child-only cases in which the adult in the household does not receive cash assistance. 2.
Families who received assistance while living in Indian country or a Native Alaskan village where 50 percent of the adults are not employed. Third, all states have the option to use federal funds to extend assistance beyond the federal 60 month limit for reasons of hardship, as defined by the state. States can extend assistance for up to 20 percent of the average monthly number of families receiving assistance (“20 percent extension”). Finally, assistance that is provided solely through state MOE is not subject to the federal time limit. TANF also establishes work requirements for adults receiving aid. After 2 years of assistance, or sooner if the state determines the recipient is ready, TANF adults are generally required to be engaged in work as defined by the state. In addition, TANF establishes required work participation rates—a steadily rising specified minimum percentage of adult recipients who must participate in federally specified work or work-related activities each year for at least a minimum number of hours. States were required in federal fiscal year 2002 to meet a work participation rate of 50 percent for all TANF families with adult members—referred to as the rate for all families. States were also required to meet a much higher rate—90 percent—for two-parent families. States must meet these work participation rates to avoid financial penalties. While states have generally met the work participation rate for all families, many states have faced financial penalties due to failure to meet the two-parent required rate in recent years. HHS issued penalty notices to 19 states in fiscal year 1997, 14 in fiscal year 1998, 9 in fiscal year 1999, and 7 in fiscal year 2000. In addition to establishing federal participation rate requirements, PRWORA specified that the required rates are to be reduced if a state’s TANF caseload declines.
States are allowed caseload reduction credits, which reduce each state’s work participation requirement by 1 percentage point for each percentage point by which its average monthly caseload falls below its fiscal year 1995 level (for reasons other than eligibility changes). While states are to meet federal participation requirements, they also have the flexibility to encourage and require TANF recipients to participate in any activity a state chooses or at any level of activity, although that activity or the hours of activity may not count toward the federal participation rates. In addition, federal time limits and work requirements may not apply in some states that were granted federal waivers to AFDC program rules in order to conduct demonstration programs to test state reforms. The Personal Responsibility, Work, and Family Promotion Act of 2002 passed by the House of Representatives (H.R. 4737) on May 16, 2002, reauthorizes the TANF block grant, keeping in place key elements of TANF, such as time limits and work requirements. It also changes some aspects of TANF, including the participation rate requirements. It increases the federally mandated rate by 5 percentage points a year to 70 percent by 2007, revises the number of hours of participation and types of activities required, and makes some alterations to the caseload reduction credit, among other changes. In addition, the act specifies that two-parent families would no longer be subject to a separate and higher work participation rate. The Senate is in the process of reauthorizing TANF as of June 2002. Previously, under AFDC, state funds accounted for 46 percent of total federal and state expenditures. Under PRWORA, the law requires states to sustain 75 to 80 percent of their historic level of spending on welfare through a MOE requirement to receive their federal TANF block grant.
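The caseload reduction credit arithmetic described above can be sketched as follows. This is purely an illustrative aid, not part of the report; the function name and caseload figures are made up for the example.

```python
def adjusted_participation_rate(required_rate: float,
                                fy1995_caseload: int,
                                current_caseload: int) -> float:
    """Reduce the required work participation rate by 1 percentage point
    for each percentage point of caseload decline since fiscal year 1995.
    (Declines due to eligibility changes are excluded from the credit;
    that exclusion is not modeled here.)"""
    decline_points = max(0.0, (fy1995_caseload - current_caseload)
                         / fy1995_caseload * 100)
    # The adjusted requirement cannot fall below zero.
    return max(0.0, required_rate - decline_points)

# A hypothetical state facing the 50 percent all-families rate whose
# caseload fell 45 percent since fiscal year 1995:
print(adjusted_participation_rate(50.0, 100_000, 55_000))  # 5.0
```

This shows how the steep caseload declines of the late 1990s could drive a state's effective requirement to zero, as the report notes happened in 31 states for fiscal year 2000.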
The federal TANF funds and state MOE funds can be considered more like funding streams than a single program, and states may use their MOE to assist needy families in state programs other than their TANF programs. In fact, states have flexibility to expend their MOE funds for cash assistance in up to three different ways, some of which allow states to exclude some families from time limits and work requirements. A state may use its state MOE funds in three different ways to provide cash assistance for needy families:

Commingling: A state can provide TANF cash assistance by commingling its state MOE with federal funds within its TANF program.

Segregating: A state can provide some TANF cash assistance with state MOE accounted for separately from its federal funds within its TANF program.

Separating: A state can use its state MOE to provide cash assistance to needy families in any one or more non-TANF state programs, referred to as “separate state programs.”

Each state may choose one or more of these options to provide cash assistance. In some cases, in this report, we refer to the second and third options as using “state-only” funds when the distinction between segregating and separating funds is not necessary. In addition, we focus only on cash assistance and not on other forms of aid or services, including, for example, child care and transportation, for which time limits and work requirements generally do not apply. (For more information on state funding choices, see app. I.) How a state structures its funds determines which TANF rules apply to the needy families being served. (See table 1.) When a state commingles funds, it must meet all TANF requirements. For example, states that commingle all their state MOE with federal funds are only able to exclude families from time limits through the 20 percent extension, cannot exclude families from counting toward the federal work participation rate, and cannot provide assistance to certain groups of legal immigrants.
In addition, while not required by federal law, states may choose to apply work requirements or time limits to their state-funded assistance. States reported that in the fall of 2001, 2.1 million families received cash assistance funded with federal TANF or state MOE dollars. This includes about 110,000 families, or 5 percent, who were provided cash assistance through separate state programs funded by state MOE dollars. These families are not counted in the TANF caseload data reported by HHS. Twenty-six states used separate state programs to provide cash assistance, typically to legal immigrants and two-parent families. In most of these states, the separate state program caseload represented 5 percent or less of the total caseload. However, in four of these states, families served through separate state programs represented from 10 to 30 percent of the total cash assistance caseload. (For more information on the separate state program caseload by state and the populations served in the states’ programs, see app. II.) It is noteworthy that the separate state program caseload represents a more significant share in two of the nation’s most populous states—California and New York. More specifically, the number of families receiving cash assistance through separate state programs in California alone—nearly 50,000—is greater than the total cash assistance caseload in most states. HHS began requiring states in fiscal year 2000 to provide information on families provided assistance through separate state programs and reported on the separate state program caseload in its recently issued Fourth Annual Report to Congress. However, this caseload is not included in the TANF caseload data. Child-only cases, while not generally in separate state programs, account for an even more significant proportion of the cash assistance caseload. Of the 2.1 million families receiving aid, 736,045, or one-third, were composed of children only.
Generally, child-only cases are not subject to work requirements or time limits. The percentage of child-only cash assistance cases varied greatly among the states, ranging from 13 percent in Hawaii to 73 percent in Wyoming. In addition, as shown in figure 1, the types of child-only cases vary and can include families in which the caregiver is a nonparent, such as a grandparent or other relative; the parent is receiving Social Security or Supplemental Security Income (SSI) and is not eligible for TANF; the parent is a noncitizen ineligible for federally funded TANF; or the parent has not complied with TANF program requirements and so has been denied benefits, called a sanction. (For more information on each state’s total cash assistance and child-only caseloads, see app. III.) Reduced federal participation rate requirements and states’ use of their MOE funds give states considerable flexibility in implementing work requirements. Almost all the states had more adults participating in work and work-related activities than they were required to, but the percentage of adults participating varied greatly among the states. Almost all of the families who received cash assistance through separate state programs were subject to state work requirements, even though federal work requirements did not apply. States faced greatly reduced federal participation rate requirements for fiscal year 2000, as caseload reduction credits were triggered by recent caseload declines. Welfare caseloads have declined dramatically, from 4.4 million in August 1996 to 2.1 million as of September 2001, marking a 52 percent decline in the number of families receiving cash welfare. As a result, the fiscal year 2000 participation rate requirement was adjusted downward from 40 percent to 0 in 31 states. (See table 2.) Even though most states faced relatively low or no participation rate requirements, about 30 percent of TANF adults were counted as meeting federal participation requirements nationwide.
However, the federal participation rates varied greatly among the states, as shown in figure 2. Officials in one state told us that because the participation rate requirements are so low, states have more flexibility in choosing whether to enroll TANF recipients in work or in other types of activities or services, such as substance abuse treatment or mental health services, which do not count for purposes of the federal participation rate. State officials believe they can make such choices without fear of not meeting their federal work participation rates. In other cases, the lower participation rates give states more flexibility in exempting TANF recipients considered hard to employ from meeting work requirements. (For more information on TANF and persons with disabilities, see our report: U.S. General Accounting Office, Welfare Reform: More Coordinated Federal Effort Could Help States and Localities Move TANF Recipients with Impairments toward Employment, GAO-02-37 (Washington, D.C.: Oct. 31, 2001).) In addition to the federal rate, our survey collected a state-defined participation rate, which counts any activity as a form of participation if allowed by a state. For example, in some states, this measure would include participation in mental health treatment activities. In addition, in one state we talked with, an adult working only 1 hour a week would be considered as participating in state-defined activities. In contrast, a minimum of 30 hours of work would generally be required to count as meeting the federal participation requirement. Using this state-defined rate, nationwide, about 56 percent of TANF adults were involved in work or work-related activities, based on the 47 states that provided data for fall 2001. The percentage of the adult caseload involved in work or work-related activities (as defined by the state) ranged from 6 percent to 93 percent. As shown in figure 3, the percentage of adults participating was 30 percent or less in 8 states, 31-50 percent in 20 states, and more than 50 percent in 19 states, according to state survey responses. (See app.
IV for more specific data by state.) Providing cash assistance through separate state programs has offered states additional flexibility, as federal work requirements do not apply to families served through these programs. Of the 26 states with separate state programs, 16 states used these programs to provide cash assistance to two-parent families. Several state officials told us they provide aid in this way to avoid the risk of financial penalties for failing to meet the federal two-parent participation rate requirement. State officials told us that two-parent families often face at least as many challenges as single parents, making the higher participation rate for two-parent families difficult to meet. However, states that provided cash assistance through separate state programs typically imposed their own work requirements on families receiving aid. We found that approximately nine-tenths of the families receiving cash assistance in separate state programs are still subject to a state work requirement. While states generally imposed work requirements, about half of them also have policies in place to exclude families facing significant barriers to work from work requirements. For example, 13 states exclude families with an adult who is disabled and 13 states exclude families who care for someone with a disability. It is possible that states may rely more on separate state programs in the future to provide cash assistance free from federal work requirements as they take steps to meet state and local goals. H.R. 4737—the reauthorization bill passed by the House—eliminates the higher federal participation requirement for two-parent families that was often cited by states as a reason for using separate state programs. However, it also includes higher overall federal participation requirements for all families.
States would still have the option to use separate state programs to serve other families whom they deem likely to have difficulty meeting higher federal requirements. With higher participation requirements for all families, the number of families that states may consider unable to meet higher federal work requirements could increase. Through the 20 percent federal extension and the use of state funds, states generally excluded the following types of families from federal and state time limits: families they considered “hard to employ,” families that were working but not earning enough to move off TANF, and families that were cooperating with program requirements but had not yet found employment. During fall 2001, states excluded from federal or state time limits 11 percent of the 1.4 million cash assistance families with adults. The number of families excluded from time limits may increase in the future because most families have not yet reached their federal or state-imposed cash assistance time limit. States targeted time limit exclusions to families they considered hard to employ, families who were working but not earning enough to move off TANF, and families who were cooperating with program requirements. The majority of states excluded hard-to-employ families in which the parent had a disability or was caring for a child with a disability, families dealing with domestic violence, and families with a head of household of advanced age. (See fig. 4.) Some of these exclusions are granted on a temporary basis (such as for disabled recipients pending transfer to the Supplemental Security Income program), while others are granted for longer periods of time (such as for family heads of advanced age). Twenty-two states exclude working families or families participating in a work activity from time limits, either through the federal 20 percent extension or by using state-only funds.
Maryland and Illinois, for example, “stop the clock” for families who are working or participating in a work activity by funding them with state-only dollars. Officials from both states told us that their states adopted this policy to reward working families for complying with program requirements. States that exclude families by using state-only funds use criteria similar to those used by states that rely solely on the federal 20 percent hardship extension. Using the 20 percent extension, states are able to extend time limits for a broad range of families, such as families cooperating with program requirements or making a “good faith effort” to find employment. For example, officials from Michigan, a state that commingles all of its state funds with federal funds, told us that they will use the 20 percent extension for all recipients following the rules of the program; if the number of families to whom they want to provide an extension begins to exceed 20 percent, they plan to continue providing assistance through state funds. Almost half of the states exclude families making a good faith effort to find employment. States have excluded from time limits 11 percent (about 154,000) of the approximately 1.4 million families with adults receiving federal- or state-funded cash assistance. (See app. V for the percent of exclusions by state.) As shown in figure 5, 45 percent of these families—mostly in Illinois, Massachusetts, and New York—were excluded through states’ use of state-only funds. An additional 43 percent of the families were excluded from time limits under federal waivers granted to states before welfare reform to conduct demonstration programs. Many of these waivers remain in effect. While states sometimes use state funds to exclude families from federal time limits, states are still applying a state time limit to a significant portion of state-funded families.
Overall, 64 percent of families who receive cash assistance through separate state programs or segregated state funds are still subject to a state time limit. Twenty-six of the 33 states with state-funded families apply a state time limit to some or all of their state-funded families. (See app. VI for additional information on state choices regarding funding and time limits.) The percentage of the caseload that is excluded from time limits may increase because most families have not reached their time limit. In 22 states, TANF had not been in effect long enough for families to reach either the federal or the state time limit by the time we conducted our survey. Even in those states where it was possible to have received 60 months of cash assistance, many families had not reached their time limit because they have cycled on and off welfare, slowing their accrual of time on assistance. State officials generally thought the 20 percent federal time-limit extension was adequate now, but were less sure about the future, given that many families have not yet reached the 60 month time limit. State officials we spoke with told us that they planned to rely more heavily on state MOE funds to continue assistance to significant numbers of families reaching the 60 month time limit. For example, California told us it estimated that over 100,000 families with adults would reach the federal time limit in the next year. California plans to use state-only funds to continue aid beyond 60 months to children by removing the adult from the case. California also plans to continue aid to families who are making a good faith effort to find employment and to families who are hard to employ because the adult is aged, disabled, caring for a disabled family member, or experiencing domestic violence. In addition, New York plans to continue assistance to families who reach the 60 month time limit through its separate state program.
In December 2001, New York State had 44,027 families reach the 60 month federal time limit. Of these families, 28,781 (65 percent) were transitioned to the state’s separate state program funded with state MOE, 9,873 (22 percent) received the 20 percent extension, and the remaining 5,393 (12 percent) were transitioned off assistance. These families were among the first to reach time limits, with more families to follow. At the time of our survey, we found that only 15 states had begun to use the federal 20 percent hardship extension; overall, these states were applying it to less than 1 percent of their adult caseload. While it is difficult to estimate the extent to which states may use the 20 percent extension as more families reach the 60 month time limit, it is important to note that states’ child-only caseloads can result in significantly more than 20 percent of the adult TANF caseload receiving the extension. As discussed earlier, TANF allows each state to extend the 60 month time limit for up to 20 percent of the average monthly number of families receiving TANF assistance funded in whole or in part with federal TANF funds. In each state, the maximum number of families who may receive extensions is equal to 20 percent of the total number of TANF families, including child-only cases. This results in a higher number of adults who can receive the extension than if the calculation were based on 20 percent of TANF families with adults. We estimated that the maximum percentage of adults who may receive the federal extension ranges from 77 percent in Wyoming to 24 percent in Vermont and New Mexico, based on our analysis of survey data for fall 2001. (For more on this analysis, see app. VII.) Although states have had TANF programs in place for 5 years now, their experiences with key elements of TANF are still evolving.
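The extension arithmetic described above can be sketched in a short calculation. This is an illustrative sketch only (not GAO's analysis code); the figures used are the Wyoming values from our fall 2001 survey data reported later in this document (458 total TANF families, of which 119 included adults).

```python
def max_adult_extension_share(total_tanf_families, families_with_adults):
    """Maximum share of TANF families with adults that could receive
    the federal 20 percent time-limit extension.

    The 20 percent cap is computed against ALL TANF families, including
    child-only cases, but only families with adults accrue time toward
    the 60 month limit, so the effective share among adult cases is higher.
    """
    max_extensions = round(0.20 * total_tanf_families)
    share_of_adult_cases = round(100 * max_extensions / families_with_adults)
    return max_extensions, share_of_adult_cases

# Illustrative: Wyoming, fall 2001 survey (458 TANF families, 119 with adults)
print(max_adult_extension_share(458, 119))  # → (92, 77)
```

Running the same calculation for a state with a small child-only share, such as New Mexico (about 15 percent child-only), yields an effective share of roughly 24 percent, consistent with the range reported above.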
The dramatic caseload decline that greatly reduced the federally required participation rates gave states great flexibility in implementing work requirements. With this flexibility, the extent of involvement of TANF adults in federally or state-required activities varied greatly among the states. On the one hand, this means states have adapted their programs to meet state and local goals and needs. On the other hand, it means states with relatively low participation rates have more limited experience than other states in involving welfare recipients in work activities. This may affect their ability to meet federal participation rate requirements in the future. In addition, many states have used the flexibility allowed them under TANF to use state MOE to exclude families from federal time limits or to extend those limits. In this way, states could ensure a safety net for families that state TANF program officials had determined needed more time to become self-sufficient or were unable to support themselves. Because so many families have not yet reached their time limits, much remains unknown about the choices states will make in enforcing time limits and whether an appropriate balance will be struck between ensuring a safety net for families in need and creating a transitional aid system that promotes work and personal responsibility. Two issues that warrant attention in the future are wider implementation of the 20 percent federal time limit extension and states’ use of separate state programs to provide cash assistance. First, as we reported, the 20 percent time limit extension, when applied to adults, represents a larger and varying share of adults among the states than when applied to all families, including child-only cases.
As this extension policy is more widely used in the years ahead, it will be important to understand whether the 20 percent extension as currently calculated affords all states the access needed to support families experiencing hardship while also supporting the federal goal of reducing welfare dependence. Second, with the use of state MOE through separate state programs, a significant number of families—and potentially more in the years to come—receive cash assistance although they are not counted in the welfare caseload data routinely reported by HHS. With continuing attention focused on the number of families receiving cash assistance and on whether PRWORA has successfully reduced dependence on welfare, it is important that program administrators and policymakers have information on the size of the separate state program caseload. Because HHS has recently begun to collect and report information on states’ separate state programs, these data should become more regularly available to consider alongside TANF caseload data. In commenting on a draft of this report, HHS said that it agreed with the findings. HHS’s written comments are included in appendix VIII. We are sending copies of this report to the secretary of Health and Human Services and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215 or Gale Harris at (202) 512-7235. Other contacts and acknowledgments are listed in appendix IX. Most states use some form of state maintenance-of-effort (MOE) funding to provide cash assistance to families. Eighteen states relied solely on federal or commingled federal and state funds in their Temporary Assistance for Needy Families (TANF) programs to provide cash assistance, as shown in figure 6.
The other 33 states used at least one of the state MOE funding options in addition to commingled funds: 7 had segregated state funds; 17 had separate state programs; and 9 had both segregated funds and separate state programs. States across the nation have opted to use state MOE funds to provide cash assistance. (See table 3.) States with larger caseloads are more likely to use segregated funds or separate state programs than smaller states; similarly, states with the smallest caseloads are more likely to commingle all of their state and federal funds. Even though two-thirds of the states have opted to use segregated funds, separate state programs, or both to provide cash assistance, only 11 percent of the total number of families receiving cash assistance are funded with these funds. States most often used separate state programs to serve two populations—legal aliens and two-parent families—and applied their own state work requirements to these two populations. (See table 4.) Other examples of populations served by some states in their separate state programs include parents completing education or training (four states), parents or caretakers with a physical impairment (four states), and families caring for a young child (three states). [Table fragment: percentage of TANF adults involved in work, as defined by state; individual state values are not recoverable in this extraction.] Four states were unable to provide us with information on the percent of adults participating in a state-defined work activity. Delaware was not able to provide us with data on families excluded from time limits or data on its use of the federal 20 percent extension. The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) specifies that up to 20 percent of families receiving TANF assistance in each state can receive an extension to the 60 month federal time limit.
Based on our analysis of survey data, we estimated that the maximum percentage of adults who could receive extensions ranged from 24 to 77 percent among the states, depending on the size of each state’s child-only caseload. For example, Wyoming may extend time limits for up to 92 families, which represents 20 percent of the 458 TANF families in Wyoming. However, because almost three-fourths of its TANF families are child-only, only 119 families with adults would have time limits in place. This means that the state could provide extensions to 92 of the 119 TANF families with adults; this represents 77 percent rather than 20 percent of TANF families with adults. In contrast, in New Mexico, with its much smaller percentage of child-only families (15 percent), the maximum percentage of time-limit extensions that may be provided to families with adults is 24 percent rather than 20 percent. In addition to those named above, the following individuals made important contributions to this report: Elisabeth Anderson, Kara Kramer, and Kim Reniero. Patrick DiBattista and Beverly Ross also provided key technical assistance. Welfare Reform: States Provide TANF-Funded Work Support Services to Many Low-Income Families Who Do Not Receive Cash Assistance. GAO-02-615T. Washington, D.C.: April 10, 2002. Welfare Reform: States Provide TANF-Funded Services to Many Low- Income Families Who Do Not Receive Cash Assistance. GAO-02-564. Washington, D.C.: April 5, 2002. Welfare Reform: States Are Using TANF Flexibility to Adapt Work Requirements and Time Limits to Meet State and Local Needs. GAO-02-501T. Washington, D.C.: March 7, 2002. Welfare Reform: More Coordinated Federal Efforts Could Help States and Localities Move TANF Recipients with Impairments Toward Employment. GAO-02-37. Washington, D.C.: October 31, 2001. Welfare Reform: Challenges in Maintaining a Federal-State Fiscal Partnership. GAO-01-828. Washington, D.C.: August 10, 2001. 
Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO-01-368. Washington, D.C.: March 15, 2001. Welfare Reform: Work-Site-Based Activities Can Play an Important Role in TANF Programs. GAO/HEHS-00-122. Washington, D.C.: July 28, 2000. Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: April 27, 2000. Welfare Reform: State Sanction Policies and Number of Families Affected. GAO/HEHS-00-44. Washington, D.C.: March 31, 2000. Welfare Reform: Assessing the Effectiveness of Various Welfare-to-Work Approaches. GAO/HEHS-99-179. Washington, D.C.: September 7, 1999. Welfare Reform: Information on Former Recipients’ Status. GAO/HEHS-99-48. Washington, D.C.: April 28, 1999. Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients. GAO/HEHS-99-22. Washington, D.C.: February 26, 1999.

Congress created the Temporary Assistance for Needy Families (TANF) block grant to replace the previous welfare program and help welfare recipients transition into employment. To this end, states are required to enforce work requirements, and face financial penalties if a minimum percentage of adults receiving cash assistance do not participate in work or work-related activities each year. This federal participation rate requirement has increased each year, reaching 50 percent for all families in fiscal year 2002, but it can be adjusted for caseload declines. In addition to work requirements, TANF places a 60 month lifetime limit on the amount of time families with adults can receive cash assistance. To receive TANF block grants, each state must also spend a specified amount of its own funds, referred to as state maintenance-of-effort (MOE) funds. The law allows states considerable flexibility to exclude families from work requirements and time limits.
In addition, states may provide cash assistance to families and exempt them from work requirements and time limits by using state MOE in specified ways. States provided cash assistance funded by federal TANF or state MOE dollars to 2.1 million families in 2001. For 736,000 of these families, only the children in the family received assistance. When only children receive the benefits, it is typically because they are cared for by someone who is not their parent or because their parents are noncitizens. The percentage of adults in work or work-related activities varied greatly among the states because of the flexibility allowed. Most states met or exceeded their adjusted required rate in fiscal year 2000. However, the fiscal year 2000 federal participation rates varied, ranging from 6 percent to more than 70 percent. States excluded 154,000 families from federal or state time limits, or 11 percent of the 1.4 million families with an adult receiving cash assistance.
Due to improved battlefield medicine, soldiers who might have died in past conflicts are now surviving, many with multiple serious injuries that require extensive outpatient rehabilitation, such as amputations, burns, and traumatic brain injuries. Prior to the establishment of WTUs, the Army provided care for soldiers recovering from serious medical conditions through Medical Hold and Holdover Units. According to Army documents, the previous system did not have a uniform structure and exhibited varying levels of resources. For example, some units fell under the command of the local military treatment facility, while others fell under the command of the local installation. In addition, the Army did not have a uniform system of staffing, as the relevant command had to resource the units by reassigning personnel from other missions and functions. Further, the increased workload associated with care for these soldiers at the local military treatment facility was not reflected in increased staffing levels or resources. In response to congressional interest and media coverage regarding inadequate and substandard care for Army soldiers recovering from serious medical conditions at the former Walter Reed Army Medical Center, the Army in 2007 developed its Army Medical Action Plan. This plan laid out a series of steps to address the problems highlighted at Walter Reed and other facilities, including the establishment of the WTU program. According to Army documents, the primary differences between the previous system of Medical Hold and Holdover Units and the WTU program were the establishment of (1) a uniform structure and ratios of staff to WTU soldiers by specialty and (2) a program (the Comprehensive Transition Plan) to facilitate the soldiers’ transition either back to the force or to separation from the Army.
The central structure of the WTU program is the Triad of Care model, which includes a primary care manager, a nurse case manager, and a squad leader or platoon sergeant, who direct and supervise the WTU soldiers’ healing process. Along with these Triad of Care key staff, there are other medical and nonmedical providers—such as social workers and occupational therapists—who work together to develop a plan of care specific to each soldier. The plan of care is intended to address the soldiers’ medical needs and support their transition either back to duty or to separation from the Army. This staffing model formalized the relationship among some of the same positions that had existed in the prior Medical Hold and Holdover Units and established ratios for the number of WTU soldiers that were to be under the care of each of the Triad of Care’s key staff. Figure 1 provides position descriptions and staffing ratios for the Triad of Care model. In 2007, the Army began establishing individual WTUs at geographically dispersed locations to serve active-duty, Army Reserve, and Army National Guard soldiers needing this type of assistance. Figure 2 shows trends in the WTU soldier population over time. In line with this reduction in population, the Army reduced the number of WTUs from a high of 45 in 2008 to 25 in 2014, with plans to inactivate another 11 by August 2016. At that point, the Army will have 14 WTUs remaining. The Army uses an annual process known as a Strategic Posture Review to determine the required capability and capacity for each WTU location and for WTUs across the Army enterprise, including determinations of the number of WTUs needed. This process includes estimating the future population of WTU soldiers using a model validated by the Army and then comparing the estimated population to the current capacity of WTUs, based on established ratios of the numbers of WTU soldiers to the numbers of primary care managers, nurse case managers, and squad leaders.
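As a rough illustration of the capacity step in the Strategic Posture Review, the sketch below converts a projected soldier population into required staff counts using soldier-to-staff ratios. The ratio values used here are hypothetical placeholders for illustration only, not the Army's actual Triad of Care ratios (those appear in figure 1).

```python
import math

# Hypothetical soldiers-per-staff ratios for illustration only; the Army's
# actual Triad of Care ratios appear in figure 1 of this report.
HYPOTHETICAL_RATIOS = {
    "primary_care_manager": 200,
    "nurse_case_manager": 25,
    "squad_leader": 10,
}

def required_staff(projected_population, ratios=HYPOTHETICAL_RATIOS):
    """Staff counts needed to cover the projected WTU soldier population,
    rounding up so every soldier falls within an assigned staff member's ratio."""
    return {role: math.ceil(projected_population / per_staff)
            for role, per_staff in ratios.items()}

# Example: the 2015 WTU population of 2,628 soldiers cited in this report.
print(required_staff(2628))
```

Comparing these required counts against a WTU's current staffing is what identifies the excess capacity that the Warrior Transition Command then reviews.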
If excess capacity is identified, the Warrior Transition Command reviews a series of criteria to determine which WTUs should remain open. For example, as part of the most recent review that recommended the inactivation of 11 sites, Army officials stated that they had selected sites to remain open that were co-located with force projection platforms and the best Army medical facilities. These recommendations are ultimately approved by senior Army officials. The ability to reverse these inactivations if demand for WTUs were to rise in the future was a key planning consideration in the decision to inactivate 11 WTUs by August 2016. As a result, the Army has issued policy to maintain control and oversight of former WTU facilities at inactivating locations. For example, the policy requires that the Installation Management Command conduct annual inspections of inactivated WTU facilities to ensure compliance with Army standards for WTU facilities. The same policy calls for the Army’s Installation Management Command to be prepared to return inactivated facilities to WTU program use within 180 days of notification of a unit’s reactivation. While assigned to the WTU program, soldiers are to work toward meeting the goals identified in their individualized Comprehensive Transition Plan. This standardized framework has six main phases: (1) in-processing, (2) goal setting, (3) transition review, (4) rehabilitation, (5) reintegration, and (6) post-transition. As part of the Comprehensive Transition Plan and in addition to attending medical appointments, soldiers might, depending on their needs, undergo physical rehabilitation, meet with behavioral health therapists, participate in adaptive sports and reconditioning programs, participate in internships and training, and prepare to transition out of the WTU program. The last phase, post-transition, occurs after a soldier has left the WTU program.
The point in the WTU soldiers’ rehabilitation process when it can be determined whether or not further medical care will enable a soldier to be found fit for duty is called the medical retention determination point. During their stay, WTU soldiers take part in activities to help them either transition back to the Army or separate from the Army. For WTU soldiers who are returning to the Army, these activities could include taking Army career-related education or training, including college courses and other soldier development classes. For WTU soldiers who are transitioning out of the Army, these activities could include going through the disability evaluation systems, as well as participating in education and readiness opportunities that fit with the soldiers’ career goals. See appendix II for additional information concerning trends in WTU soldiers’ separation from the Army and in WTU soldiers’ lengths of stay in the WTU program. The Army has not assessed the effectiveness of the Triad of Care model, the core structure of the WTU program, which consists of a team of three key staff who provide medical case management. The Army designed the Triad of Care model at a time when WTU soldiers’ diagnoses were primarily for physical conditions. However, since then, the composition of diagnoses has changed significantly. Despite this change, the Army has not assessed its approach for managing soldiers’ medical care. The five WTUs we visited reported having taken ad-hoc measures to help meet the increase in behavioral health needs in the absence of such an assessment. For example, medical officials at each of the five WTUs that we visited told us that they include social workers as a fourth member of the Triad of Care staff. The Army established the Triad of Care model at a time when WTU soldiers’ diagnoses were primarily for physical conditions. Since then, the composition of diagnoses has changed significantly.
Specifically, in 2008, the first full year of the WTU program, about 36 percent of the 12,228 WTU soldiers had a behavioral health diagnosis, while in 2015, over half of the 2,628 soldiers, about 52 percent, had such a diagnosis. According to Warrior Transition Command officials, these diagnoses include post-traumatic stress disorder diagnoses and all behavioral health or psychiatric diagnoses that are not categorized as post-traumatic stress disorder, such as major depression, anxiety disorder, panic disorder, and schizophrenia. Our analysis of these data showed that in 2008, 2,553 of 12,228 WTU soldiers (about 21 percent) entered the WTU program with a behavioral health issue as their primary diagnosis, compared with 830 of the 2,628 WTU soldiers in 2015 (about 32 percent). Moreover, over that same time frame, 4,424 of 12,228 WTU soldiers (about 36 percent) entered the WTU program with a behavioral health issue as their primary, secondary, or tertiary diagnosis, compared with 1,355 of 2,628 WTU soldiers in 2015 (about 52 percent). Warrior Transition Command officials stated that the greater prevalence of behavioral health issues is likely related to the Army’s efforts to de-stigmatize behavioral health issues. They told us that, as a result, it has become acceptable for soldiers to notify medical personnel when these issues arise, increasing the number of soldiers identified as having, and coming forward for assistance with, behavioral health diagnoses. While the Army conducts reviews and inspections of the WTU program and WTUs, it has not assessed the effectiveness of the Triad of Care model in light of the change in the composition of diagnoses. Warrior Transition Command senior officials told us that validation reviews of the population and staffing models have been conducted approximately every 3 years and that periodic unit inspections have also been conducted. Validation reviews. The U.S.
Army Manpower Analysis Agency conducts validation reviews of the Warrior Transition Command’s population and staffing models within the WTUs. These reviews examine, among other things, the staffing ratios for different types of personnel (including for members of the Triad of Care). Unit inspections. The Warrior Transition Command conducts inspections of each WTU under the Organizational Inspection Program; these inspections are to assess numerous aspects of each WTU’s operations and provide an avenue for other concerns to be raised. These inspections generally include a pre-questionnaire to WTU soldiers, their family members, and staff; a process evaluation to determine a WTU’s adherence to policies and procedures; and an after-action review with a WTU’s leadership and staff to rate the inspection process. According to Warrior Transition Command officials, these reviews and inspections should indicate whether changes are needed to the Triad of Care model. However, we found that while both validation reviews and unit inspections assess important aspects of the Army’s approach to care, neither has specifically assessed whether changes are needed to the Triad of Care model to address, for example, the greater prevalence of WTU soldiers’ behavioral health needs. The Army designed the Triad of Care model at a time when the preponderance of injuries among WTU soldiers were physical. Since that time, the Army has not assessed whether the significant changes to the WTU soldier population’s diagnoses have affected its approach for managing soldiers’ medical care. Federal standards for internal control state that management should analyze relevant risks associated with achieving a program’s objectives. The increasing prevalence of behavioral health diagnoses in WTU soldiers and the resulting increase in soldiers’ need for behavioral health services is one such relevant risk.
In the absence of an assessment by the Warrior Transition Command, officials from each of the five WTUs we visited told us that they have taken various ad-hoc steps to meet the challenges posed by the increasing prevalence of behavioral health diagnoses. For example, medical officials at each of the WTUs we visited told us that they include social workers as an additional member of the Triad of Care model in response to the need for specialized medical case management. While social workers have always played a role in the WTUs’ interdisciplinary team, officials at these WTUs told us that the greater prevalence of behavioral health diagnoses among soldiers warranted a greater role for social workers, who serve as the WTUs’ behavioral health experts. Further, senior officials at several of these WTUs told us that they now refer to their model as the “Quad of Care” or “Square of Care” in response to the social workers’ inclusion. In addition, at four of the five WTUs, social workers told us that they have been directly providing certain types of behavioral health care to soldiers, such as therapy sessions, in part because obtaining behavioral health appointments at the local military treatment facility can be difficult. Medical officials from one of these four WTUs also told us that, in addition to providing the social workers’ therapy sessions, the unit borrows a psychiatrist from the local military treatment facility 2 days each week to provide behavioral health care. At the remaining WTU, the local military treatment facility contracted for a full-time psychiatrist in order to meet soldiers’ need for behavioral health services. While these local adaptations represent efforts to meet an immediate need, they are not supported by analysis of whether the Triad of Care model must change to meet the increasing behavioral health needs of the WTU soldier population. 
For example, the differing approaches of various WTUs, with some making greater use of social workers and others turning to psychiatrists, merit review and assessment by the Army to determine which approach best benefits WTU soldiers. Assessing the Triad of Care model in light of changes in, for example, the prevalence of behavioral health conditions would position the Army to better determine how to meet WTU soldiers’ medical needs. The Army has established selection processes and updated its selection criteria to require additional information about potential squad leaders and platoon sergeants for its WTUs, but the Army is not monitoring full adherence to policy, specifically the requirement to interview candidates for these positions. Further, while the Army has made improvements to its training program, the program does not incorporate a post-training assessment of the application of training to the work environment. In addition, the Army has not developed a plan that explains how to meet any potential increases in demand for staff, if needed, at the WTUs. The Army has established processes to select WTU squad leaders and platoon sergeants, and has updated its selection criteria to require greater experience and additional screening. WTU squad leaders and platoon sergeants are selected by various methods based on whether they are active duty or members of the reserve components, and the senior Commander at the installation is the final approval authority for all assignments. WTU positions are designed to be representative of the entire Army, with a mix of military occupational specialties and all three active and reserve components. Prior to selection, squad leaders and platoon sergeants must meet minimum grade, experience, and training qualifications.
Squad leader and platoon sergeant positions can be filled by personnel sourced through one of the following: the installation where the WTU is located; the Human Resources Command; or, for reserve component soldiers, the Tour of Duty system. Senior Commanders can identify personnel from the installation where the WTU is located to interview for squad leader and platoon sergeant positions. If senior Commanders are unable to identify staff at the installation, a request is sent to the Human Resources Command to identify personnel to be screened and sent to the WTU program. Using the Tour of Duty system, reserve component personnel can apply for open squad leader and platoon sergeant positions, and local Commanders and their selection panels interview and select the best-qualified candidates. Figure 3 shows the three sources and the processes for filling the squad leader and platoon sergeant positions. In November 2015, the Army updated its selection policy for WTU squad leaders and platoon sergeants. The November 2015 policy increases the minimum grade and experience requirements and identifies minimum training requirements. According to Warrior Transition Command officials, the updated requirements give WTU squad leaders and platoon sergeants more experience to draw from when working with WTU soldiers. In addition, the policy identifies squad leaders and platoon sergeants as positions of significant trust and authority. Through this designation, the Army requires additional screening, including reviews of records concerning police encounters, substance abuse, sex offender status, and behavioral health, as well as records held by Army personnel offices. The updated policy also requires WTU squad leaders and platoon sergeants to have previously served successfully in a grade-equivalent leadership position and to meet minimum grade and training requirements.
For example, potential squad leaders are required to hold the minimum grade of E-6 and to have completed the Advanced Leader Course, and potential platoon sergeants are required to hold the minimum grade of E-7 and have completed the Senior Leader Course. Table 1 lists the WTU program selection criteria for squad leaders and platoon sergeants before and after November 2015. Although the Army has updated its policy for selecting squad leaders and platoon sergeants, the Warrior Transition Command is not monitoring full adherence to this policy, specifically the requirement to interview candidates for these positions. According to the policy effective before November 2015, Commanders or their staff selection panels are required to review the records of candidates and interview them to validate whether a candidate possesses the required skills and attributes to work as WTU staff. This provision did not change with the new regulation effective November 2015. However, the Army does not have a mechanism in place to monitor whether the interviews are conducted. According to Warrior Transition Command guidance, the WTU selection process for squad leaders and platoon sergeants is important to ensure the selection of individuals who are best suited for the position. Warrior Transition Command officials stated that squad leader and platoon sergeant positions within the WTU are categorized as “broadening positions” because they reflect responsibilities and duties outside of their Army military occupational specialty. Candidates for these positions are drawn from a mix of Army occupations, such as infantry or transportation corps, and the selection process, including interviews, is intended to ensure the suitability of the staff selected for the position. Warrior Transition Command guidance recommends using the structured interview process when evaluating candidates to significantly improve the likelihood of selecting good candidates. 
A structured interview process involves the WTU Commander or a staff selection board asking the same questions of each candidate and scoring the answers using a pre-developed rating scale. The guidance explains that interviewing provides Commanders and their selection panels the ability to learn more about the candidate and gives candidates the opportunity to demonstrate their responses to situational job scenarios, such as supervising WTU soldiers with behavioral health issues. The Warrior Transition Command, which is responsible for oversight of the WTU program and operations, has not monitored whether units adhere to the requirement to interview squad leaders and platoon sergeants for WTU positions. The Warrior Transition Command directs the Army Human Resource Action Branch to conduct a quarterly analysis of a random sample of squad leader and platoon sergeants’ records to validate that candidates meet, for example, minimum grade and experience requirements. However, this quarterly analysis does not include validation of whether the selection process was followed as required, including whether the candidate was interviewed. At four of the five WTUs we visited, squad leaders and platoon sergeants told us that they were not interviewed prior to assuming their positions. In contrast, senior officials at three of the five WTUs we visited stated that they believed that interviews for squad leaders and platoon sergeants are conducted. However, in the absence of a mechanism to monitor this, the Army does not have assurance that the interviews are being conducted. When questioned why some interviews may not be conducted, Warrior Transition Command officials stated that the interviews were encouraged, which stands in contrast to the stated policy. Federal internal control standards state that management should design control activities to achieve objectives and respond to risks.
Although both the regulations effective as of November 2015 and before November 2015 state that Commanders or their staff selection panels will interview and review the records of candidates to validate whether a candidate possesses the required skills and attributes to work as WTU squad leaders and platoon sergeants, there is no internal control procedure to monitor full adherence to this requirement. By not monitoring full adherence to this requirement, the Army does not have assurance that squad leaders and platoon sergeants being selected are well suited to carry out the sensitive mission of the WTUs. The Army Medical Department Center and School has made efforts to improve WTU training for squad leaders and platoon sergeants, but the program does not incorporate a post-training assessment of the application of training to the work environment. At three of the five sites we visited, WTU squad leaders and platoon sergeants, along with other WTU staff, expressed concerns regarding squad leader and platoon sergeants’ training. While the Army has implemented several practices to incorporate feedback from participants and other WTU professionals into its training program for squad leaders and platoon sergeants, these efforts may not fully address the concerns. Currently, according to school officials, the Army Medical Department Center and School offers a 3-week residential training course that squad leaders and platoon sergeants must take within 90 days of assuming their duties. Prior to attending the residential course, staff must complete a staff orientation distance learning course. In the first week of residential training, squad leaders and platoon sergeants are required to take the Cadre Resilience Course, designed to assist those caring for wounded, ill, and injured soldiers and their families.
According to Army Medical Department Center and School officials, the second and third weeks of training are designed specifically for the WTU and include information related to, for example, communication skills, behavioral health issues, the Comprehensive Transition Program, the case management system, and role-playing activities. Army Medical Department Center and School officials stated that the vast majority of squad leaders and platoon sergeants come to the course with little to no knowledge of the WTU concept. Army Medical Department Center and School officials also stated that the training is designed to provide a foundation for squad leaders and platoon sergeants to be able to perform their duties, and is not intended to provide them with expert-level proficiency. While the WTU training program is extensive, WTU squad leaders and platoon sergeants at three of the five sites we visited stated that they were not sufficiently prepared for their positions after taking the required training. Specifically, squad leaders and platoon sergeants stated that their training did not address the actual requirements of their positions, such as the use of data systems and other day-to-day responsibilities. Squad leaders and platoon sergeants expressed frustration with their training, noting that there is no military occupational specialty similar to their positions, and that their roles and responsibilities were unfamiliar and dissimilar to anything in their prior Army experience. Candidates for these positions are drawn from a mix of Army occupations, such as infantry or transportation corps, and new squad leaders and platoon sergeants may have no background working with soldiers with issues typical of the WTU population, such as behavioral health issues. 
Other WTU staff at these sites, such as nurse case managers, social workers, and a WTU Commander, stated that they also believe that current training does not sufficiently prepare squad leaders and platoon sergeants for their duties. For example, they noted that in some instances squad leaders and platoon sergeants did not have a sufficient understanding of behavioral health issues to be charged with responsibility for individuals having these issues. Our prior work on assessing strategic training efforts summarizes attributes of effective training and development programs, including those related to the evaluation of agency training and development efforts. Such attributes include how the agency incorporates evaluation feedback into the planning, design, and implementation of its training and development efforts. According to school officials, the Army Medical Department Center and School currently uses end-of-course surveys and feedback from focus groups, WTU leadership, and staff to collect information on the effectiveness of the courses offered and make changes to course curricula. In addition, Army Medical Department Center and School officials stated that the Warrior Transition Command provides feedback from its Organizational Inspection Program regarding training. Warrior Transition Command officials stated that they send questionnaires prior to their Organizational Inspection Program and ask questions during informal feedback sessions regarding the effectiveness of squad leader and platoon sergeant training. These officials stated that they believe their current approach provides sufficient feedback to improve the training program. While these efforts represent attempts to improve training through feedback, they do not assess the application of training to the work environment.
Our prior work emphasizes that agencies should use analytical approaches appropriate for assessing training and development programs, such as assessing the application of training to the work environment. Application of training to the work environment assesses expected changes in behavior that trainees should exhibit on the job because of the training. Army Medical Department Center and School officials stated that the WTUs do not evaluate training of squad leaders and platoon sergeants through assessments of how well they are able to apply their training to the work environment. These officials noted that they have previously proposed such an assessment to the Warrior Transition Command, as the Warrior Transition Command is responsible for post-training assessments of staff. Army Medical Department Center and School officials noted that 90 days post-training would be an opportune time to collect information on the application of squad leaders and platoon sergeants’ training to the work environment. However, no action has been taken to date. Without information that could be obtained from conducting post-training assessments, the Army Medical Department Center and School may miss a valuable opportunity to improve its programs by incorporating useful information concerning the practical application of training. Similarly, the Warrior Transition Command and WTUs may miss a valuable opportunity to further assess the performance of the squad leaders and platoon sergeants for the benefit of WTU soldiers. The Army has not developed plans for increasing its WTU staff levels in the event of increased demand for WTUs. A key planning consideration in the decision to inactivate 11 WTUs by August 2016 was the Army’s ability to reverse these changes in the event that the demand for the Army’s WTU program were to increase.
The Army’s guidance highlights the importance of reversibility, and establishes a number of relevant policies, for example, to maintain control of inactive WTU facilities. According to the Warrior Transition Command, the enduring 14 WTUs, if staffed by a full complement of 1,475 staff, could manage up to 4,400 WTU soldiers. With additional staff, the facilities at the enduring WTUs could support up to 8,100 WTU soldiers. In line with this projection, the Army has outlined plans for “expansion companies” at its enduring locations, with the exception of Walter Reed National Military Medical Center and Brooke Army Medical Center, that could operate on a temporary basis for a period of 2 years, with an assessment after the first 9 months to determine whether the company should be added to the unit’s permanent staffing document. Each expansion company would require 55 WTU personnel and could manage the care of up to 200 WTU soldiers. While these plans represent positive steps toward planning for a potential increase in WTU numbers, the Army has not addressed how it would staff the expansion companies at these enduring WTUs, or at any of the 11 inactivating WTUs, if the need arises. Warrior Transition Command officials stated that any change in the need for staff would be gradual. In addition, the Warrior Transition Command’s population projection model takes account of deployment cycles and could possibly anticipate a spike in WTU admissions due to an increased operational tempo. However, upswings in deployments and operational tempo could create spikes in WTU soldier admissions, possibly resulting in the need to expand WTU staff at a pace that is greater than the Army’s current expectations. Regardless of the pace of any increase, the Army could face challenges in its efforts to staff these units with appropriately selected and screened personnel in a timely manner. 
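The scale of any staffing expansion can be roughed out from the figures above. The following calculation is an illustrative sketch only: it assumes capacity would be added in whole expansion companies, which the Warrior Transition Command projections do not state, and uses only the staffing and capacity figures reported here.

```python
import math

# Figures reported by the Warrior Transition Command
ENDURING_STAFF = 1475         # full complement across the 14 enduring WTUs
BASE_CAPACITY = 4400          # WTU soldiers manageable at full complement
MAX_FACILITY_CAPACITY = 8100  # facility capacity with additional staff
COMPANY_STAFF = 55            # personnel per expansion company
COMPANY_CAPACITY = 200        # WTU soldiers manageable per expansion company

# Illustrative: additional capacity needed, rounded up to whole companies
extra_soldiers = MAX_FACILITY_CAPACITY - BASE_CAPACITY
companies_needed = math.ceil(extra_soldiers / COMPANY_CAPACITY)
extra_staff = companies_needed * COMPANY_STAFF

print(extra_soldiers, companies_needed, extra_staff)
```

Under these assumptions, reaching the facility maximum would require on the order of 19 expansion companies and roughly 1,000 additional personnel, each of whom would need to be screened, selected, and trained, which illustrates why the pace of any increase matters.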
As previously noted, the Warrior Transition Command recently updated its selection requirements for squad leaders and platoon sergeants. As the Warrior Transition Command implements requirements for squad leaders and platoon sergeants to undergo background checks and behavioral health records checks, these changes could increase the time required to select WTU staff. The Warrior Transition Command has emphasized the importance of the screening and selection process, noting that it is central to the integrity of the WTU program and the duty to care for WTU soldiers. In addition, WTUs are staffed by a number of other professionals, including primary care managers, nurse case managers, social workers, and transition coordination specialists, and hiring well-qualified candidates for these positions will also take time. For example, after WTU nurse case managers attend a 3-week training course, they must then complete a 4-week preceptorship at their assigned WTU before they can manage a normal caseload of 20 WTU soldiers. Army guidance notes the possibility of hiring term or temporary employees to address any gaps. However, such an approach could negatively affect the continuity of care, a central principle of the Army’s approach to WTU care, because of turnover in key WTU staff. To handle any sudden increase in demand, WTUs also could potentially increase their ratios of squad leaders, nurse case managers, or primary care managers to WTU soldiers to accommodate the additional demand, but this too could have negative effects due to the heavier case workloads that would result for these individuals. Warrior Transition Command officials stated that they regularly monitor the ratios of these team members to WTU soldiers, and the regulation underscores the relationship between adhering to these ratios and the quality of care provided.
Further, officials such as squad leaders, platoon sergeants, social workers, and nurse case managers at the five WTUs we visited stated that the current ratios may be too high. At one site we visited, officials have changed local policy to lower the ratio of squad leaders to WTU soldiers. Federal internal control standards emphasize the need for control activities, such as the management of human capital, to maintain a continuity of needed skills and abilities. While the Army’s recently issued policy does not require the Warrior Transition Command to develop a plan for increasing its staff levels in response to future demand, the policy states that the Warrior Transition Command should “be prepared to reactivate previously inactivated WTUs.” Senior Warrior Transition Command officials told us that this is an implied requirement for a plan related to staffing. However, they stated that they have not yet taken steps to address this issue. Absent such a plan, the Army does not have assurance that it can, with limited notice, expand its staffing to effectively carry out the WTU mission. The Army has implemented a structured process for reviewing the eligibility of soldiers to be admitted to WTUs, but it does not track instances in which Commanders have made exceptions to these criteria for active-duty soldiers. In addition, Warrior Transition Command and Army Reserve officials stated that they came to an agreement that admittance criteria for members of the reserve component will not change until a WTU-alternative program is expanded to the Army Reserve. However, the Army has not yet examined the costs and benefits of expanding this alternative program relative to the current system. According to Warrior Transition Command officials, the Army does not track instances in which individual WTUs have made exceptions to the Army’s WTU admittance criteria for active-duty soldiers. 
While the Army has established a structured process for reviewing the eligibility of soldiers to enter WTUs, Warrior Transition Command officials stated that the senior mission Commander at the relevant installation can choose to admit soldiers to WTUs outside of this process by approving an exception to the eligibility criteria. Army regulations state that for admittance into the WTU program, active-duty component soldiers must either (1) need care requiring 6 months or more of treatment or (2) have a significant behavioral health issue that presents a danger to themselves or others. The Army’s structured process for reviewing the eligibility of soldiers to enter WTUs requires the WTU Commander, local hospital Commander, and senior mission Commander to compare potential WTU soldiers’ medical case history with the WTU admittance criteria and decide whether soldiers are eligible. Warrior Transition Command officials stated that senior mission Commanders are able to approve exceptions to the criteria under a variety of circumstances, such as the need for soldiers stationed overseas to process disability status through DOD’s Integrated Disability Evaluation System. Officials at the Warrior Transition Command could not provide data on how often exceptions to WTU admittance criteria were approved. Officials noted that admitting soldiers who do not meet admittance criteria can be appropriate in some situations. For example, the Department of Defense and Veterans Affairs system for assessing individuals’ medical fitness for service and level of service-related disability, known as the Integrated Disability Evaluation System, is available only to soldiers in the continental United States, and soldiers stationed overseas can be transferred to a military treatment facility in the continental United States with an associated WTU to complete this process. WTU soldiers who will not be returning to duty normally go through this process while in the WTU.
Officials at our site visits expressed concern about instances in which soldiers who were far along in the Integrated Disability Evaluation System process, and near the point of separation from the Army, were assigned to a WTU to stay for fewer than 6 months, sometimes for fewer than 30 days. Because a soldier in the late stages of the Integrated Disability Evaluation System process is unlikely to remain in the Army for 6 months or more, such a soldier cannot meet the standard admittance criterion of needing 6 months or more of care. At three of the five sites we visited, a variety of WTU officials, such as nurse case managers, squad leaders, and social workers, among others, expressed concern about this issue. Officials at these locations noted that this was not the best use of WTU resources, and that such soldiers did not have the necessary time to benefit from the medical and career opportunities that the WTU program provides. According to Warrior Transition Command officials, they do not track instances in which individual WTUs make exceptions to the Army’s admittance criteria because application of the eligibility criteria is the responsibility of the senior mission Commander on the relevant installation. Standards for Internal Control in the Federal Government state that management should design control activities to achieve objectives. These control activities might include performing top-level reviews, for example, to ensure that performance is consistent with the WTU program’s goals. However, by not tracking this information, the Army does not know how frequently such exceptions are made and cannot ensure the best use of resources. The Army is planning to expand an alternative WTU program to the Army Reserve, but it has not yet examined the costs and benefits of this expansion.
Specifically, the Army has not yet estimated the costs and benefits of expanding the Reserve Component Managed Care program, which currently treats only National Guard soldiers with low-acuity, low-risk, non-complex medical needs. In addition, the Army has not compared the cost of expanding this program with maintaining the status quo for soldiers with similar low-acuity, low-risk, non-complex medical needs. Army Reserve officials have confirmed their intention to participate in the program, but have not yet established a timeline for its introduction. Currently, active-duty soldiers must need care lasting 6 months or longer to be admitted to a WTU. In contrast, reserve component soldiers must need care lasting 30 days or longer. If reserve component soldiers meet this threshold, they can be considered for admittance to a WTU Community Care Unit if their medical needs are low-acuity, low-risk, and non-complex. Warrior Transition Command and Army Reserve officials stated that expansion of the Reserve Component Managed Care program to the Army Reserve would be necessary if the Warrior Transition Command were to change current WTU admittance policy, specifically by applying the more stringent criteria for active-duty soldiers to members of the reserve components. However, during the course of our review, Warrior Transition Command officials stated that they came to an agreement with the Army Reserve that admittance criteria will not change until the Reserve Component Managed Care program is expanded to the Army Reserve. A senior Army Reserve official stated that the lower threshold for members of the reserve components stems from requirements under law and DOD instructions for such soldiers to remain in active federal service for the purposes of disability evaluation or medical treatment.
Senior officials at the Warrior Transition Command noted that active-duty soldiers needing fewer than 6 months of care can, instead of being assigned to a WTU, remain with their line unit and receive medical care in that setting. Meanwhile, reserve component soldiers must demobilize as their active-duty orders end and, while they can receive DOD-funded medical care for a period of time, cannot continue to receive their active-duty pay and other benefits. The 30-day treatment threshold therefore allows those soldiers to enter the WTU, whereas active-duty soldiers with similar short-term care needs can remain with their unit. Table 2 summarizes the admittance criteria for the WTU program and for the WTU-alternative program. Warrior Transition Command and Army Reserve officials told us that were the more stringent active-duty criteria to be applied to members of the reserve components, those Army Reserve soldiers needing fewer than 6 months of care would not be eligible for the WTU program, and would not be able to receive their active-duty pay while receiving short-term medical treatment. According to Warrior Transition Command and Army Reserve officials, access to active-duty pay is important because these soldiers may not be medically able to return to civilian employment while receiving medical care. Warrior Transition Command officials acknowledged that this would put Army Reserve soldiers at a disadvantage compared with active component soldiers. National Guard soldiers, meanwhile, could potentially access the Reserve Component Managed Care Program, which provides an alternative to WTUs for soldiers needing 179 days or fewer of low-acuity, low-risk medical care while on active-duty orders, entitling soldiers to active-duty pay and benefits.
Army Reserve or National Guard soldiers with low-acuity, low-risk, non-complex conditions can also be assigned to a WTU Community Care Unit, in which soldiers are not physically located at a WTU, but receive remote medical case management from their assigned WTU and receive medical care in their local area, either in a military treatment facility or from a provider in the TRICARE network. To be eligible, soldiers must not be in need of complex medical case management, and must meet behavioral health-risk standards. According to Army data, as of February 2016, there were 155 Army Reserve soldiers in 11 Community Care Units across the United States. Army Reserve officials told us that they are in the early planning stages of considering Reserve Component Managed Care expansion, and that no cost estimates of such an expansion have yet been developed. As in the National Guard, the active-duty Army would fund active-duty pay and medical care costs for the reservists in this program. However, officials told us that the Army Reserve would incur costs because of the need to award a contract for nurse case managers for the program. Unlike the National Guard, the Army Reserve does not currently have a large network of contracted nurse case managers charged with improving medical readiness. While the National Guard is able to utilize its existing nurse case manager contract to service the program, officials stated that the Army Reserve would incur new costs in procuring these services. Further, Army officials have not yet articulated why expansion of the Reserve Component Managed Care program is preferable to any alternatives, such as the continued use of WTU Community Care Units, especially in light of the alternative program’s increased costs. One official told us that this may stem from a desire to limit the time that reserve component personnel spend in the WTU program.
Our Business Process Reengineering Assessment Guide states that when considering program changes, officials should develop a performance-based analysis of the benefits and costs for each alternative, followed by a formal business case analysis making the case for a change. In this case, such an analysis could compare the costs and benefits of expanding the Reserve Component Managed Care program to the Army Reserve with those of alternatives, such as the option of continuing the current system of Community Care Units. As noted above, the Army Reserve is in the early stages of considering Reserve Component Managed Care expansion and no cost estimate has yet been developed. Senior officials from the Warrior Transition Command and the Army Reserve stated that expansion of the Reserve Component Managed Care program requires fewer resources than WTUs and that soldiers in this program, on average, have shorter lengths of stay than soldiers in WTUs or Community Care Units. However, without a cost-benefit analysis of such factors, the Army Reserve may continue with plans to expand the Reserve Component Managed Care program and incur significant costs without clearly articulated benefits. The Army has several methods that WTU soldiers can use to register a complaint or express a concern about the WTU program, medical care, or other issues. Although Army Medical Command oversees five complaint methods available to WTU soldiers, it does not have an approach to ensure that the Warrior Transition Command, which is charged with oversight of the WTU program, has access to all of this information. Standards for Internal Control in the Federal Government state that information should be communicated to management and others who need it in such a way that they can carry out their responsibilities. However, absent information on potential challenges with the WTU program, the Warrior Transition Command cannot fully carry out its oversight and policy development responsibilities.
The Army has the following methods, among others, by which WTU soldiers can register a complaint:
- the WTU chain of command;
- WTU town hall meetings, to be held at least quarterly at the Commander’s discretion in order for the Commander to address the WTU soldiers and listen to their concerns;
- the local Army ombudsman, who is independent from WTU command and located at most WTU sites;
- a toll-free hotline, including the Wounded Soldier and Family Hotline; and
- the WTU chaplain, a confidential source within the command.
The Army Medical Command, with direct purview over the Warrior Transition Command and the WTUs, oversees these five complaint methods. Local WTU Commanders, who coordinate with military treatment facility Commanders, manage their respective town hall meetings and are part of the WTU chain of command. The Army Medical Command’s Medical Assistance Group oversees both the hotline and the Ombudsman program. The Regional Medical Commands, which report to Army Medical Command, provide chaplain support to the WTUs. The chaplain corps adheres to standards of privileged and confidential communications but, according to chaplains contacted throughout our review, chaplains can provide general and trend information to Commanders. According to the Army’s Warrior Transition Regulation, the Warrior Transition Command provides centralized oversight, guidance, and advocacy in support of WTU soldiers and their families, including policy development and oversight of the WTUs’ daily operations. In addition, the regulation states that WTU soldiers and their families are to be assisted through effective collaboration efforts, proactive communication, responsive policy, and program oversight.
However, information from three of the five complaint methods (chain of command, town hall meetings, and chaplains) may not be shared with the Warrior Transition Command, which therefore may not be able to sufficiently identify and address systemic issues. Warrior Transition Command officials stated that they do not generally receive information about complaints from these complaint methods and that complaints are mostly handled by the local chain of command, which has the responsibility to investigate and resolve them. WTU staff at four sites we visited said they handle complaints locally as much as possible, and infrequently contact senior leadership or the Warrior Transition Command with complaint information. In interviews with WTU staff at three of the sites we visited, officials stated that town halls may provide useful information to local leadership and referrals to other methods, but information from them typically is not forwarded to senior leadership. If complaints are expressed and resolved locally and information about them is not communicated further, senior leaders remain unaware of them, and issues present at multiple WTUs might not be identified as systemic. In addition, at least one local senior official at a site we visited expressed a lack of confidence in the town hall system, stating that it yielded no useful information. Further, while Army chaplains are bound by confidentiality rules concerning communication between them and any soldier or family member, chaplains can share general information with local Commanders and regional medical commands, such as data on trends in the types of issues discussed, and such information can provide valuable insight into the issues concerning WTU soldiers. The Warrior Transition Command does receive consistent information from some complaint methods, specifically from the two managed by the Ombudsman program.
According to its operating procedures, the Ombudsman program provides information on issues reported by soldiers to local ombudsmen and the toll-free hotline through weekly reports to the Army’s Office of the Surgeon General and the Warrior Transition Command, as well as daily reports on WTU-related issues to the Warrior Transition Command’s Commander and staff. However, the Ombudsman program’s complaints information covers only two of the complaint methods identified by the Warrior Transition Command and does not present complete information about possible concerns raised by WTU soldiers. For example, ombudsman data would not include complaints from town hall meetings or the WTU chain of command unless WTU soldiers had repeated their complaints to the local ombudsman or toll-free hotline. While the Warrior Transition Command receives information from the two methods managed by the Ombudsman program, it does not consistently receive information from the other three methods of expressing complaints. Standards for Internal Control in the Federal Government state that information should be communicated to management and others who need it in such a way that they can carry out their responsibilities. The Warrior Transition Command, as the entity that provides strategic oversight and policy development support for the WTU program, provides a focal point for coordination and support for the program, and requires relevant information from the complaint methods to inform its oversight responsibilities. Quality information and effective communication about relevant complaints from all WTUs would enable the Warrior Transition Command and the Army to identify and implement necessary policy or program changes. However, the Army does not have an approach to ensure that relevant complaint information from the various methods it provides to WTU soldiers is communicated to appropriate senior leadership, such as the Warrior Transition Command.
In addition, the Army has not clarified which complaints information is important for WTU program management and oversight. Warrior Transition Command officials stated that they engage on a policy level when there is a trend or ongoing problem, but did not explain how the command would identify such systemic issues absent relevant information from available complaint methods. Given this, the Army cannot ensure that the Warrior Transition Command has the relevant, reliable, and timely information necessary to achieve its objectives for oversight of and policy development for the WTU program and may not be able to identify and address potential systemic issues concerning the WTUs. The Army has taken the following steps that signify its commitment to strengthening its WTU program and not repeating some of the mistakes that led to the crisis of care at the former Walter Reed Army Medical Center: WTUs are staffed with standard ratios of WTU soldiers to providers; soldiers’ transition through the WTUs follows a structured process; minimum standards for squad leaders and platoon sergeants are being strengthened; and WTU soldiers have a range of methods for expressing complaints. These changes represent efforts to develop policy for the WTU program to help ensure quality of care and soldiers’ trust in the program. However, the Army has not assessed how fundamental aspects of the WTUs, such as the Triad of Care model, are affected by the changing composition of diagnoses for WTU soldiers, particularly the increasing prevalence of behavioral health diagnoses. Additionally, in areas including screening requirements for WTU staff, WTU soldiers’ admittance criteria, and the complaints process, the Army has yet to implement management controls to ensure that its policies are maintained and implemented in practice.
As the Warrior Transition Command moves toward its transition to becoming a directorate within the Army Medical Command, it will be important that its successor organization increase oversight of the program to maintain the commitment to high-quality care for WTU soldiers. To increase oversight of the Army’s Warrior Transition Unit program, we recommend that the Secretary of the Army direct the Army Surgeon General to take the following six actions:
- Assess the Triad of Care model’s effectiveness in light of the changes in WTU diagnoses and take the appropriate action.
- Exercise oversight responsibility to track full adherence to selection processes for squad leaders and platoon sergeants, including the requirement to conduct interviews for these positions.
- Develop a mechanism to conduct post-training assessments of squad leaders and platoon sergeants’ application of training to the work environment and incorporate the results into the training program.
- Develop plans to adjust staff levels, if needed, to accommodate a potential future surge in demand.
- Establish a process that assigns oversight responsibility for tracking instances in which Commanders make exceptions to WTU entrance criteria so that the Army Surgeon General is aware of the extent to which Commanders’ decisions are consistent with program goals.
- Develop and implement an approach and associated procedures for providing senior leadership, such as the Warrior Transition Command, with complaints information concerning the WTU program and WTU soldiers.
To help ensure the best use of resources for managing the medical care of soldiers recovering from serious medical conditions, we recommend that the Secretary of the Army direct the Chief of the Army Reserve, in conjunction with the Army Surgeon General, to take the following action: Develop an analysis that compares the costs and benefits of maintaining the current system of Community Care Units with the costs and benefits of expanding the Reserve Component Managed Care program. In written comments on a draft of this report, DOD concurred with our seven recommendations to increase oversight of the Army’s Warrior Transition Unit program and to help ensure the best use of resources for managing the medical care of Army Reserve soldiers recovering from serious medical conditions. DOD’s comments are reprinted in appendix III. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Deputy Secretary of Defense, the Secretary of the Army, the Army Surgeon General, and the Chief of the Army Reserve. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To address the objectives for this review, we reviewed policies governing the Army’s Warrior Transition Unit (WTU) program, including the Army Medical Action Plan; analyzed program documents and data; and interviewed officials from the Warrior Transition Command, the Office of the Army Surgeon General, and other Army offices with responsibilities for medical and personnel management. 
We conducted site visits to 5 of the WTUs in existence during our review, which we selected based on the mix of active-duty and reserve components, the number of complaints reported to the ombudsman, and geographic dispersion. The sites we visited were Walter Reed National Military Medical Center in Bethesda, MD; Joint Base San Antonio in San Antonio, TX; Fort Hood in Killeen, TX; Fort Carson in Colorado Springs, CO; and Fort Eustis in Newport News, VA. At each site, we conducted interviews with squad leaders, command staff, nurse case managers, social workers, and other professionals. The results of these site visits are not generalizable but provide useful information on WTU program operations and relevant issues. To determine the extent to which the Army has assessed the effectiveness of the Triad of Care staffing model for managing WTU soldiers' care, we compared policy and other documents concerning the use of the Triad of Care model with federal internal control standards, which emphasize the need for management to identify and analyze relevant risks associated with achieving defined objectives. To identify changes in the WTU population over time, we analyzed aggregate data from the Warrior Transition Command on WTU soldiers' medical diagnoses upon entry into the WTU program from June 1, 2007, through December 2015. We did not request or have access to any individual WTU soldier's information, including an individual soldier's medical records or diagnoses. We did not present data for soldiers who entered a WTU in 2007 because those data did not represent a full year of the program. We found these data to be sufficiently reliable to show the prevalence of behavioral health diagnoses at the time of a soldier's entry into the WTU program.
We made this determination based on a data reliability questionnaire completed by Army officials, logic tests of the data, and our conversations with Warrior Transition Command officials about how the data were captured, stored, and checked for accuracy. We also reviewed relevant documentation related to the Warrior Transition Command's most recent unit inspections conducted at each of the five individual WTUs we visited (one WTU inspected in 2011, one WTU inspected in 2014, and the remaining three inspected in 2015), and we observed one of these inspections in process to determine what types of information were analyzed by the Warrior Transition Command. We spoke with Warrior Transition Command officials about the Triad of Care model, changes in the WTU soldier population, and their WTU inspections. We also interviewed officials at each of the five WTUs we visited about the Triad of Care model and the management of their soldiers' care. To determine the extent to which the Army has established processes to oversee its WTU personnel selection, assess the training of these personnel, and adjust staff levels, we interviewed officials from the Warrior Transition Command, the Human Resources Command, and officials at various levels at each of the sites we visited concerning their views of the selection of squad leaders and platoon sergeants. We also interviewed officials at the Army Medical Department Center and School regarding the WTU training program. We compared current and past Army policies regarding the selection and training of squad leaders and platoon sergeants and reviewed the content of our discussions with squad leaders and platoon sergeants at the various sites we visited.
We reviewed federal internal control standards, which state that management should demonstrate a commitment to recruit, develop, and retain competent individuals, and our prior work, which summarizes the attributes of effective training and development programs, including the need to ensure that training goals and strategies are aligned with organizational goals. Though WTU policies address issues related to staff in various roles, collectively referred to as "cadre," we focused on the selection and training of squad leaders and platoon sergeants based on the content of our discussions with officials at each of our site visits. We also discussed the Army's ability to adjust WTU staff levels with officials from the Warrior Transition Command. We reviewed the Army's policy in this area and the principle in federal internal control standards that agencies need to demonstrate a commitment to competence through succession and contingency plans and to establish control activities, such as for the management of human capital, to maintain a continuity of needed skills and abilities. To determine the extent to which the Army has assessed adherence to WTU program admittance criteria and the impact of any changes to these criteria for the active-duty and reserve components, we reviewed policies and procedures for admitting soldiers into the WTU program, including any proposed changes to the admittance criteria, and discussed the application of these policies and procedures with officials at each of the sites we visited and with officials from the Warrior Transition Command. We compared these policies and procedures with federal internal control standards, which require that management design control activities, such as establishing and reviewing performance measures and indicators, to achieve objectives and respond to risks, and noted any differences.
In addition, we reviewed documentation concerning the Army's Reserve Component Managed Care program, a WTU-alternative program, and its proposed introduction to the Army Reserve, and discussed this information, including estimated costs, with officials from the Army Reserve and the Warrior Transition Command. We compared plans for the introduction of the program to the Army Reserve with the requirement in our Business Process Reengineering Assessment Guide for a performance-based analysis of benefits and costs for each alternative when considering program changes, and noted any differences. To determine whether the Army had instituted methods to address the complaints of WTU soldiers, we analyzed information on the Army's approach to handling complaints by WTU soldiers. We interviewed program staff and officials regarding the Army's approach and reviewed related Army documentation, such as Army policies on its WTU and chaplain corps programs. We compared the Army's approach for handling complaints with federal internal control standards, which state that information should be communicated to management and others who need it in such a way that they can carry out their responsibilities. We analyzed data regarding the number of soldiers in a WTU over time, the dates when they entered a WTU, and their average lengths of stay from 2007 to 2015. We found the data on the number of soldiers in a WTU over time to be sufficiently reliable to present monthly changes in the WTU soldiers' census. For the data on when soldiers entered a WTU and their lengths of stay, we determined that the 2007 and 2008 data were unreliable because of missing data. We also chose not to report data on when soldiers entered a WTU and their lengths of stay for 2015 because of the large number of soldiers who were still in the WTU program when the data were reported by Warrior Transition Command officials.
Other than these exceptions, we found the data to be sufficiently reliable for showing WTU soldiers' lengths of stay from the time of entry into the WTU program to the time of (1) exiting the WTU and (2) reaching their medical retention determination point. For each type of data, we made these determinations based on a data reliability questionnaire completed by Army officials, logic tests of the data, and our conversations with Warrior Transition Command officials about how the data were captured, stored, and checked for accuracy. We conducted this performance audit from August 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Of the Army soldiers who entered a Warrior Transition Unit (WTU) between 2008 and the end of 2015, generally a higher percentage separated from the Army than returned to the Army (see fig. 4). WTU soldiers' length of stay (from entry into the WTU until separation) peaked for soldiers who entered in 2009 and has generally decreased since then. For example, the average length of stay was 407 days for soldiers who entered a WTU in 2009, and that number decreased to 308 days for those who entered a WTU in 2014. In addition, WTU soldiers in the active component generally had shorter lengths of stay than those in the Army National Guard or Army Reserve (see fig. 5). Similarly, the average length of time between a WTU soldier's entrance into the WTU program and the medical retention determination point has significantly decreased.
For example, the average length of time to reach the medical retention determination point was about 45 percent shorter for soldiers who entered a WTU in 2014 than for those who entered a WTU in 2009 (see fig. 6). In addition to the contact named above, Lori Atkinson, Assistant Director; Rebekah Boone; Nicole Collier; Mae Jones; Amie Lesser; Jeffrey Mayhew; Michael Silver; Adam Smith; and Sabrina Streagle made key contributions to this report. Military and Veteran Support: DOD and VA Programs That Address the Effects of Combat and Transition to Civilian Life. GAO-15-24. Washington, D.C.: November 7, 2014. Recovering Servicemembers and Veterans: Sustained Leadership Attention and Systematic Oversight Needed to Resolve Persistent Problems Affecting Care and Benefits. GAO-13-5. Washington, D.C.: November 16, 2012. Army Health Care: Progress Made in Staffing and Monitoring Units That Provide Outpatient Case Management, but Additional Steps Needed. GAO-09-357. Washington, D.C.: April 20, 2009. DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers. GAO-08-514T. Washington, D.C.: February 27, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Health Care and Disability Evaluations for Returning Servicemembers. GAO-07-1256T. Washington, D.C.: September 26, 2007. | The Army established its WTU program in 2007 after congressional interest and media coverage about substandard care for soldiers at the former Walter Reed Army Medical Center. The program is to coordinate care for soldiers recovering from serious physical and behavioral health conditions. As the WTU soldier population has declined, the Army has reduced its WTUs--from 45 in 2008 to a planned total of 14 by August 2016. The House Report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2016 included a provision for GAO to review the WTU program.
GAO evaluated, among other things, the extent to which the Army has (1) assessed the effectiveness of the Triad of Care model; (2) established processes to oversee the selection of WTU personnel, assess their training, and adjust staff levels; and (3) assessed adherence to WTU admittance criteria and the impact of any changes to them. GAO conducted site visits to 5 WTUs, based on a mix of active and reserve component soldiers and other variables. The Army has not assessed the effectiveness of the Triad of Care model, the core structure of the Warrior Transition Unit (WTU) program, consisting of a team of three key staff who provide medical case management. The Army established the Triad of Care model at a time when WTU soldiers' diagnoses were primarily for physical conditions. Since then, the composition of diagnoses has changed significantly. Specifically, in 2008, about 36 percent of the 12,228 soldiers who entered the WTUs had a behavioral health diagnosis. In 2015, however, over half of the 2,628 soldiers who entered the WTUs, about 52 percent, had such a diagnosis. Despite the change in the composition of diagnoses, the Army has not assessed its approach for managing soldiers' care. Officials from the five WTUs that GAO visited stated that they have added social workers to the Triad as an ad hoc measure to provide better case management and certain types of behavioral health services. These local adaptations represent efforts to meet an immediate medical need and underscore the need to analyze whether the Triad model should change. Assessing the Triad in light of the changes in WTU soldiers' diagnoses would position the Army to better determine how to meet WTU soldiers' medical needs. The Army faces challenges in its oversight of the selection of squad leaders and platoon sergeants to staff WTUs, in the evaluation of staff training, and in the ability to adjust future staff levels if needed.
Specifically, the Army has established selection processes and updated its selection criteria for these WTU personnel, but it is not exercising oversight responsibility to track full adherence to these policies, specifically the Army's requirement to interview candidates for these positions. Candidates for these positions are drawn from a mix of Army occupations, and the selection process, including interviews, is intended to ensure the suitability of the staff selected for these sensitive positions. While the Army has taken steps to improve its training program for squad leaders and platoon sergeants, the program does not incorporate a post-training assessment of the application of training to the work environment. Without information that could be obtained from such assessments, the Army may miss an opportunity to incorporate information concerning the practical application of training. In addition, the Army has not developed plans for how it would increase WTU staff levels, if needed, to support any potential future increase in demand. The ability to reverse the decision to inactivate 11 WTUs by August 2016 was a key planning consideration for the Army. However, without a plan to address staff level changes, the Army lacks assurance that it can select, train, and assign staff to its WTUs in a timely manner. While the Army has implemented a process for reviewing the eligibility of soldiers to be admitted to WTUs, it does not track instances in which Commanders have made exceptions to these criteria. By not tracking this information, the Army does not know how frequently such exceptions are made and cannot ensure the best use of resources. In addition, the Army is planning to expand a WTU-alternative program to the Army Reserve, but has not examined the costs and benefits of such an expansion. Without comparing the costs and benefits of program expansion with the current system, the Army could incur significant costs without clearly articulated benefits. 
GAO's recommendations include that the Army assess the Triad of Care model's effectiveness; track adherence to selection processes for WTU staff; assess the application of their training; develop plans to ensure the ability to adjust staff levels, if needed; track exceptions to WTU admittance criteria; and compare the costs and benefits of expanding a WTU-alternative program for Army Reserve soldiers. DOD concurred with each of GAO's recommendations. |
Among the federal statutes that affect the reuse process, four are of particular importance: (1) and (2) the base realignment and closure acts of 1988 and 1990, (3) the Federal Property and Administrative Services Act of 1949, and (4) the 1987 Stewart B. McKinney Homeless Assistance Act (McKinney Act). Amendments to these acts enacted within the past year are leading to ongoing changes in reuse planning and implementation at closing bases. The Defense Authorization Amendments and Base Closure and Realignment Act and the Defense Base Closure and Realignment Act of 1990—collectively referred to as the Base Realignment and Closure (BRAC) acts—are the two statutes that authorize the Secretary of Defense to close military bases and dispose of property. Title XXIX of the National Defense Authorization Act for Fiscal Year 1994 amended the BRAC acts to enable local redevelopment authorities to receive government property at no initial cost if the property is used for economic development. The Federal Property and Administrative Services Act of 1949 requires disposal agencies to provide DOD and other federal agencies an opportunity to request property to satisfy a programmed requirement. Property may be conveyed at no cost under various public benefit discount programs, sold for not less than the appraised fair market value through negotiated sale to state governments or their instrumentalities, or sold at a competitive public sale. Surplus property can be made available to providers of services to the homeless as provided for by the McKinney Act. At the time of our 1994 report, the McKinney Act assigned such providers higher priority than local communities when conflicts over reuse planning for surplus property at military bases occurred.
However, the Base Closure Community Redevelopment and Homeless Assistance Act of 1994 amended the BRAC acts and the McKinney Act to incorporate homeless assistance requests into the community reuse planning process and to eliminate the higher priority given to requests for property at bases undergoing realignment and closure. The information contained in this report reflects the June 1995 status of property disposal plans at 37 of the 120 installations closed by the 1988 and 1991 closure commissions (see fig. 1). About three-fifths of the property at the 37 closing military bases will be retained by the federal government because it is contaminated with unexploded ordnance, has been retained by decisions made by the BRAC commissions or by legislation, or is needed by federal agencies. The remaining two-fifths of the property is available for conversion to community reuse. Communities' plans for this property involve a variety of public benefit and economic development uses. Little property is planned for negotiated sale to state and local jurisdictions or for public sale, as shown in figure 2. (See app. I for a summary of property disposal plans.) (Figure 2 data: public benefit conveyances, 37,268 acres; economic development conveyances, 23,633 acres; undetermined, 12,110 acres (6 percent); public sale, 6,849 acres (4 percent); mandatory retention by federal agencies, 22,154 acres.) While the federal government plans to retain about 58 percent of the property at closing bases, only 17 percent has been requested to satisfy federal agency needs. About 29 percent is contaminated with unexploded ordnance and will be retained by the federal government because the cost of cleanup and the environmental damage that cleanup would cause are excessive. Another 12 percent of the property has been retained per either BRAC decisions or legislation.
An example of property retained per a BRAC decision is the 100-acre parcel at Fort Benjamin Harrison, Indiana, for the Defense Finance and Accounting Service facility. An example of property retained by legislation is the 1,480-acre Presidio of San Francisco, California, which was transferred to the National Park Service. Of the 58 percent, the Department of the Interior's Fish and Wildlife Service and Bureau of Land Management are to receive about 42 percent of the property. Much of the property is contaminated with unexploded ordnance. DOD will retain about 13 percent to support Reserve, National Guard, Defense Finance and Accounting Service facilities, and other active duty missions. Other federal agencies will receive about 3 percent of the property for such uses as federal prisons and national parks. (See app. II for a summary of federal uses.) Communities also are planning to use about 20 percent of the base property for various public benefits. The largest public benefit use is for commercial airport conversions, which will total about 14 percent under current plans. About 4 percent is to go to park and recreation use, the second largest public benefit use. Plans call for transferring another 2 percent of the property to such public benefit uses as education, homeless assistance, and state prisons. Communities are planning to acquire about 12 percent of the property under economic development conveyances, and DOD plans to sell about 4 percent of the property either through negotiated sales to state and local jurisdictions or through direct sales to the public. Communities have not determined how the remaining 6 percent of the property should be incorporated into their reuse plans. Land sales for all BRAC closures totaled $138.8 million as of June 1995. The sale of 641 acres of developed land at Norton Air Force Base, California, to the local redevelopment authority for $52 million under an economic development conveyance is the largest sale to date.
The 1989 sale of the Kapalama Military Reservation, Hawaii, to the state of Hawaii for $38.5 million is the next largest sale. When we last reported, land sales totaled $69.4 million. The largest increase in sales has been to local reuse authorities under the new economic development conveyance authority, which allows for no-cash down payment terms and up to 15 years to pay. Overall, progress is being made in converting properties at the closing bases we reviewed to civilian use. Communities are creating new airport facilities, jobs, education and job training centers, and wildlife habitats. (See app. III for a more detailed discussion of each installation's conversion progress.) Converting military airfields to civilian airports is a goal at most communities that have bases with closing airfields. For example, the city of Austin, Texas, is converting Bergstrom Air Force Base's airfield and facilities into a new municipal airport. The Federal Aviation Administration has provided over $110 million toward the conversion. Buildings are being demolished to build an additional runway, while design work is underway on the conversion, which is scheduled for completion in 1998. DOD officials believe that one meaningful measure of base conversion success is the number of jobs created. The 37 bases will have lost 54,217 civilian jobs when they are all closed. To date, 25 of the bases have closed. At these 25 bases, 29,229 jobs were lost. So far, 8,340 jobs have been created. (See app. IV for a summary of each community's success at creating jobs.) Community efforts to create jobs have been a key component of economic recovery strategies in a number of locations. Successful efforts in a few communities have led to the creation of more jobs than were lost due to closures. At England Air Force Base, Louisiana, the community has attracted 16 tenants that have created over 700 jobs, replacing the nearly 700 civilian jobs lost as a result of the base's closure.
The largest tenant has hired 65 employees to refurbish jet aircraft. Another large tenant has hired 58 people to operate a truck driving school. (See p. 44.) At Chase Naval Air Station, Texas, newly constructed state prison facilities and several small manufacturers have created over 1,500 jobs, a net increase of 600 jobs over the level of civilian employment by the Navy. (See p. 38.) At Pease Air Force Base, New Hampshire, a commercial airport, an aircraft maintenance complex, a government agency, and a biotechnology firm are among the 41 tenants that have created over 1,000 jobs at the base, over twice the 400 civilian jobs lost. (See p. 86.) Several communities have begun developing or planning centers for higher education and job training. In some instances, these efforts have involved pooled efforts by local schools and state institutions and agencies. At Lowry Air Force Base, Colorado, a consortium of Colorado colleges and the Denver public school system are providing educational and job training opportunities. Currently, 80 classes with a total of 800 students are in session at the former base. (See p. 73.) At Fort Ord, California, classes at the new California State University, Monterey Bay, are scheduled to begin in the fall of 1995. About 700 graduate and undergraduate students are expected to enroll in the university's fall class. (See p. 51.) The U.S. Fish and Wildlife Service plans to set aside land at several bases for preservation as natural wildlife habitats. In some locations, the preservation of wildlife habitats reduces the level of environmental cleanup, particularly where unexploded ordnance is involved. At Jefferson Proving Ground, Indiana, the Army plans to transfer about 47,500 acres to the Fish and Wildlife Service for a wildlife refuge, which could potentially save the Army billions of dollars in costs otherwise needed to remove unexploded ordnance. (See p. 63.)
At Woodbridge Army Research Facility, Virginia, all 580 acres are to be transferred to the Service for inclusion in the Mason Neck Wildlife Refuge. Service plans for the property envision showcasing habitat and wildlife not routinely seen so close to a metropolitan area and providing environmental education opportunities. (See p. 109.) Early experiences indicate that a new form of conveyance authority called an economic development conveyance can be mutually beneficial to both the federal government and local communities. This new authority calls for (1) DOD to convey property to a local redevelopment authority for the purpose of creating jobs when it is not practicable to obtain fair market value at the time of the transfer and (2) DOD and the local authorities to negotiate the terms and conditions of the conveyances. In qualifying rural areas, conveyances are at no cost to the communities. This new authority benefits local redevelopment authorities by allowing them to take possession of properties with no initial payment so that they can implement their job creation and economic development plans. The federal government benefits by eliminating the costs of maintaining and protecting idle properties and by generating revenues to help pay for base realignment and closure costs. Several communities are planning to use this new conveyance mechanism to obtain property for economic development. Two economic development conveyance agreements—one at Norton Air Force Base, California, and another at Sacramento Army Depot, California—have been successfully negotiated. The local redevelopment authority and the Air Force have agreed that for a 641-acre parcel at the Norton Air Force Base, the local reuse authority will pay the government 40 percent of gross lease revenues and 100 percent of gross land sales revenues up to a total of $52 million, the estimated fair market value of the property. 
If the $52 million has not been paid in full at the end of 15 years, the local redevelopment authority is obligated to pay the Air Force the balance. The local redevelopment authority is negotiating or has entered into seven leases that it projects will result in about 2,250 new jobs by next year. (See p. 83.) At the Sacramento Army Depot, the city of Sacramento has acquired 371 acres of the 487-acre depot from the Army. Under the terms of the economic development conveyance agreement, the Army will be paid $7.2 million either at the end of 10 years or when the property is sold by the city, whichever is sooner. The city has negotiated a lease with Packard Bell that is projected to create 2,500 to 3,000 jobs, nearly offsetting the 3,200 jobs lost from the depot's closure. (See p. 100.) Successful conversion of military bases to civilian uses involves various parties reaching a consensus on realistic reuse plans. But, before the plans can be implemented, necessary environmental cleanup actions must have been taken by DOD. In numerous communities, the failure to reach a consensus on reuse issues has caused delays in the development of acceptable reuse plans. At George Air Force Base, California, reuse was delayed about 2 years while lawsuits were settled between the city of Adelanto and the Victor Valley Economic Development Authority over which jurisdiction should have the reuse authority. (See p. 58.) At Tustin Marine Corps Air Station, California, homeless assistance groups are requesting about 400 family housing units and other buildings. The local reuse authority believes that 100 family housing units and some single-residence, multiple-unit buildings would provide a balanced living environment and that the request for additional facilities conflicts with other aspects of its reuse plan. At its request, the local reuse authority was granted a delay in DOD's disposal process to give it more time to negotiate with the homeless assistance groups.
Negotiations continue between the two groups to reach a consensus. (See p. 102.) At Puget Sound Naval Station (Sand Point), Washington, the city of Seattle and the Muckleshoot Indian Tribe are promoting competing reuse plans. The city plans to use the property for housing, parks and recreation, and educational activities. The tribe plans to use the property for economic development and educational activities. As long as 2 years ago, the Navy asked both parties to work on a joint reuse plan. However, no consensus on reuse has been reached by the two parties. DOD's disposal decisions on the property are pending. (See p. 93.) Early efforts likewise indicate that even after a consensus is achieved, conversions are unlikely to prove successful if the resulting plans incorporate unrealistic reuse expectations. Some base conversions involve reuse expectations that may be unrealistic given their rural or relatively unpopulated geographic locations. Early experiences suggest that bases with airfields in remote locations pursuing reuse plans involving expanded airport operations are most prone to these types of expectations. Reuse plans for the airfields at Wurtsmith Air Force Base, Michigan, Eaker Air Force Base, Arkansas, and Loring Air Force Base, Maine, have been largely unsuccessful because the new tenants attracted are not capable of generating enough revenue to support the costs of airport operations. Environmental cleanup requirements delay the implementation of reuse plans. In February 1995, we reported that the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 prohibits transferring property to nonfederal ownership until all necessary environmental cleanup actions are taken. However, much of the property is in the early stages of cleanup. Cleanup progress has been limited because the study and evaluation process is lengthy and complex and, with existing technology, cleanup takes time.
The National Defense Authorization Act for Fiscal Year 1994 allowed long-term leases of property prior to cleanup, but few had been signed as of January 1995.

Federal agencies have provided about $368 million to the 37 selected BRAC 1988 and 1991 communities to assist with the conversion of military bases to civilian reuse. Agencies have awarded grants for such purposes as reuse planning, airport planning, and job training, as well as for infrastructure improvements and community economic development. (See app. V for a summary of the federal assistance provided to each community.)

The Federal Aviation Administration has awarded the most assistance, providing $151 million to assist with converting military airfields to civilian use. DOD’s Office of Economic Adjustment has awarded $85 million to help communities plan the reuse of closed BRAC 1988 and 1991 bases. The Department of Commerce’s Economic Development Administration has awarded $85 million to assist communities with infrastructure improvements, building demolition, and revolving loan funds. The Department of Labor has awarded $46 million to help communities retrain workers adversely affected by closures.

We updated information that we had obtained from 37 installations closed by the 1988 and 1991 Base Closure Commissions. These 37 bases contain 190,000 of the 250,000 acres designated for closure by the 1988 and 1991 rounds, or about 76 percent of the total. To gather the most recent reuse information and to identify any changes since our earlier report, we interviewed base transition coordinators, community representatives, and DOD officials. We obtained up-to-date federal assistance information from the Federal Aviation Administration, the Economic Development Administration, the Department of Labor, and the Office of Economic Adjustment to determine the amount and type of assistance the federal government provided to the BRAC 1988 and 1991 base closure communities.
For each base, the profiles provide (1) a description of size and location; (2) important milestone dates; (3) a reuse plan summary and a golf course reuse plan; (4) the status of reuse implementation; (5) jobs lost and created; (6) federal assistance; and (7) environmental cleanup status. The information collected represents the status of reuse planning and actions as of June 1995.

We did not obtain written agency comments. However, we discussed the report’s contents with DOD officials, and their comments have been incorporated where appropriate. Our review was performed in accordance with generally accepted government auditing standards between October 1994 and June 1995.

Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Directors of the Defense Logistics Agency and the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI.

Davisville Naval Construction Battalion Center
Long Beach Naval Station/Naval Hospital
Myrtle Beach Air Force Base
Philadelphia Naval Station/Naval Hospital/Naval Shipyard
Puget Sound Naval Station (Sand Point)

Chanute Air Force Base, Ill. 8 acres
Fort Benjamin Harrison, Ind. 100 acres (BRAC recommendation)
Fort Ord, Calif.
Lexington Army Depot, Ky.
Loring Air Force Base, Maine
Lowry Air Force Base, Colo. 108 acres, also houses Air Reserve Personnel Center (BRAC recommendation)
Norton Air Force Base, Calif. 34 acres
Fort Wingate, N. Mex.
Fort Ord, Calif. 740 acres of housing and support buildings to support other nearby military bases (BRAC recommendation)
Fort Sheridan, Ill. 15 acres containing Army cemetery (BRAC recommendation)
Davisville Naval Construction Battalion Center, R.I. 380 acres (Camp Fogarty)
Fort Benjamin Harrison, Ind. 144 acres
Fort Devens, Mass. 5,177 acres (BRAC recommendation)
Fort Ord, Calif.
Fort Sheridan, Ill. 104 acres (BRAC recommendation)
Richards-Gebaur Air Reserve Station, Mo.
Rickenbacker Air Guard Base, Ohio
Sacramento Army Depot, Calif. 61 acres (BRAC recommendation)
Tustin Marine Corps Air Station, Calif. 10 acres (also for Air National Guard and Coast Guard)
Williams Air Force Base, Ariz.
Fort Sheridan, Ill.
Long Beach Naval Station, Calif. 592 acres to shipyard (BRAC recommendation)
Philadelphia Naval Shipyard, Pa. 550 acres to be preserved by Navy for possible use in future (BRAC recommendation)
Richards-Gebaur Air Reserve Station, Mo.
Warminster Naval Air Warfare Center, Pa.
Sacramento Army Depot, Calif. 19 acres (BRAC recommendation)
Lowry Air Force Base, Colo. 7 acres (BRAC recommendation)
Moffett Naval Air Station, Calif. 130 acres of housing to support nearby military base (BRAC recommendation)
Norton Air Force Base, Calif. 78 acres of housing to support nearby military base (BRAC recommendation)
Bergstrom Air Force Base, Tex. 330 acres (BRAC recommendation)
Grissom Air Force Base, Ind. 1,398 acres (BRAC recommendation)
Pease Air Force Base, N.H. 230 acres (BRAC recommendation)
Rickenbacker Air Guard Base, Ohio 203 acres (BRAC recommendation)

Bureau of Land Management
Fort Ord, Calif. 15,009 acres (including 8,009 acres of unexploded ordnance)
Fort Wingate, N. Mex. 8,812 acres returned to public domain (legislative requirement)

Fort Devens, Mass.
Jefferson Proving Ground, Ind. 47,500 acres for wildlife refuge (contains unexploded ordnance)
Loring Air Force Base, Maine 6,000 acres for wildlife refuge
Pease Air Force Base, N.H. 1,095 acres for wildlife
Puget Sound Naval Station (Sand Point), Wash.
Woodbridge Army Research Facility, Va.
580 acres for wildlife refuge (legislative requirement)
Wurtsmith Air Force Base, Mich.
Presidio of San Francisco, Calif. 1,480 acres (legislative requirement)
Philadelphia Naval Station, Pa.
Puget Sound Naval Station (Sand Point), Wash.
Williams Air Force Base, Ariz.

Department of Agriculture (Forest Service)

Department of Justice (Bureau of Prisons)
Castle Air Force Base, Calif. 659 acres for prison
Fort Devens, Mass.
George Air Force Base, Calif.

Department of Health and Human Services (Public Health Service)
Davisville Naval Construction Battalion Center, R.I.

Department of Labor (Employment and Training Administration)
Fort Devens, Mass.
Long Beach Naval Station, Calif.

Department of Transportation (Federal Aviation Administration)
Moffett Naval Air Station, Calif. 1,440 acres (BRAC recommendation)

Base description: The laboratory is located on 37 acres in Watertown on the Charles River, west of Boston. Its mission has been research, development, and testing of materials and manufacturing technology. Closure of this industrial facility, used by the Army since 1816, avoids major renovation costs.
Date of closure recommendation: 1988.
Estimated date of military mission termination: September 1995.
Estimated date of base closure: September 1995.
Summary of reuse plan: The community reuse plan calls for 30 acres to be developed for industrial, commercial, and residential use. The remaining 7 acres, comprising the commander’s mansion and grounds, are to be a public benefit conveyance through the National Park Service for a park and historic monument. The mixed-use plan emphasizes preserving the integrity of historic buildings and landscapes and providing greater public access to the riverfront. A homeless provider expressed interest in some base property, but no application was filed.
Golf course: None.
Implementation status: The local reuse authority will likely request that the 30 acres planned for development be conveyed through an economic development transfer.
Local officials believe an economic development transfer will give the local authority greater assurance that the property is developed in accordance with the reuse plan than if the Army sells the property directly to a private developer. There is some question whether the laboratory meets one of the criteria for such a conveyance—adverse economic impact of the closure on the region—since it is small and located in a large metropolitan area. However, the plan does emphasize the job-creation criterion for economic development transfers by calling for the creation of 1,500 new jobs.
Civilian jobs lost due to closure: 540.
Civilian jobs created as of 3/31/95: Base not yet closed.
The Economic Development Administration grant to the city of Watertown was to provide technical assistance to determine the most practical reuse for the facilities and to do a market feasibility study.
National Priorities List site: Yes.
Contaminants: Radionuclides, heavy metals, petroleum, oil, solvents, pesticides, and polychlorinated biphenyls.
Estimated cleanup cost: $110 million. Cleanup at the facility is moving ahead gradually, with the radiological cleanup mostly complete.
Estimated date cleanup complete or remedy in place: December 1997.

Base description: Bergstrom is located on 3,216 acres on the southeast outskirts of Austin. The city bought the land for the government in 1941, retaining an equitable interest. Following its activation in 1942, Bergstrom was the home of troop carrier units. From 1966 to 1992, it was under the Tactical Air Command. Base closure legislation specified that the base would be turned over to the city.
Date of closure recommendation: 1991.
Date of military mission termination: September 1992.
Date of base closure: September 1993.
Summary of reuse plan: The city of Austin passed a referendum in May 1993 to support establishment of a new municipal airport, and it has decided that Bergstrom will be used for that purpose.
Approximately 2,562 acres will revert to the city. This property, along with an additional 324 acres conveyed to the city upon closure, will be used for the new airport. The Air Force will keep 330 acres as a cantonment area for the Reserves. The conveyance to the city will include the golf course and other property that can be leased to help support airport operations. The city plans to move 60 to 70 of the base’s 719 housing units downtown, where they are to be sold to low-income home buyers. The city plans to demolish most of the remaining units to build a new runway.
Golf course: The golf course is being conveyed to the city to help support airport operations.
Implementation status: The transfer of property to the city is being delayed until the base cleanup is complete. Meanwhile, DOD has entered into a long-term $1 lease with the city for the base. While DOD is not getting revenue from the lease, it is saving on operation and maintenance funds since the city has assumed responsibility for base maintenance, which, according to the site manager, averages about $9 million a year. The target date for opening the airport is November 1998.
Civilian jobs lost due to closure: 942.
Civilian jobs created as of 3/31/95: 0.
The Federal Aviation Administration grants are for demolition of existing structures, supplemental environmental studies, and construction of new airport facilities.
National Priorities List site: No.
Contaminants: Domestic solid wastes, pesticides, paints, paint containers, incineration wastes, construction debris, petroleum/oil/lubricants, low-level radioactive waste, synthetic oils, oil/water separator wastes, silver, soaps, degreasers, air filters, battery acids, asphalt, and lead.
Estimated cleanup cost: $53.2 million.
Estimated date cleanup complete or remedy in place: December 1999.

Base description: Cameron Station consists of 165 acres of administrative and warehouse space as well as park land in Alexandria.
The park land includes a 6-acre lake. The government first purchased the land at the start of World War II for use as a general depot. It is a subinstallation of Fort Myer. Cameron Station is one of the few bases on the closure list that DOD considers to have high market value, but asbestos removal, demolition, and infrastructure costs affect the projected revenues.
Date of closure recommendation: 1988.
Estimated date of military mission termination: September 1995.
Estimated date of base closure: September 1995.
Summary of reuse plan: The plan calls for about 64 acres, including the lake and its perimeter, to be a public benefit transfer through the Department of the Interior to the city for park and recreation use and easements. Two homeless assistance providers are to receive about 8 acres of the property for an 80-bed shelter and a food redistribution center. The remaining 93 acres are to be sold to a private developer, who will likely demolish the buildings and construct residential, commercial, and retail facilities. Cameron Station has no housing units.
Golf course: None.
Implementation status: The 93 acres for development were advertised for bids in January 1995, and the winning bid of $33.2 million was awarded in May 1995. Property transfer is scheduled for May 1996 if the environmental clearances have been completed by that time.
Civilian jobs lost due to closure: 4,355.
Civilian jobs created as of 3/31/95: Base not yet closed.
National Priorities List site: No.
Contaminants: Volatile organic compounds, heavy metals, petroleum products, polychlorinated biphenyls, pesticides, and herbicides.
Estimated cleanup cost: $7 million. Cleaning up groundwater contamination could take 30 years, but base officials anticipate that the property can be sold once remediation measures are in place.
Estimated date cleanup complete or remedy in place: September 1995.
Base description: Castle is located on 2,777 acres in the agricultural San Joaquin Valley, 6 miles from the city of Merced and 100 miles southeast of Sacramento. First activated in December 1941 to provide flight training, the base has had as its primary mission since the 1950s B-52 and KC-135 crew training.
Date of closure recommendation: 1991.
Date of military mission termination: October 1994.
Estimated date of base closure: September 1995.
Summary of reuse plan: The Federal Bureau of Prisons will receive 659 acres for prison construction. The Bureau will preserve a portion of this acreage, containing seasonal wetlands and endangered species, as a prison buffer. The plan calls for 1,581 acres to be an airport public benefit transfer. The local reuse authority hopes that attracting aviation-related businesses will stimulate economic development for the area. The Federal Aviation Administration will get about 1 acre of property in conjunction with the airport. Additionally, the plan calls for a public benefit transfer of 132 acres for public school and community college programs, 18 acres for parks and recreation, and 13 acres for health facilities. In October 1994, the Department of Health and Human Services approved homeless assistance providers’ applications for about 8 acres of property, including 8 family housing units. The plan calls for the remaining 365 acres to be sold at fair market value. This acreage includes 188 acres of residential areas, which may be used for a senior citizens cooperative and starter homes for first-time home buyers.
Golf course: None.
Implementation status: Implementation of reuse plans, including the design and construction of the federal prison, has been delayed due to difficulties related to air quality conformity, environmental cleanup, infrastructure upgrading, and leasing. The property disposition plans were approved in January 1995.
Approval of the environmental impact statement was delayed about 4 months because of air quality issues. The Navy’s plans to expand operations at nearby Lemoore Naval Air Station raised concerns about air emissions from future development and aircraft traffic at Castle. The local utility company determined that the base gas distribution system should be abandoned. The local reuse authority is negotiating with this company and the Bureau of Prisons to install a new gas line to the prison site to provide gas service to tenants that may be attracted to the base in the interim. Questions concerning upgrading or replacing other aging base utility systems are also being addressed. The local authority at Castle has been having difficulty attracting businesses that will support airport operations. Castle is competing with other closing airfields for a limited number of potential aviation-related businesses.
Civilian jobs lost due to closure: 1,149.
Civilian jobs created as of 3/31/95: Base not yet closed.
The Economic Development Administration grants included $3.5 million to the city of Atwater to connect the base sewer system to the city’s system and $1 million to Merced County to establish a revolving loan fund to induce businesses to locate at Castle by providing a source of financing. The Federal Aviation Administration grants included $115,000 for an airport feasibility study and master plan and $2,028,000 for airport facilities and equipment.
National Priorities List site: Yes.
Contaminants: Spent solvents, fuels, waste oils, pesticides, cyanide, and cadmium. Cleanup efforts have been hampered by delays in the release of funds. Castle has groundwater contamination from an underground plume of trichloroethylene and other volatile organic compounds.
Estimated cleanup cost: $146 million.
Estimated date cleanup complete or remedy in place: October 1996.
Base description: Chanute is located on 2,132 acres adjacent to the city of Rantoul, which has annexed the base property. The base was constructed in 1917 and used initially for pilot training and as a storage depot for aircraft engines and paint. Since World War II, it has served as a training installation for aerospace and weapon system support personnel.
Date of closure recommendation: 1988.
Date of military mission termination: July 1993.
Date of base closure: September 1993.
Summary of reuse plan: The plan primarily involves developing a civilian airport and attracting aviation-related businesses, as well as other types of economic development. A no-cost airport public benefit transfer of 1,181 acres is planned once cleanup is completed. DOD will retain 8 acres for a Defense Finance and Accounting Service center. Additionally, 147 acres will be transferred to the local community for park and recreation use and 62 acres to the University of Illinois for a research facility. The remaining 734 acres, including the golf course and housing areas, will be sold once cleanup is completed.
Golf course: The golf course was sold in March 1993 to the highest bidder for $711,502, but the deed transfer has been delayed due to questions involving environmental cleanup. Meanwhile, the purchaser is operating the course on a no-cost prevention and maintenance lease.
Implementation status: While environmental cleanup is underway, most of the base property is being leased. Property sales have been negotiated for some parcels, but deeds cannot be transferred until the parcels are cleaned up or remediation is satisfactorily in place. Development has also been hampered by utility system issues, such as the high cost to tenants for unmetered service from the base’s steam heat system. Despite such difficulties, the community has successfully attracted businesses that have created jobs. A base official reported that about 78 businesses have located at Chanute thus far.
Since development cannot be financed on short-term leases, the city is negotiating 55- and 99-year leases, which can be converted into deed transfers when cleanup is completed. The city has also used an Economic Development Administration grant to finance building renovation and asbestos removal, and one business is paying back the renovation cost through increased rent.
Civilian jobs lost due to closure: 1,035.
Civilian jobs created as of 3/31/95: 1,002.
Economic Development Administration grants to Rantoul provided $1 million to establish a revolving loan fund to assist businesses locating at Chanute, $400,000 for planning, and $1.1 million for a road improvement project to improve traffic access to base facilities. Federal Aviation Administration grants included $194,930 for planning, an environmental audit, and a utility survey and $742,900 for resurfacing a runway.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, and waste oils.
Estimated cleanup cost: $43.5 million. Despite repeated environmental studies and surveys, the Environmental Protection Agency has determined that more testing will be needed to determine the extent of groundwater contamination and identify remediation measures. Test wells will be drilled off the base to determine whether the contamination is occurring naturally or is the result of base operations.
Estimated date cleanup complete or remedy in place: September 1997.

Base description: This 3,757-acre base is located 5 miles east of Beeville in southern Texas, about 60 miles northwest of Corpus Christi. The base included the main air station, a 96-acre housing tract adjacent to the town, and an auxiliary airfield in Goliad County 30 miles away.
Date of closure recommendation: 1991.
Date of military mission termination: October 1992.
Date of base closure: February 1993.
Summary of reuse plan: Under the plan, 96 acres of housing were sold to the local reuse authority, and the state received a 285-acre public benefit transfer for a state prison. Local authorities requested the remaining 3,376 acres, including the auxiliary field, as economic development conveyances. While the plan calls for using the airfield as an airport, local officials are requesting an economic development conveyance rather than an airport public benefit conveyance because they believe that an economic development conveyance will allow them more latitude in their future actions than the more restrictive airport conveyance would.
Golf course: The property containing the golf course is being used to construct a state prison.
Implementation status: All the property has been leased, sold, or transferred, except for three sites that have been retained by the Navy until cleanup is complete. The state prison facilities are in operation, resulting in an increase in jobs for the area. In addition, according to a base closure official, the local authority has eight or nine subleases with small businesses. In a letter to the Navy, we raised questions concerning the propriety of the negotiated sale of 396 family housing units for $168,000, which is $424 a unit, to the local authority. The units are being rented for $400 to $650 per month each.
Civilian jobs lost due to closure: 914.
Civilian jobs created as of 3/31/95: 1,520.
The Economic Development Administration grant to the Beeville/Bee County Economic Development Authority provided funds to improve the wastewater treatment facility, roads, and housing areas. The Federal Aviation Administration grant was for developing an airport master plan.
National Priorities List site: No.
Contaminants: Acids, heavy metals, paints, polychlorinated biphenyls, petroleum fuels and hydrocarbons, photographic chemicals, and solvents.
Estimated cleanup cost: $5.4 million.
Estimated date cleanup complete or remedy in place: June 1995.
Base description: The center is located on 1,280 acres on the shoreline of Narragansett Bay in North Kingstown. Between 1939 and 1942, the Navy constructed a naval air station and pier in the area. In 1974, the Navy declared the air station surplus, and operations at the center were greatly reduced. In response, the state established the Port Authority and Economic Development Corporation to develop the area as a business and industrial park, which did not meet initial expectations.
Date of closure recommendation: 1991.
Date of military mission termination: March 1994.
Date of base closure: April 1994.
Summary of reuse plan: The plan calls for 380 acres to be retained by DOD for the Army Reserves and 10 acres to be retained by the Public Health Service. The Port Authority has requested an economic development transfer of 512 acres. However, the Department of the Interior, in June 1994, requested 35 of the 512 acres on behalf of the Narragansett Indian tribe. The outcome of this request is unclear even though the federal screening process for the base was completed in May 1993. The Calf Pasture Point and Allen’s Harbor shoreline will be part of a 289-acre park and recreation public benefit transfer, which will go to North Kingstown, the tribe, or a partnership of both. Included in this transfer will be the gym and the yacht club, which the town will receive. Use of the remaining 89 acres, which include open space and wetlands, is undetermined.
Golf course: None.
Implementation status: Although the Narragansett Indian tribe has a representative on the local reuse committee, the committee opposes the tribe’s request to obtain sovereignty over the property it is requesting. The community wants to maintain zoning and land use jurisdiction and fears that the tribe will establish a casino there, as the tribe is attempting to do on its reservation 25 miles away.
Base closure officials are seeking a clarification of the rights and priorities of Native Americans in the base closure property screening process. Property disposition is also awaiting the completion of the environmental impact statement and the base cleanup plan. The community is urging the Navy to provide additional assistance to demolish 160 to 170 unwanted buildings. Thus far, the Navy has agreed to demolish 17 buildings it has determined to be structurally unsafe.
Civilian jobs lost due to closure: 125.
Civilian jobs created as of 3/31/95: 29.
National Priorities List site: Yes.
Contaminants: Heavy metals, polychlorinated biphenyls, pesticides, petroleum-based hydrocarbons, and volatile organic compounds.
Estimated cleanup cost: $37.9 million.
Estimated date cleanup complete or remedy in place: May 1998.

Base description: Eaker is located on 3,286 acres, with portions of the base lying within the towns of Blytheville and Gosnell, about 68 miles northwest of Memphis, Tennessee. The base is in an agricultural area in the Mississippi River floodplain, 11 miles west of the river. It was activated as an Army airfield in 1942, serving as an advanced flying school. It was deactivated in 1945, and control of the land was transferred to the city of Blytheville. It was reactivated in 1955 as an Air Force base and was used for Strategic Air Command refueling tankers and jet fighter trainers.
Date of closure recommendation: 1991.
Date of military mission termination: April 1992.
Date of base closure: December 1992.
Summary of reuse plan: The plan centers on developing a civilian airport and attracting aviation-related businesses to support its operations. The Air Force is conveying about 1,690 acres of base property for airport-related activities, including 192 acres that reverted to the city of Blytheville at closure. The plan also includes a public benefit transfer of 484 acres for park and recreation use; this acreage includes some archaeological sites.
The Presbytery of Memphis is interested in acquiring about 65 acres through an educational public benefit conveyance for an educational program to aid underachieving students. The redevelopment authority will likely receive 1,044 acres through an economic development conveyance at no cost since the base is in a rural area. The Presbytery is interested in using about 235 of the 1,044 acres, which include base housing, retail exchange and commissary buildings, and the hospital, for a retirement community and convention center. A chapel on 3 acres is to be sold.
Golf course: The golf course is currently being leased for an annual fee of $19,000 plus maintenance. If the local authority and the Air Force agree on an economic development conveyance for the remaining base property, the course is to be included. Otherwise, the Air Force would like to sell the course.
Implementation status: Questions remain about the viability of establishing a civilian airport and attracting sufficient aviation-related businesses to support it in a rural area. Nevertheless, the local airport authority is negotiating a long-term lease for about 1,690 acres of airport facilities. The local authority hopes the long-term lease will make locating at Eaker more attractive to potential business tenants. The Air Force continues to cover caretaker and maintenance costs for those portions of the base not under lease, but it would like to terminate its caretaker operations by 1997.
Civilian jobs lost due to closure: 792.
Civilian jobs created as of 3/31/95: 106 (jobs related to caretaker operations).
The Economic Development Administration grant to the Blytheville-Gosnell Regional Airport Authority provided funds to repair the runway, taxiway, and ramps; to install instrument landing equipment; and to upgrade the airfield lighting system. The Federal Aviation Administration grant was for developing an airport master plan.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, waste oil, paints, pesticides, chromic acid, paint stripper, medical wastes, and lead-acid and nickel/cadmium batteries.
Estimated cleanup cost: $47 million.
Estimated date cleanup complete or remedy in place: December 2000.

Base description: England is located on 2,282 acres about 5 miles west of Alexandria in central Louisiana. Constructed as a municipal airport, the base was first leased to the Army Air Force at the onset of World War II. In 1949, the property was returned to the city, but with the outbreak of hostilities in Korea in 1950, it was acquired by the Air Force. In 1955, the Air Force began constructing permanent facilities at the base.
Date of closure recommendation: 1991.
Date of military mission termination: June 1992.
Date of base closure: December 1992.
Summary of reuse plan: The plan calls for the entire 2,282-acre base to be an airport public benefit transfer to the local England Authority. All profits from revenue-generating properties, including the golf course and family housing, are to support airport operations.
Golf course: The golf course is included in the long-term lease and generates revenue for the airport.
Implementation status: Local officials are optimistic that England’s aviation-centered reuse plan will be successful, predicting that the authority’s operations at England will be self-sustaining within 10 years. The reuse plan calls for moving air carrier service from a small regional airport nearby to England. The Federal Aviation Administration insisted that it would support only one airport in the area. In July 1994, local officials voted unanimously for moving air carrier service to England. The Federal Aviation Administration has since approved the England plan, and it now supports a public benefit transfer of all the property to support airport operations.
A long-term lease to the England Authority for the base property was signed in March 1995, ending the Air Force’s responsibility for funding about $2 million in operations and maintenance costs. The England Authority has attracted 16 tenants to help support aviation operations at England. Two weeks a month, for 10 months a year, the Joint Readiness Training Center flies wide-bodied planes in and out with military personnel for exercises at nearby Fort Polk. However, this lease produces only five full-time jobs at England. Other tenants at England include (1) a company that refurbishes jet aircraft, which employs 65; (2) a trucking company, which operates a driver training school on base with 58 jobs; (3) an operator for the golf course; (4) the local school district, which leases an elementary school; and (5) a university conducting classes on base. A state hospital will use the base medical facility to expand charity care services.
Civilian jobs lost due to closure: 697.
Civilian jobs created as of 3/31/95: 718.
The Economic Development Administration grants to the England Economic and Industrial Development District were to construct a concrete cargo pad, security fencing, and access control; rehabilitate runways, taxiways, approach lighting, and signage; renovate an air terminal building and a railway spur; and make access road improvements. The Federal Aviation Administration grant was for developing an airport master plan.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, waste oil, paints, lead, pesticides, alkali, low-level radioactive waste, chlorine gas, polychlorinated biphenyls, and medical waste.
Estimated cleanup cost: $42.1 million.
Estimated date cleanup complete or remedy in place: December 1999.

Base description: The base is located on 2,501 acres about 12 miles northeast of downtown Indianapolis, near the city of Lawrence.
It has been used periodically as a training ground and an infantry garrison. It was abandoned from 1913 to 1917. In 1947, it was declared surplus, but later that same year it was returned to active status as a permanent military post.
Date of closure recommendation: 1991.
Estimated date of military mission termination: October 1996.
Estimated date of base closure: October 1996.
Summary of reuse plan: DOD will retain 144 acres for use by the Reserves. In addition, 100 acres containing the Defense Finance and Accounting Service facility will be transferred to the General Services Administration. The state will receive 1,550 acres as a public benefit transfer for a state park. Homeless assistance providers will receive 4 acres, including a building with six family housing units and a barracks. The Army plans to sell the 150-acre golf course. The plan calls for the remaining 553 acres, including the Harrison Village housing complex, to be an economic development transfer. The community hopes to attract light industry. Portions of this property have historic preservation and wetlands considerations.
Golf course: The state originally requested that the golf course be included as part of the public benefit transfer for the state park, but the Army has decided to sell it. The state has made an offer for the golf course, and the Army is evaluating it.
Implementation status: The community submitted its reuse plan to the Army in December 1994. Although the base will not close until October 1996, most of the property will be available for reuse by October 1995. Base closure officials are hoping to conclude a master lease by that time, which will facilitate the subleasing of properties as they are cleaned up and made available.
The Army and the General Services Administration are coordinating to obtain Office of Management and Budget approval for a no-cost transfer of the Defense Finance and Accounting Service facility (Building #1) from the Army to the General Services Administration. The transfer is expected to take place October 1, 1995.
Civilian jobs lost due to closure: 4,240.
Civilian jobs created as of 3/31/95: Base not yet closed.
The Economic Development Administration grant was to the state of Indiana to plan for economic adjustment associated with the closure of the base.
National Priorities List site: No.
Contaminants: Petroleum products, heavy metals, volatile organic compounds, and pesticides.
Estimated cleanup cost: $17.6 million.
Estimated date cleanup complete or remedy in place: June 1998.

Base description: Fort Devens is located on 9,311 acres near the town of Ayer, about 35 miles northwest of Boston. It was created as a temporary cantonment in 1917 for training soldiers from the New England area. In 1921, it was placed in caretaker status and used for summer National Guard and Reserves training. In 1931, it was declared a permanent installation, and it was used during World War II as a reception center for draftees. In 1946, it reverted to caretaker status, but again it became a reception center during the Korean Conflict. It has remained an active Army facility since that time.
Date of closure recommendation: 1991.
Estimated date of military mission termination: September 1995.
Estimated date of base closure: March 1996.
Summary of reuse plan: About 68 percent of the base will be retained by federal agencies. Under provisions designated by the 1991 BRAC Commission, 5,177 acres will be retained by the Army for facilities and a training area for Reserve components. The Fish and Wildlife Service will receive 890 acres for a wildlife refuge. The Bureau of Prisons will receive 245 acres for a federal prison medical facility.
The Department of Labor will receive 20 acres for a Job Corps Center. Two homeless assistance applications totaling 29 acres have been approved. However, the local community may find alternative means to meet these homeless requests. The remaining 2,950 acres will be an economic development conveyance. A consortium of Indian groups has expressed interest in one parcel for a cultural center and museum, but it has not submitted a formal request.
Golf course: A portion of the golf course and the adjacent hospital property will be used for construction of the federal prison medical facility. Plans call for a reconfiguration of the golf course to reestablish the full 18 holes.
Implementation status: The community approved a final reuse plan in December 1994. A final decision on property disposition by the Army is expected in July 1995. An interim lease with one private company is in place. The Army and the reuse authority are negotiating a master lease/purchase agreement that mirrors the profit-sharing provisions of an economic development conveyance. It calls for selling the property that can be sold and leasing the remainder. The local authority would receive 60 percent, and the federal government 40 percent, of net revenues from subleases and sales.
Civilian jobs lost due to closure: 2,178.
Civilian jobs created as of 3/31/95: Base not yet closed.
The Economic Development Administration grants to the state provided a $750,000 revolving loan fund and $875,000 in technical assistance for businesses locating at the base.
National Priorities List site: Yes.
Contaminants: Volatile organic compounds, heavy metals, petroleum products, polychlorinated biphenyls, pesticides, herbicides, and explosive compounds.
Estimated cleanup cost: $49.4 million.
Estimated date cleanup complete or remedy in place: March 1998.

Base description: Fort Ord consists of 27,725 acres on the Monterey Peninsula near the towns of Seaside and Marina, about 80 miles south of San Francisco.
About 20,000 acres of the base are undeveloped property that was used for training exercises. Since its opening in 1917, Fort Ord has served as a training and staging facility for infantry troops. From 1947 to 1975, it was a basic training center.
Date of closure recommendation: 1991.
Date of military mission termination: September 1993.
Date of base closure: September 1993.
Summary of reuse plan: The plan calls for DOD to retain 760 acres: 740 acres of housing for military personnel remaining in the area, 12 acres for the Reserves, and 8 acres for the Defense Finance and Accounting Service center. The Bureau of Land Management will receive 15,009 acres, which will be preserved from development, including 8,000 acres contaminated with unexploded ordnance. State, county, and city agencies will receive public benefit transfers of 2,605 acres for parks and recreation, including beaches and sand dunes. California State University and the University of California will receive 2,681 acres as an economic development conveyance to establish university and research facilities. Included in the California State University conveyance are 1,253 family housing units. Other educational institutions will receive public benefit transfers totaling 338 acres for schools. The city of Marina will be given the airport, a public benefit transfer of 750 acres. Homeless assistance providers are to receive 84 acres, including 196 family housing units, 35 single housing units, and other buildings. The Army plans to negotiate a sale of the 404-acre parcel containing two golf courses. The disposition of the remaining 5,094 acres has not been determined, but it will likely include market sales, as well as additional public benefit transfers.
Golf course: The Army's main interest is that the revenues from the two golf courses continue to support the Morale, Welfare, and Recreation programs for military personnel remaining in the area.
The Army is negotiating an agreement with the city of Seaside under which the two 18-hole golf courses will be operated by the city. The agreement will stipulate shared use by military personnel and the public. Army officials reported that the Army intends to sell the golf courses to the city. Proceeds from the sale would go to support the Morale, Welfare, and Recreation programs. Enabling legislation has been introduced.
Implementation status: The transfer of property has been initiated. In July 1994, the first phase of transfers to two universities took place. The new California State University, Monterey Bay, received an initial 630 acres. The university plans to open classes for an estimated 700 students in the fall of 1995. The University of California, Santa Cruz, also received 949 acres in July 1994 to establish a research center. In November 1994, 5 schools and 93 acres were transferred to the federal sponsor, the Department of Education, for deeding to the Monterey Peninsula Unified School District.
Civilian jobs lost due to closure: 2,835.
Civilian jobs created as of 3/31/95: 92.
The Office of Economic Adjustment provided nearly $2 million in planning grants to help develop and implement the reuse plan. The Office also provided $5 million to the city of Monterey to help establish a center for international trade at Fort Ord in conjunction with the Monterey Institute for International Studies. The center plans to develop the capacity and resources for international marketing of technologies and applications from university research programs being established at Fort Ord. The Economic Development Administration provided $15 million to the new California State University, Monterey Bay, to renovate buildings for educational use and to meet seismic and Americans with Disabilities Act requirements. A university official estimated that an additional $140 million would be requested from DOD over the next 10 years to complete renovations.
Monterey County received $1 million to establish a revolving loan fund, and the city of Marina received $900,000 for road, water system, and sewer improvements for an interim commercial development project outside the base gate. In addition, the county and the University of California, Santa Cruz, each received $750,000 for an infrastructure, economic, and job development analysis. The university also received $1.2 million to help establish its Science, Technology, and Policy Center at the base. The Federal Aviation Administration provided $88,200 to the local reuse authority to complete an airport master plan for the reuse of the base airfield and $67,500 for an environmental assessment of airport plans. The Department of Labor provided $800,000 to fund an array of retraining and reemployment services for workers affected by Fort Ord's closure.
National Priorities List site: Yes.
Contaminants: Petroleum wastes and volatile organic compounds.
Estimated cleanup cost: $156.6 million.
Estimated date cleanup complete or remedy in place: September 1998.

Base description: Fort Sheridan is located on 712 acres of high-value suburban land on the shores of Lake Michigan between Lake Forest and Highland Park, 25 miles north of Chicago. The fort was acquired in 1887, and its major mission initially was cavalry training. More recently, the fort served as headquarters of the Nike missile antiaircraft defense systems in the Midwest. Its latest mission was administration and logistical support for Army recruiting and Reserve centers in the Midwest.
Date of closure recommendation: 1988.
Date of military mission termination: May 1993.
Date of base closure: May 1993.
Summary of reuse plan: The Army originally proposed exchanging 156 acres at the fort with the Equitable Life Assurance Society for about 7.1 acres of land in Arlington, Virginia, where the Army wanted to build a national Army museum.
The local community supported this plan, but the Secretary of Defense rejected it as inappropriate to the base closure process. The local reuse committee has submitted a new reuse plan to the Army. The Army plans to keep 104 acres for use by the Reserves and the existing 15-acre military cemetery. The Navy acquired approximately 182 acres, consisting of 392 housing units, in January 1994 for $20 million. Three homeless assistance providers were awarded approximately 46 acres, including 106 family housing units and 36 single housing units. The Lake County Forest Preserve District has requested the open space on the shoreline, bluffs, and ravines (about 103 acres) as a public benefit transfer for park and recreation use. The Department of Education has approved two public benefit transfers, totaling 4 acres and including the library and gymnasium, for educational use. The 174-acre golf course will be sold. Disposition of the remaining 84 acres, including the historic district, is undetermined. The reuse plan foresees residential and public use for this property.
Golf course: Originally, the Forest Preserve District offered to purchase the golf course along with the shoreline, bluffs, and ravines for $10 million. At that time, the Army had a request from the Department of Veterans Affairs for some of that property for a national cemetery. Therefore, the Army turned down the offer from the district. When the Veterans Affairs offer fell through, district officials said they could not buy the property because a local bond measure to fund the purchase had failed. Consequently, the district requested the property through a public benefit transfer. However, the Army notified the district that the golf course will be sold and opened negotiations with the district regarding sale terms.
Implementation status: The Army now must decide on the public benefit transfer requests.
In turn, the reuse committee must decide whether to form a local redevelopment authority and request the developable property through an economic development conveyance or negotiated sale, or whether to have the Army sell the property directly to developers.
Civilian jobs lost due to closure: 1,681.
Civilian jobs created as of 3/31/95: 18.
National Priorities List site: No.
Contaminants: Volatile and semivolatile organic compounds, polynuclear aromatic hydrocarbons, thallium, and unexploded ordnance.
Estimated cleanup cost: $26.9 million.
Estimated date cleanup complete or remedy in place: 1997 for surplus property and 1999 for retained Navy/Army property.

Base description: Fort Wingate is located on 21,812 acres in northwest New Mexico. The base is bordered by the Cibola National Forest on the south and is within 10 miles of the city of Gallup to the west, the Navajo Indian Reservation to the north, and the Zuni Indian Reservation to the southwest. Additional Navajo Reservation land lies south of the National Forest. Both tribes consider Fort Wingate to be part of their ancestral lands. The base includes sites considered sacred by the Zunis, including Fenced Up Horse Canyon, site of ancestral Anasazi ruins. The southern portion of the base is also part of the watershed for the Zuni Reservation. The depot is a subinstallation of Tooele Army Depot, and it has been used for ammunition storage. There are more than 700 concrete ammunition storage bunkers. Between 1963 and 1967, the base was used by White Sands Missile Range to fire several Pershing missiles to test the missile's mobility and accuracy. Most of the property is undeveloped. Before the Army acquired the property, it was public domain land. As such, it reverts to the Department of the Interior, Bureau of Land Management, when it is not needed by DOD.
Date of closure recommendation: 1988.
Date of military mission termination: January 1993.
Date of base closure: January 1993.
Summary of reuse plan: DOD wants to retain approximately 13,000 acres for 7 years for use by the Ballistic Missile Defense Office for missile launching activity in conjunction with the White Sands Missile Range. To retain this land, either the Army would not include that portion of the base in its relinquishment notice, or the missile defense office would have to lease the land from the Bureau of Land Management. Both the Navajo and Zuni tribes oppose use of Fort Wingate for missile testing, and several federal agencies have expressed environmental and land use concerns. Any property not retained by DOD will revert to the Bureau. Once the Army cleans up the contamination at Fort Wingate, the Bureau will consult with other Department of the Interior agencies concerning possible uses for the property. The Department's Bureau of Indian Affairs has requested the entire base to hold in trust on behalf of the two tribes. The tribes want the land for preservation of sacred sites, watershed protection, economic development, and use for other tribal programs. The city of Gallup opposes the conveyance of Fort Wingate property to the Indians, and it has indicated interest in a portion of the base for economic development. The city has retained an attorney to challenge the requirement that the property be relinquished to Interior when the Army's need for it ceases.
Golf course: None.
Implementation status: DOD tried to get officials from Gallup, McKinley County, and the two Indian tribes to agree on forming a reuse committee under its base closure rules and guidelines. However, Interior Department and Bureau of Land Management officials maintain that this effort was inappropriate because the property will revert to the Bureau and will be handled under the Bureau's authorities and rules. The missile defense office completed an environmental impact study, deciding in March 1995 to proceed with the proposed missile program.
Meanwhile, Interior is cooperating with DOD in facilitating a private company's use of some of the facilities to carry out a contract with the Army to deactivate Army pyrotechnics, which will provide 25 to 30 jobs for this economically depressed area.
Civilian jobs lost due to closure: 90.
Civilian jobs created as of 3/31/95: Not available; property to be retained by federal agencies.
Federal assistance: None.
National Priorities List site: No.
Contaminants: Explosive compounds, polychlorinated biphenyls, pesticides, heavy metals, asbestos, and lead-based paint.
Estimated cleanup cost: $22.5 million.
Estimated date cleanup complete or remedy in place: Unknown.

Base description: George is located on 5,068 acres between the towns of Adelanto and Victorville in the Mojave Desert northeast of Los Angeles. The base was first activated in 1941 as a pilot training location. It was placed on standby status in 1945 and used for aircraft storage. In 1950, it was reopened after hostilities began in Korea. During the Vietnam conflict, the Air Force designated George as one of its major training bases for fighter crews, and it continued as a fighter operations and training base thereafter.
Date of closure recommendation: 1988.
Date of military mission termination: December 1992.
Date of base closure: December 1992.
Summary of reuse plan: Approximately 900 acres are to be transferred to the Bureau of Prisons for a federal prison. About 2,300 acres will be an airport public benefit transfer, and 63 acres will be conveyed under public benefit transfers for schools. Homeless assistance providers will receive 34 acres, including 64 family housing units. Initially, the Air Force designated the remaining acres, including the golf course and over 1,500 family housing units, for negotiated or public sale. However, local authorities are planning to request 1,471 acres of this property as an economic development conveyance.
The Air Force will dispose of the 300-acre golf course at a public sale.
Golf course: The Air Force plans to dispose of the golf course by negotiated or public sale.
Implementation status: Reuse of George was delayed for 2 years due to a jurisdictional dispute over reuse authority between the city of Adelanto and the Victor Valley Economic Development Authority, which was supported by Victorville, Apple Valley, Hesperia, and the county. Another reason for the delay was differences between the two reuse plans over the proposed size of the airport. The Air Force recognized the Victor Valley authority as the airport authority and leased the 2,300-acre airport to the authority. Adelanto is receiving some public benefit transfers for schools. Lawsuits between Adelanto and the authority were settled in February 1995, and the authority is proceeding with plans to attract tenants and create jobs. Under the new provisions of the Base Closure Community Redevelopment and Homeless Assistance Act of 1994, the community has until September 1995 to incorporate plans for accommodating homeless needs in its reuse plan, which must be completed before the Air Force can consider an economic development conveyance request. A chapel on 2 acres was sold to a local church for $510,000. In addition, the Air Force is transferring the land for the federal prison and negotiating the sale of the 295-acre golf course and a 3-acre parcel containing the credit union.
Civilian jobs lost due to closure: 506.
Civilian jobs created as of 3/31/95: 209.
The Economic Development Administration grants were provided to the Victor Valley authority to improve roads, the water system, the sewer system, and the airport. The Federal Aviation Administration grant was awarded for developing an airport master plan.
National Priorities List site: Yes.
Contaminants: Petroleum/oils/lubricants, volatile organic compounds, and heavy metals.
Estimated cleanup cost: $75.8 million.
Estimated date cleanup complete or remedy in place: December 1997.

Base description: Grissom is located on 2,722 acres in an agricultural area of central Indiana, about 6 miles southwest of Peru and 65 miles north of Indianapolis. The base was established in 1942 as a naval air station and was used as a training site throughout World War II. It was deactivated in 1946 and was reactivated as Bunker Hill Air Force Base in 1955. It is currently home to an Air Reserve wing whose mission is air refueling operations.
Date of closure recommendation: 1991.
Date of military mission termination: July 1993 (active duty mission).
Date of base closure: September 1994.
Summary of reuse plan: According to the plan, the Air Force will retain about 1,398 acres, including the airfield, for the Reserves and will transfer 901 acres as an economic development conveyance. The remaining 423 acres, including the 1,128 family housing units, will be sold via a public sale. A primary goal of the plan is to attract businesses and replace the jobs lost due to the closure.
Golf course: The 9-hole golf course is currently under interim lease to a private operator through the local redevelopment authority. The reuse plan calls for the land to be part of an economic development conveyance and used for the development of light industry.
Implementation status: According to local officials, reuse efforts have been hampered by a lack of specificity in the local reuse plan, delays in property disposition decisions, and delays in negotiating a caretaker agreement and leases. The final property decision has been delayed pending the Air Force's approval of the proposed size of the Reserve cantonment area. Despite these delays, some actions have been completed. The caretaker agreement has been finalized, and a caretaker account has been established and funded. The lease on the golf course has also been signed, and two more leases have been requested.
The Air Force informed local officials that lease processing procedures have been improved and leases can now be processed within 120 days.
Civilian jobs lost due to closure: 807.
Civilian jobs created as of 3/31/95: 28.
The Economic Development Administration grant was awarded to the State of Indiana to plan for mitigating the adverse effects associated with the base's closure.
National Priorities List site: No.
Contaminants: Household and industrial waste, spent solvents, fuels, waste oil, pesticides, lead, silver, munitions, and asbestos.
Estimated cleanup cost: $25.6 million.
Estimated date cleanup complete or remedy in place: March 1998.

Base description: The base is located on 55,264 acres, mostly forest land, near Madison in southeastern Indiana, about 45 miles northeast of Louisville, Kentucky. Over 50,000 acres are contaminated with unexploded ordnance. The facility was constructed in 1941 and has been used over the years to test ammunition and weapon systems. Most of the facility was placed on standby status in 1946, reactivated in 1950, again placed on standby in 1958, and reactivated in 1961.
Date of closure recommendation: 1988.
Date of military mission termination: September 1994.
Estimated date of base closure: September 1995.
Summary of reuse plan: The Army plans to transfer about 47,500 acres to the U.S. Fish and Wildlife Service for preservation as a wildlife refuge for migratory birds. Such an action would eliminate the need to clean up the unexploded ordnance, which could cost between $215 million and $2 billion, depending on the level of cleanup. The three adjoining counties want the remaining land conveyed to them for economic development. However, the International Union of Operating Engineers has proposed purchasing about 5,000 acres of the property for a training center. The union is offering to buy the property and do the environmental remediation, since that work would fit into the kind of training it plans for the site.
A plastics manufacturer has indicated interest in the same property. Consequently, the Army is considering a market sale of the 5,000 acres to the highest bidder and conveyance of the remaining 2,764 acres to the counties for economic development.
Golf course: None.
Implementation status: The Army issued an Invitation to Bid for 4,320 acres of property not contaminated with unexploded ordnance. Meanwhile, the local authority is submitting an economic development conveyance request for the same property. The Army plans to have all the property disposed of by September 30, 1995, when base closure funds for operations and maintenance costs run out. Initially, disposal to non-federal agencies would be through leases. Later, when cleanup requirements were met, the property would be sold. Army officials think that Jefferson Proving Ground will be a significant base closure success story because of the savings in cleanup costs made possible by the property transfer to the Fish and Wildlife Service and the envisioned economic development of the remaining property. However, transfer of the property to the Wildlife Service faces several obstacles. According to a base official, the Wildlife Service is concerned about possible liability should someone enter the property and be injured by the unexploded ordnance, and the Wildlife Service lacks money in its budget to staff and maintain the preserve. Meanwhile, the Air National Guard has asked the Air Force to request the property for an expanded bombing and strafing area. Furthermore, the Environmental Protection Agency has not agreed that no environmental remediation is needed in the proposed wildlife refuge. The Wildlife Service opposes remediation because the agency does not want the habitat disturbed. The Environmental Protection Agency, however, is considering whether to place the base on the National Priorities List, which would require environmental remediation at the base.
The Army maintains that the unexploded ordnance is a safety problem, not a hazardous waste problem. A joint committee is studying the issue. The Environmental Protection Agency will likely require the Army to drill wells to monitor subsurface water, as well as surface water, for years to come.
Civilian jobs lost due to closure: 387.
Civilian jobs created as of 3/31/95: Base not yet closed.
The state of Indiana received a $50,000 Economic Development Administration grant to plan for economic adjustment associated with closure of the base. The Madison Chamber of Commerce received an Economic Development Administration grant of $850,000 to construct a new building in Madison for business incubator and technical training programs. Former base employees will have priority in starting new businesses at the site.
National Priorities List site: No.
Contaminants: Solvents, petroleum products, heavy metals, depleted uranium, and unexploded ordnance.
Estimated cleanup cost: $10.9 million (assuming unexploded ordnance will not have to be cleaned up).
Estimated date cleanup complete or remedy in place: May 1997.

Base description: The depot is located on 780 acres, 10 miles east of Lexington. It has 1.8 million square feet of covered storage space. It was established in 1941, and it has been used to store radar and communications equipment. Depot properties, including buildings and the golf course, have deteriorated since the closure decision was announced and the Army curtailed its maintenance.
Date of closure recommendation: 1988.
Estimated date of military mission termination: September 1995.
Estimated date of base closure: September 1995.
Summary of reuse plan: The Army is retaining one building located on 4 acres of land for a Defense Finance and Accounting center. The state of Kentucky has signed a 7-year lease for the rest of the property, and it is covering the cost of renovation and repair instead of making lease payments to the Army.
The state plans to request 210 acres as a public benefit transfer for park and recreational use, and it is requesting that the remaining 566 acres of the property be conveyed to it through an economic development conveyance.
Golf course: The deterioration of the 9-hole golf course has made it unusable as a golf course. In determining the course's fair market value, the appraisal was modified to categorize it as unimproved ground. The state plans to request the golf course as part of the public benefit transfer for park and recreation purposes.
Implementation status: The state appropriated $1.8 million to rehabilitate deteriorating buildings and cover operating costs. Current operations by a military contractor at the base are providing about 500 jobs. The state also has a sublease with the Kentucky National Guard for training-related use of several buildings and some base land. Under a DOD contract, the state is using some buildings for processing military equipment being brought back from Europe, and it is negotiating to sublease additional space to several other organizations.
Civilian jobs lost due to closure: 1,131.
Civilian jobs created as of 3/31/95: Base not yet closed.
National Priorities List site: No.
Contaminants: Volatile and semivolatile organic compounds, heavy metals, polychlorinated biphenyls, pesticides, and herbicides.
Estimated cleanup cost: $25 million. Base officials are awaiting Environmental Protection Agency and state approval of remediation plans.
Estimated date cleanup complete or remedy in place: To be determined.

Base description: The naval station and hospital, as well as several housing areas and a golf course, are located on 932 acres at various sites in the Long Beach area. Portions of the property lie within the Long Beach city limits, while other portions are in nearby Los Angeles County towns. The Navy began acquiring property for the station in 1935.
In 1946, the station was chartered to provide welfare, recreation, and social facilities, in addition to maintaining facilities for the operation and berthing of tugboats, barges, and similar vessels. In 1964, the U.S. government purchased the land for the hospital from the city of Long Beach, and the hospital was commissioned in 1967.
Date of closure recommendation: 1991.
Date of military mission termination: Hospital, March 1994; Naval Station, September 1994.
Date of base closure: Hospital, March 1994; Naval Station, September 1994.
Summary of reuse plan: The Navy plans to transfer 592 acres, including the main station, the golf course, and over 1,000 family housing units, to the naval shipyard. It also plans to transfer 17 acres to the Department of Labor for a Job Corps training center. The Long Beach school district received 62 acres as an educational public benefit transfer in September 1994. California State University, Long Beach, requested an economic development conveyance of 30 acres, which include 294 family housing units. The Navy expects that 148 acres will be conveyed for future expansion of Long Beach and Los Angeles port facilities and transportation corridors to the ports. Plans call for at least 26 acres to be used for homeless assistance, including 204 family housing units. Disposition of the remaining 57 acres, including the naval hospital, is undetermined. Additional acres are being considered for homeless assistance groups under the Base Closure Community Redevelopment and Homeless Assistance Act of 1994. DOD has recommended to the 1995 Base Realignment and Closure Commission that the naval shipyard be closed. If this recommendation is sustained, the property being transferred to the shipyard will be disposed of as part of the shipyard closure process.
Golf course: The golf course property is owned by the Army and leased to the Navy through an indefinite lease.
The Navy plans to retain the golf course, which is located about 10 miles from the naval station and 3 miles from the naval hospital, and transfer it to the naval shipyard. Implementation status: A reuse plan for the Los Angeles portion of the property has not yet been completed. Reuse disputes between Long Beach and nearby communities have led to delays in property disposition decisions. The Long Beach plan calls for the hospital to be converted into a retail center, while an opposing plan supported by nearby communities calls for it to become a Los Angeles County Office of Education administrative building. DOD’s Office of Economic Adjustment hired a consultant to do an independent study that the Navy will use, along with the environmental impact study, to determine the preferred use for the property. The Long Beach plan calls for the Navy to sell the property for about $20 million, while the other plan involves an educational public benefit transfer. A draft environmental impact statement for the hospital was published in February 1995. The Navy expects to make its property disposition decisions in July 1995. Homeless assistance plans have not been settled. In response to a community challenge, the Department of Housing and Urban Development reversed its position and declared that 66 of the 140 housing units at the Taper Avenue housing site designated for a homeless assistance provider are unsuitable for that purpose because they are located too close to some aviation fuel tanks. A community group also asked the Department of Health and Human Services to reexamine the provider’s suitability to undertake such a project. Another homeless assistance provider that was approved to receive a portion of the Savannah/Cabrillo housing lost financial backing and was therefore disqualified from receiving it. Both Los Angeles and Long Beach are developing new plans to address homeless needs.
The city of Long Beach is still committed to using 26 acres of the property for homeless assistance, possibly through temporary leasing of some facilities. Civilian jobs lost due to closure: 417. Civilian jobs created as of 3/31/95: Not available; most of the property is being retained for naval shipyard. National Priorities List site: No. Contaminants: Petroleum hydrocarbons, paints, solvents, asbestos, trichloroethylene, and battery acid. Estimated cleanup cost: $125.3 million. Estimated date cleanup complete or remedy in place: To be determined. Base description: Loring is located on 9,482 acres and is 5 miles from the Canadian border in Limestone, Maine, near the town of Caribou. Along with the approximately 8,700-acre main base, Loring has several off-site parcels in nearby towns, which include housing tracts. Prior to closure, Loring was home to B-52 bombers and KC-135 tankers. Date of closure recommendation: 1991. Date of military mission termination: March 1994. Date of base closure: September 1994. Summary of reuse plan: DOD will retain 400 acres for use by the National Guard and 14 acres for a Defense Finance and Accounting Service center. The Fish and Wildlife Service will receive 6,000 acres for a wildlife preserve. The Bureau of Indian Affairs will receive about 600 acres of property at the main base and about 60 housing units on 20 acres in the nearby town of Presque Isle. This property will be held in trust by the Bureau for reuse by the Aroostook Band of the Micmac Indian Tribe. The Air Force also plans to transfer 50 acres to the Department of Labor for a Job Corps training center and 18 acres through public benefit transfers for several educational programs. The remaining 2,380 acres will likely be disposed of through an economic development conveyance. The initial reuse plan called for the base to be used for an airport and aviation-related enterprises. 
The reuse plan asks the federal government to pay $35 million of the projected $40 million in base conversion costs over 20 years, including the cost of demolition of buildings. In addition, local officials want DOD to cover base caretaker costs for 15 years. Golf course: The 9-hole golf course has been leased to the local authority. The authority plans to request the golf course as part of an economic development conveyance. Implementation status: A joint study by the Federal Aviation Administration and the Maine Department of Transportation concluded that another airport was not needed in the region. The Federal Aviation Administration indicated, however, that it would consider approving plans for an airport at Loring if a market developed for an air cargo operation that needed a long, heavy-duty runway. Loring, however, is experiencing the same difficulties as other rural bases in attracting aviation-related businesses. Since its closure, the base has been maintained under a 3-year caretaker agreement. Under the agreement, the Air Force covers nearly 100 percent of the caretaker costs for the first year, but the percentage is expected to decline in subsequent years if businesses can be attracted to the base. According to a base official, the Defense Finance and Accounting Service center should be in operation by the summer of 1995 and will provide about 500 jobs within 2 years. The local authority is hopeful the center will act as a catalyst to attract other businesses to the base. Civilian jobs lost due to closure: 1,326. Civilian jobs created as of 3/31/95: 144 (these jobs are related to caretaker operations). The Economic Development Administration awarded $1,590,000 to the city of Fort Fairfield to increase the capacity of the sewage treatment facility and $677,000 to the Northern Maine Development Commission for technical assistance. The Federal Aviation Administration grant was awarded to the local authority for airport facilities and equipment.
National Priorities List site: Yes. Contaminants: Volatile organic compounds, waste fuels, oils, spent solvents, polychlorinated biphenyls, pesticides, and heavy metals. Estimated cleanup cost: $141.9 million. The cleanup of contaminants at Loring is progressing. The Air Force is signing agreements with environmental regulators, and the base cleanup team is facilitating the work. Through interagency cooperation, $10 million was saved by combining the cleanup of two sites. The Environmental Protection Agency granted a waiver to allow marginally contaminated soil that had to be cleaned from a quarry to be used to cap a landfill. However, the base’s inclement weather restricts cleanup work to the summer months, slowing cleanup completion. Estimated date cleanup complete or remedy in place: September 1999. Base description: Lowry is located on 1,866 acres in a suburban area between Denver and Aurora. The base was established in 1937 as an Army Air Corps technical school, and it has been used as a technical training center since that time. In addition, a Defense Finance and Accounting center and the Air Reserve Personnel Center are located on the base. Date of closure recommendation: 1991. Date of military mission termination: April 1994. Date of base closure: September 1994. Summary of reuse plan: The plan calls for mixed-use urban development combining business, training, education, recreation, and residential uses to make maximum use of existing facilities and land. DOD will retain 115 acres for the Defense Finance and Accounting center, an Air Reserve personnel center, and the 21st Space Command Squadron. The Air Force is conveying 220 acres in educational public benefit transfers to a consortium of Colorado colleges and the Denver public school system for educational and job training centers. Initially, homeless assistance providers were approved to receive 47 acres, including 200 family housing units and dormitories.
However, under a plan worked out with the city of Denver and the Department of Housing and Urban Development, the providers will withdraw their requests for some of this property in return for funding to establish homeless facilities at dispersed locations in the Denver metropolitan area. The officials involved believe this plan will better meet the needs of the homeless than would concentrating the facilities at Lowry. In addition, parks and recreation public benefit transfers will total 175 acres. Health-related public benefit transfers totaling 22 acres will be used for such purposes as a blood bank and a research center. An economic development transfer of 711 acres will go to the Lowry Economic Redevelopment Authority. This acreage will increase if homeless providers withdraw some of their requests for base property as expected. Market sales are planned for the remaining 576 acres, including the golf course and residential areas. Golf course: The golf course is under interim lease to the city of Denver. Its sale awaits environmental clearances. A residential landfill adjacent to the golf course may not require appreciable cleanup if its future use is open space or recreational, such as extension of the golf course. Implementation status: Following final decisions on the disposition of base property in August 1994, base closure officials have been proceeding with disposition agreements. The community college educational consortium has signed an interim lease and is conducting 80 courses for 800 students. Several other leases have been signed or are being negotiated. Four of the homeless providers have withdrawn their requests for base housing in return for a contract with the city to provide space elsewhere in the community. Pending final environmental clearances, long-term leases will be used to promote immediate reuse on most parcels. 
In addition, negotiations have begun between the Air Force and the local authority regarding property sales and the economic development conveyance. The economic development conveyance negotiations involve an up-front fair market settlement price in accordance with recent regulations. Civilian jobs lost due to closure: 2,290. Civilian jobs created as of 3/31/95: 104. The Economic Development Administration grant to the cities of Aurora and Denver has provided funds to prepare a work plan for identifying market opportunities for businesses affected by the base closure. National Priorities List site: No. Contaminants: Waste oil, general refuse, fly ash, coal, metals, and fuels. Estimated cleanup cost: $18.8 million. Estimated date cleanup complete or remedy in place: September 1999. Base description: Mather is located on 5,716 acres in the suburbs of Sacramento. The base was first activated in 1918 as a combat pilot training school, then placed on inactive status from 1922 until 1930 and again from 1932 until 1941. More recently, Mather hosted a Strategic Air Command Bombardment Wing and an Air Refueling Group. Date of closure recommendation: 1988. Date of military mission termination: May 1993. Date of base closure: September 1993. Summary of reuse plan: Under the plan, the Air Force will retain the 26-acre hospital and the Army will retain 31 acres for the National Guard. In addition, the Veterans Administration is requesting a 20-acre site to construct a new clinic and nursing home. Public benefit transfers will include 2,883 acres for the airport, 1,462 acres for county parks and recreation, and 95 acres for educational purposes such as a law enforcement training center. In addition, 28 acres are to be transferred to the Sacramento Housing and Redevelopment Agency to provide facilities for the homeless, including 60 family housing units and 200 single housing units. 
The plan calls for the remaining 1,171 acres to be sold, including the 174-acre golf course and 997 acres for commercial, industrial, and residential development. Golf course: The Air Force disposed of the golf course through a negotiated sale to the county for $6 million. Implementation status: The airport transfer was delayed over air quality issues. However, a long-term lease conveyance was signed in March 1995 to begin civilian airport use. Some of Mather’s missions moved to nearby McClellan Air Force Base, and some air emission mitigation measures may be needed to permit civilian aviation activities at Mather. Utility system and infrastructure costs have also posed some difficulties. Local utility companies have been asked to purchase these systems, but they are concerned about the cost of upgrading the systems. The municipal utilities district estimated it would cost between $2.5 million and $3 million to upgrade the electrical distribution system. The negotiated sale of the housing has been abandoned because of disputes over fair market value. Instead, the Air Force will sell the housing publicly. Furthermore, according to a base official, sale of developable parcels of land at Mather will likely be piecemeal, requiring more time and effort. Civilian jobs lost due to closure: 1,012. Civilian jobs created as of 3/31/95: 241. Sacramento received the Economic Development Administration grant to assist with the preparation of an economic development plan and the Federal Aviation Administration grant for an airport reuse feasibility study. National Priorities List site: Yes. Contaminants: Solvents, cleaners, volatile organic compounds, plating waste, and heavy metals. Estimated cleanup cost: $94 million. Estimated date cleanup complete or remedy in place: September 1997. Base description: The station was located on 1,577 acres on San Francisco Bay in Mountain View, near Sunnyvale, 7 miles north of San Jose.
It was originally commissioned in 1933 as the home base for a Navy dirigible. Its recent mission was to support anti-submarine warfare training and patrol squadrons. The National Aeronautics and Space Administration’s (NASA) Ames Research Center lies adjacent to the Naval Air Station at Moffett. Lockheed Missile and Space Company and other government contractors in the adjacent community also use the airfield. The Onizuka Air Force Station, a satellite tracking and control operation, is also located adjacent to Moffett, but it has no airfield or planes, and it does not use the Moffett runway. The 1991 BRAC Commission recommended that the federal government transfer the entire naval air station directly to NASA. Date of closure recommendation: 1991. Date of military mission termination: July 1994. Date of base closure: July 1994. Summary of reuse plan: The Navy’s plan called for the no-cost transfer of 1,440 acres to NASA and 130 acres of base housing to the Air Force. A 7-acre off-base site of former housing is to be sold for a negotiated price to the city of Sunnyvale, which plans to use the site for developing affordable housing. NASA plans for airfield facilities to be used by various NASA tenants, including Lockheed, an Army medical evacuation unit, and Bay Area Reserve and National Guard units, some of which are relocating from other closing Bay Area bases. NASA itself will only use 10 percent to 20 percent of the property, and its operations are expected to make up only about 30 percent of the airfield’s use. Golf course: The golf course is part of the property being transferred to NASA, which is having the Air Force operate it through its Morale, Welfare, and Recreation program. As with other federal agency uses of Moffett facilities, the Air Force contributes proportionally to NASA for overall operations and maintenance costs. Implementation status: The active duty Navy mission ceased, and the base was transferred to NASA on July 1, 1994. 
As of November 1994, a NASA official reported that NASA had received commitments for about 80 percent of the available buildings and 50 percent of the airfield use. NASA is marketing Moffett property only to federal agencies and contractors because of the BRAC decision that it be kept as a federal facility. As more bases close, NASA hopes to attract more military and military-related units. However, DOD has recommended to the 1995 BRAC Commission that the Air National Guard unit at Moffett be moved to McClellan Air Force Base and that the Onizuka Air Force Station be downsized. Furthermore, NASA faces major budget cuts in coming years and is questioning whether it can handle the operational costs of Moffett Field under the current arrangements. Civilian jobs lost due to closure: 633. Civilian jobs created as of 3/31/95: 194. National Priorities List site: Yes. Contaminants: Volatile and semivolatile organic compounds, petroleum products, heavy metals, polychlorinated biphenyls, battery acid, polynuclear aromatic hydrocarbons, benzene, toluene, ethylbenzene, and xylene. Estimated cleanup cost: $52.9 million. According to the agreement between the Navy and NASA, the Navy did not have to certify that the property was clean before the transfer took place. However, the agreement calls for the Navy to remain responsible for the cleanup, which may extend to the year 2010. Estimated date cleanup complete or remedy in place: 2010. Base description: This base is located on 3,744 acres by the Atlantic coast, 100 miles north of Charleston, in an area with many resort beaches and golf courses. Beginning in 1939, the site was used as a municipal airport. In 1941, the War Department acquired the airfield from the city of Myrtle Beach. It was used for training throughout World War II and was then deactivated, and the runways and tower were given to the city. The Air Force reacquired the airfield from the city in 1955. Most recently, it was home to a tactical fighter mission. 
Date of closure recommendation: 1991. Date of military mission termination: September 1992. Date of base closure: March 1993. Summary of reuse plan: The plan calls for a 1,247-acre airport public benefit transfer. It further designates 1,555 acres to be included in a land exchange with the state of South Carolina, as authorized by Public Law 102-484, section 2832. In return, the Air Force will receive 12,521 acres of forested land near Shaw Air Force Base for a bombing range, a portion of which the Air Force had been leasing. Also under the plan, the 224-acre golf course will be a public benefit transfer to the city for a municipal golf course, and a 12-acre site is designated as an educational public benefit transfer for a fire training center. The Air Force plans to sell the chapel and credit union properties, totaling about 4 acres. The disposition of the remaining 702 acres, including 800 housing units, is undetermined, but could include mixed-use redevelopment and airport expansion. Accordingly, the redevelopment authority and the Air Force are discussing possible negotiated sale or economic development conveyance. A developer has offered $11.1 million for the housing. Several housing units have been requested for homeless assistance, which DOD indicated is consistent with the planned residential use of the facilities. Golf course: The Air Force planned to dispose of the golf course through a negotiated sale to the state. However, the city requested the course as a public benefit transfer for use as a municipal course. This request was subsequently endorsed by the Department of the Interior and approved by the Air Force. A private developer had offered $3.5 million for the course. Implementation status: A conflict between the city and the county over the need for and expansion of the airport caused delays in property disposition decisions. State legislation created a central authority to handle the dispute and make reuse decisions. 
Of the property exchanged with the state, the state has sold 69 acres to an electronics firm and is in the process of selling much of the rest for private development of a tourist resort complex. However, environmental cleanup clearances are needed before the deal is finalized. The Air Force will sell the 1.78-acre credit union site to the credit union for about $76,500, and a tentative agreement has been reached to sell a 2-acre chapel site for $280,000. One homeless assistance request remains under consideration by the Air Force. Civilian jobs lost due to closure: 799. Civilian jobs created as of 3/31/95: 588. The Economic Development Administration grants consisted of $1 million to the Grand Strand water and sewage authority and $2.5 million to the city of Myrtle Beach to construct water and sewage facilities. The Federal Aviation Administration grants were awarded for planning, a noise abatement study, airport construction projects, and equipment, such as rescue and fire-fighting equipment. In addition, before the 1991 base closure decision, the Federal Aviation Administration provided $13.1 million in grants to help develop civilian airport facilities. National Priorities List site: No. Contaminants: Spent solvents, fuel, waste oil, volatile organic compounds, heavy metals, asbestos, and paints and thinners. Estimated cleanup cost: $27 million. Estimated date cleanup complete or remedy in place: March 1997. Base description: Norton is located on 2,115 acres adjacent to the city of San Bernardino, 60 miles east of Los Angeles. The base was activated in 1942, and its primary mission included maintenance of aircraft and aircraft engines. In 1966, its mission changed to maintaining airlift capability. Date of closure recommendation: 1988. Date of military mission termination: June 1993. Date of base closure: March 1994. Summary of reuse plan: The plan calls for 78 acres of housing to be retained by the Air Force for personnel at nearby March Air Force Base. 
When the March base, which was recommended for realignment in 1993, declares this property excess, it will be disposed of. Under the plan, DOD will retain 34 acres for a Defense Finance and Accounting Service center and transfer 33 acres, including a headquarters building and aircraft space, to the Forest Service for its fire-fighting operations. Furthermore, public benefit transfers will include 1,267 acres for an airport, 24 acres for parks and recreation, and 10 acres for educational purposes to local colleges. Other public benefit transfers will include the 4-acre chapel and youth center sites, which will go to a homeless assistance provider, 24 acres for roads and road widening, and the base’s water and sewer system. The remaining 641 acres will be an economic development conveyance, under the terms of an agreement that guarantees $52 million in revenue to DOD after 15 years. Under this agreement, DOD will receive 40 percent of the gross revenues from leases and 100 percent of the proceeds from any property sales. After 15 years, the authority is to pay DOD any remaining balance. The San Manuel Indians have expressed interest in purchasing a parcel of land for light manufacturing use, and they are also pursuing a request through the Bureau of Indian Affairs for a building to be used as a clinic. Golf course: The local redevelopment authority submitted a $6-million bid for the golf course as part of the $52-million economic development package, which was accepted by the Air Force. The authority leased the course for $190,000 annually prior to the sale. Implementation status: Reuse was delayed by a homeless request for a major portion of the base that subsequently fell through. Initially, the disposition of the utility systems was also disputed, but the dispute was resolved.
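The Norton revenue-sharing terms described above (DOD receives 40 percent of gross lease revenues and 100 percent of sale proceeds, and the authority pays any remaining balance on the $52 million guarantee at year 15) reduce to simple arithmetic, which can be sketched as follows. The function name and the dollar amounts in the example are illustrative assumptions, not figures from this report:

```python
def dod_receipts(lease_revenues, sale_proceeds, guarantee=52_000_000):
    """Sketch of the conveyance terms: DOD gets 40% of each year's gross
    lease revenue plus 100% of property sale proceeds; at year 15 the
    authority owes any shortfall against the guarantee."""
    paid = sum(0.40 * r for r in lease_revenues) + sum(sale_proceeds)
    balance_due = max(0.0, guarantee - paid)
    return paid, balance_due

# Hypothetical illustration: $5 million in lease revenue in each of the
# 15 years and a single $10 million property sale.
paid, balance = dod_receipts([5_000_000] * 15, [10_000_000])
print(paid, balance)
```

Under these illustrative figures, interim payments would total $40 million, leaving a $12 million balance due at year 15 to meet the guarantee.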
A final agreement on the economic development conveyance was signed in March 1995; the agreement obligates the authority to pay the Air Force $52 million within 15 years for the 641 acres, including the golf course and the utility systems other than sewer and water, which will be conveyed for public health purposes. The authority is already negotiating seven subleases, under which the tenants will receive free rent for 6 to 12 months in return for renovating the old buildings. Until the environmental cleanup is complete, most property is being disposed of under leases instead of deed transfers. According to base closing officials, processing leases and deed transfers has been time-consuming. Public benefit transfers have been delayed because the sponsoring federal agencies are reluctant to transfer property where cleanup has not been completed. The Air Force is preparing long-term leases in lieu of assignment to sponsoring agencies. Civilian jobs lost due to closure: 2,133. Civilian jobs created as of 3/31/95: 25. The Economic Development Administration funds were awarded to the city of San Bernardino to improve the roads and water system at Norton. The Federal Aviation Administration grants were awarded to the local authority for $118,638 to develop an airport master plan and for $2.1 million for airport construction and improvements. National Priorities List site: Yes. Contaminants: Waste oils and fuel, spent solvents, paints, refrigerants, heavy metals, and volatile organic compounds. Estimated cleanup cost: $117.4 million. Estimated date cleanup complete or remedy in place: December 2000. Base description: Pease is located on 4,257 acres at Portsmouth in southeastern New Hampshire. It started operations in 1956 as a Strategic Air Command base; its mission was to maintain a force capable of long-range bombardment and air-to-air refueling operations. Date of closure recommendation: 1988. Date of military mission termination: September 1990. 
Date of base closure: March 1991. Summary of reuse plan: The Air Force retained 230 acres for the Air National Guard and transferred 1,095 acres to the Fish and Wildlife Service for a wildlife refuge. Local authorities requested a 2,305-acre airport public benefit transfer and a 600-acre economic development conveyance, which would include revenue-generating property to support airport operations. The New Hampshire state transportation agency will receive a 27-acre conveyance for highway widening. Golf course: The local authority requested that the golf course be included as part of the economic development conveyance, but it is reevaluating the request. Meanwhile, the golf course is being leased to the local authority for $100,000 annually. Implementation status: A portion of the base, including the airfield, is under lease to the local authority, and 41 tenants have created more than 1,000 jobs thus far. A commercial airport and an aircraft maintenance complex are in operation. Other tenants include the U.S. Department of State’s passport and visa processing center and a biotechnology firm. The state has made a large financial commitment to the fledgling airport, including $16 million a year in operating loans and over $100 million in bonding guarantees for business development. The Air Force remains the caretaker of about 1,050 acres that have not been leased. Although it has been 3 years since property disposition decisions were made, no deeds have been transferred. According to base officials, considerable time and effort have been spent on preparing environmental studies and reports and seeking cleanup approvals, but no end is in sight. On August 29, 1994, in a suit brought by the Conservation Law Foundation and the town of Newington, the U.S.
District Court ruled that the Air Force violated section 120(h) of the Comprehensive Environmental Response, Compensation and Liability Act by transferring property under a long-term lease without an approved remedial design. However, the lease was not invalidated. This ruling has affected DOD’s leasing practices at other closing bases. The court also ordered the Air Force to prepare a supplemental environmental impact statement, which will be complete in July 1995. Civilian jobs lost due to closure: 400. Civilian jobs created as of 3/31/95: 1,038. To assist with industrial development, the Economic Development Administration awarded grants amounting to $8,475,000 to the Pease Development Authority to renovate or demolish buildings and to widen the main roadway entrance to the base to facilitate public access. In addition, the Pease community and the Portsmouth Naval Shipyard community are expected to share the benefits of a $3,450,000 Economic Development Administration grant to the New Hampshire state port authority for the construction of a barge facility in the area. Federal Aviation Administration grants were awarded for planning, preparing a noise compatibility study, installing equipment, and improving the airport. The largest of the grants was $3.8 million to rehabilitate a runway. In addition to the grants shown above, the Department of Transportation provided $400,000 for a surface transportation study and the Environmental Protection Agency provided $120,000 for a watershed restoration study. National Priorities List site: Yes. Contaminants: Volatile organic compounds, organic solvents, spent fuels, waste oils, petroleum/oils/lubricants, pesticides, paints, and elevated metals. Estimated cleanup cost: $140 million. Estimated date cleanup complete or remedy in place: November 1997. Base description: These naval facilities are located on 1,502 waterfront acres, 4 miles south of Philadelphia’s central business district. 
The 348-acre shipyard includes piers and water acres that contain a mothballed fleet. The BRAC Commission determined that the shipyard should be closed and preserved so that it would be available if needed in the future. The 1,105-acre naval station is adjacent to the shipyard. The property was deeded to the Navy by the city in 1868. The 49-acre hospital property is located about 1 mile from the base. The main hospital building was completed in 1935. Date of closure recommendation: Hospital—1988, Naval Station and Shipyard—1991. Estimated date of military mission termination: September 1995. Estimated date of base closure: Naval Station—January 1996 and Shipyard—September 1996. Summary of reuse plan: Under the current plan, the Navy will retain 550 acres, including the shipyard. The plan calls for the National Park Service to receive 1 acre and for most of the hospital property to be public benefit transfers—30 acres for park land and 6 acres for a nursing home. The remaining 13 acres of hospital property are to be sold for residential development. Reuse plans for 902 acres containing most of the naval station property have not been determined. The emphasis of the reuse plan is on economic development and job creation. The reuse authority hopes to encourage businesses, both large and small, to use existing buildings, and there is one large open site, the former airfield, that is suitable for large site development. Golf course: None. Implementation status: Local authorities’ initial challenge of the closure decision delayed the start of reuse planning for the closing facilities. In early 1994, the U.S. Supreme Court ruled against the challenge. The local reuse committee has completed a conceptual reuse plan, which seeks to attract private business and redevelop the area through economic development transfers and long-term leases.
According to the base closure officer, although base cleanup will take 5 more years, most base property could be leased and no environmental issues should prevent reuse from occurring. In November 1994, the Navy and the city executed a master lease that permits the city to sublease the preserved shipyard facilities, thus allowing for job creation at the facility. Civilian jobs lost due to closure: 8,119. Civilian jobs created as of 3/31/95: Base not yet closed. The Office of Economic Adjustment has provided about $2 million in planning grants. In April 1995, the Office also awarded a $50-million grant to establish a revolving loan fund to invest in projects that would accelerate the conversion of the naval station and shipyard to civilian use. Economic Development Administration grants awarded to the city of Philadelphia included $1.6 million to establish a revolving loan fund to assist in the conversion of defense dependent industries and $1.1 million for a feasibility study on the potential commercial reuse of shipyard and hospital buildings and specialized equipment to determine the best use and whether there are market matches. The study also was to determine the feasibility of extensive asbestos removal from the hospital building. In addition, the Navy is expending $16 million in military construction funds to improve utility systems on the retained portion of the base. Furthermore, the 1995 Defense Appropriations Act directed the Navy to spend $14.2 million for similar utility improvements on the portion of the base that is being disposed of. National Priorities List site: No. Contaminants: Heavy metals, polychlorinated biphenyls, petroleum/oil/lubricants, solvents, and volatile organic compounds. Estimated cleanup cost: $120 million. Estimated date cleanup complete or remedy in place: 1999. Base description: The Presidio is located on 1,480 acres in San Francisco fronting the ocean and San Francisco Bay. 
It has been a military garrison for 220 years, occupied by Spain, Mexico, and the United States, and was designated a national historic landmark in 1962. The property includes the Letterman Army Medical Center and the Army Institute of Research, as well as a former Public Health Service hospital. Legislation enacted in 1972 to create the Golden Gate National Recreation Area included a provision mandating the transfer of the Presidio to the National Park Service if DOD determined the base was in excess of its needs. Date of closure recommendation: 1988. Date of military mission termination: September 1994 (Sixth Army Headquarters—September 1995). Estimated date of base closure: September 1995. Summary of reuse plan: The Army is transferring the entire 1,480-acre base to the National Park Service to become part of the Golden Gate National Recreation Area. The plan calls for the creation of a nonprofit corporation called the Presidio Trust to manage the conversion of the base into a park and to be responsible for the renovation and leasing of facilities. Golf course: The golf course will be transferred to the Park Service by October 1995. The Park Service is seeking a concessionaire to operate the course, and it plans to use revenues from the course, which could exceed $1 million annually, to help support park operations. Implementation status: After months of discussions and considerable controversy, the Army and the Park Service agreed on the transfer terms, and the property was transferred to the Park Service on October 1, 1994. The Army retained an irrevocable special use permit for a portion of the base to be used by Sixth Army headquarters. However, in December 1994, the Army announced that it would cease operations at the Presidio by October 1995, at which time the Park Service will have sole responsibility for the costly maintenance of the Presidio. Since Congress did not authorize the Presidio Trust in 1994, the Park Service is handling conversion efforts. 
The Park Service had hoped to lease the Letterman complex to the University of California Medical Complex, but the university announced in December 1994 that it would not lease the facility. Civilian jobs lost due to closure: 3,150. Civilian jobs created as of 3/31/95: 725. In addition, before turning the property over to the Park Service, the Army spent $69 million to upgrade various features of the base’s infrastructure, including its sewer systems, water treatment facilities, electrical systems, and roofs. However, these repairs do not include bringing the base’s buildings up to local codes. National Priorities List site: No. Contaminants: Petroleum hydrocarbons, heavy metals, solvents, and pesticides. Estimated cleanup cost: $104.6 million. Estimated date cleanup complete or remedy in place: July 1996. Base description: The 151-acre base is located on Lake Washington in Seattle. In 1922, the Navy established a 366-acre air station at the site. In 1973, the Navy surplused 215 acres, including the airfield; this property became home to the National Oceanic and Atmospheric Administration and the city’s Magnuson Park. The remaining property has served as a Navy administrative facility and includes a small research facility for the Fish and Wildlife Service. Date of closure recommendation: 1988—partial closure; 1991—full closure. Estimated date of military mission termination: September 1995. Estimated date of base closure: September 1995. Summary of reuse plan: The Navy plans to transfer 10 acres to the National Oceanic and Atmospheric Administration, which the agency will use to expand operations at its adjacent facility. The Fish and Wildlife Service is to receive 4 acres, which is the site of an on-base laboratory it currently operates. Seattle’s reuse plan calls for the remainder of the base to be public benefit transfers of 18 acres for homeless assistance, 82 acres for parks and recreation, 21 acres for educational activities, and 16 acres for roadways. 
Under this plan, homeless providers would receive 18 acres, including 3 family housing units and 197 single housing units. Under the provisions of the Base Closure Community Redevelopment and Homeless Assistance Act of 1994, the city is interested in incorporating the homeless housing with the development of mixed housing on that property. The Bureau of Indian Affairs requested the majority of the base (85 acres) on behalf of the Muckleshoot Indian tribe, which seeks to use the property for educational and economic development activities. The Muckleshoots have indicated a willingness to reduce the size of their request if the city is willing to negotiate. Golf course: None. Implementation status: The city of Seattle opposes the Muckleshoot plan, saying it is incompatible with the community’s reuse plan. The city also opposes the tribe’s gaining sovereignty over base property, which would remove it from local zoning and land use regulations. The Department of Interior has asked DOD to give the Bureau of Indian Affairs’ request priority under federal rules for disposing of excess property. As early as 2 years ago, DOD asked the parties to work on a joint reuse plan. DOD’s property disposition decision is pending because of the issue, which is delaying reuse progress at the base. Base closure and community officials doubt that the stalemate at the local level will be broken without a DOD policy decision more clearly defining Native American status in the base closure screening process and the concept of sovereignty as it applies to base closure sites not located on a reservation. Civilian jobs lost due to closure: 754. Civilian jobs created as of 3/31/95: Base not yet closed. National Priorities List site: No. Contaminants: Petroleum products and metals. Estimated cleanup cost: $5.2 million. Estimated date cleanup complete or remedy in place: January 1995. Base description: The station is located on 428 acres on the southern edge of Kansas City. 
The city conveyed the property to the Air Force to establish the base in 1953. Until 1970, the Air Defense Command had the primary mission on the base. In 1979, the Air Force phased down the base, and in 1980, the Air Force Reserve assumed operational control. In 1985, the Air Force transferred ownership of much of the airfield to the city, but the city was unable to develop a successful commercial airport, and the Air Force Reserve has remained the biggest user. Date of closure recommendation: 1991. Date of military mission termination: July 1994. Date of base closure: September 1994. Summary of reuse plan: DOD plans to retain 238 acres—184 acres for the Army Reserves and 54 acres for the Marine Corps. Most of the remaining property, about 178 acres, will be a public benefit transfer to the city to expand the airport. The city of Belton plans to purchase the remaining 12 acres at fair market value. Golf course: None. The golf course was disposed of when the Air Force property was transferred in 1985. Implementation status: The Air Force has turned responsibility for control tower operations and navigational maintenance over to the city. In addition, annual Air Force payments of $265,000 to partly cover airfield operations ceased as of October 1994. A final decision on property disposition was signed in April 1995. Civilian jobs lost due to closure: 569. Civilian jobs created as of 3/31/95: 0. The Federal Aviation Administration grants awarded to the Kansas City aviation department included $228,000 for an airport master plan, $744,000 for facilities and equipment, and $600,000 for grading and drainage. In addition, prior to the 1991 closure decision, the department received $955,800 in Federal Aviation Administration funds in 1990 for new runway approach lights. National Priorities List site: No. Contaminants: Petroleum/oil/lubricants, aqueous film-forming foam, polynuclear aromatic hydrocarbons, and solvents. Estimated cleanup cost: $5 million. 
Estimated date cleanup complete or remedy in place: September 1998. Base description: The base is located on 2,015 acres about 12 miles southeast of downtown Columbus. Construction of the base began in January 1942, and it was activated as a training center for Army Air Corps glider pilots. The base was deactivated in 1949 and reactivated in 1951 as a Strategic Air Command base supporting the Korean War build-up. The Air Force base closed in 1978, and the airfield was leased long-term to the community in 1984. However, most of the support for airport operations has continued to come from the Air National Guard. Air Guard base property to be disposed of under the current closure will include conveyance of property included in the long-term lease, as well as other runways and taxiways. Date of closure recommendation: 1991. Date of military mission termination: No active duty missions. Date of base closure: September 1994. Summary of reuse plan: The Air Force will retain 203 acres for use by the Air National Guard, and it will transfer 164 acres to the Army for use by the Army National Guard and Reserves. The remaining 1,648 acres will be an airport public benefit transfer to the port authority. Golf course: None. The golf course was disposed of when the Air Force base was closed in 1979. Implementation status: Final property screening of acreage and buildings under the McKinney Homeless Assistance Act was completed, and no formal homeless requests have been received. The public comment period on the environmental impact statement has concluded, and the statement was issued in February 1995. A final decision on property disposition was signed in May 1995. The port authority has been having difficulty attracting sufficient tenants to support airport operations. It currently receives an annual $3 million subsidy from the county. However, a local official reported that the port authority recently has had much greater success in attracting businesses. 
Civilian jobs lost due to closure: 1,129. Civilian jobs created as of 3/31/95: 8 (these jobs are related to caretaker operations). The Federal Aviation Administration grants were awarded for planning and airport improvements. In addition, prior to the 1991 closure decision, the Federal Aviation Administration had provided grants totaling $13.5 million to help develop civilian airport facilities. National Priorities List site: No. Contaminants: Pesticides, paint, spent fuel, waste oil, solvents, and heavy metals. Estimated cleanup cost: $41.7 million. Estimated date cleanup complete or remedy in place: June 1997. Base description: The depot is located on 487 acres in an industrial area, 7 miles southeast of downtown Sacramento. The depot first occupied its present site in 1945. Date of closure recommendation: 1991. Date of military mission termination: March 1994. Date of base closure: March 1995. Summary of reuse plan: DOD plans to retain 80 acres for use by the Army Reserve and the Navy. The Department of Health and Human Services has approved requests by homeless assistance providers for 28 acres of property, including warehouse and cold storage space for food distribution to homeless groups. The city opted for an alternative to another approved request from a homeless provider for two buildings on either side of the main administration building. Adopting the view that the operation of a homeless facility in the location would likely disrupt the economic development plan, the city instead agreed to fund the acquisition of facilities elsewhere for the homeless provider. According to a city official, the increased property tax revenue from economic development at the depot is expected to more than offset the cost of the relocation. California State University Sacramento is receiving about 8 acres for a manufacturing technology center. The remaining 371 acres have been transferred to the city of Sacramento through an economic development conveyance. 
Under the terms of the conveyance, the city will pay the Army $7.2 million for the property after 10 years. Golf course: None. Implementation status: Army officials consider the depot to be a model of successful fast-track efforts to clean up contaminants, convert facilities to civilian use, and create jobs at a closing base. Central to this success was the city’s ability to convince Packard Bell to locate its computer manufacturing operations at the depot. Key factors contributing to the company’s decision were the state’s approval of an enterprise zone, which enabled the company to qualify for tax breaks, and the city’s offer to finance renovation costs at the base. The city is financing $17 million in renovation costs to be covered by lease payments from Packard Bell. The city is allowing Packard Bell to sublease some of the property it has received and use the proceeds to help with renovation costs. Packard Bell has an option to buy the 269 acres it is leasing from the city for $8.9 million. Local officials expect that the Packard Bell move to Sacramento will create 2,500 to 3,000 direct manufacturing jobs and up to 2,500 additional jobs for suppliers in the area. The total more than offsets the jobs lost due to depot closure. Civilian jobs lost due to closure: 3,164. Civilian jobs created as of 3/31/95: 630. National Priorities List site: Yes, the base is expected to be removed from the list in June 1996. Contaminants: Waste oil and grease, solvents, metal plating wastes, and wastewaters containing caustics, cyanide, and metals. Estimated cleanup cost: $62.4 million. Enough progress has been made in base cleanup that the property being transferred to Packard Bell was suitable for transfer. Some additional base cleanup activities have been slowed by a contract award bid protest. Estimated date cleanup complete or remedy in place: June 1996. Base description: The station is located on 1,620 acres in the Orange County town of Tustin south of Los Angeles. 
It was first commissioned in 1942 and was used to support observation blimps and personnel conducting antisubmarine patrols off the coast during World War II. It was decommissioned in 1949 but reactivated in 1951 and used solely for helicopter operations. DOD’s estimate of revenues from the disposal of property at the station is higher than for any other 1988 or 1991 base closure. Date of closure recommendation: 1991. Estimated date of military mission termination: June 1997. The 1993 BRAC Commission redirected the planned relocation of Tustin military missions, which resulted in a delay in terminating these missions at Tustin. Estimated date of base closure: July 1997. Summary of reuse plan: DOD plans to retain 10 acres for the Army Reserves. The city has agreed to include in its reuse plan about 38 acres for homeless assistance programs, including family and single housing units and a facility to be used for a children’s shelter. The plan calls for 219 acres to be educational public benefit transfers for public schools and an educational coalition involving the community college. In addition, public benefit transfers for parks and recreation will total 103 acres. The current reuse plans call for 1,142 acres of the base property to be an economic development conveyance with terms to be negotiated. The remaining 108 acres are undetermined. Disputes have arisen about additional federal requests: 12 acres by the Army Reserves, 25 acres by the Air National Guard, and 55 acres of housing, consisting of 274 family housing units, by the Coast Guard. These requests for property with high market value are opposed by the community or Marine Corps headquarters or both. Other acreage requested by two Indian groups and a local homeless services coalition also conflicts with local reuse plans. Golf course: None. Implementation status: The local authority has completed its reuse plan, and preparation of the environmental impact statement based on the plan is underway. 
DOD granted a request from the authority to delay the federal screening decision. The authority is concerned that if too much of the property is given to federal and homeless assistance agencies, the local tax base will be diminished and will be insufficient to support the many infrastructure improvements that are needed to develop the base, such as construction of new roads. One local official estimated that these infrastructure improvements will cost about $200 million, which will reduce the estimated revenue from developing base property. The authority has agreed that homeless assistance requests will be incorporated into the community plan in accordance with the Base Closure Community Redevelopment and Homeless Assistance Act of 1994. Homeless requesters want more property than has been agreed to by the authority. Determination of how much property will go to homeless requesters awaits a final decision on how much property will be transferred to federal entities. Resolution of the Indian requests for property at Tustin is on hold pending clarification at the federal level of where such requests should fit in the property screening process. Civilian jobs lost due to closure: 348. Civilian jobs created as of 3/31/95: Base not yet closed. National Priorities List site: No. Contaminants: Dichloroethane, naphthalene, pentachlorophenol, petroleum hydrocarbons, trichloroethylene, benzene, toluene, ethylbenzene, and xylene. Estimated cleanup cost: $86.2 million. Estimated date cleanup complete or remedy in place: November 1999. Base description: The center is located on 840 acres in Warminster, a populated suburban area about 20 miles north of the Philadelphia city center. The facility includes an airport, as well as office and research space. The Navy acquired the facilities in 1944 from Brewster Aeronautical Corporation, which manufactured aircraft during World War II. 
The facility has served as the principal naval research, development, and evaluation center for aircraft, airborne antisubmarine warfare, and aircraft systems other than aircraft-launched weapon systems. Date of closure recommendation: 1991. Estimated date of military mission termination: July 1996. Estimated date of base closure: September 1996. Summary of reuse plan: The Navy planned to retain 100 acres, including its dynamic flight simulator (centrifuge) and its inertial navigation laboratory, leaving 740 acres for reuse. The community has decided that it does not want to reuse the center as an airport. Instead, the community hopes to attract research facilities to the site. Discussions are also underway with a consortium of eight universities for a satellite campus, and the school district is interested in obtaining property for a new junior high school. County homeless assistance providers may also be interested in obtaining some center property. In February 1995, the community finalized its reuse plan, which emphasizes public benefit and economic development transfers. Parks and recreation will account for approximately 296 acres, economic development conveyance 296 acres, education 67 acres, and the homeless 7 acres. The reuse authority has not developed a plan for how the remaining 74 acres will be disposed of, but has earmarked 44 acres for residential use and 30 acres for municipal use. DOD has recommended to the 1995 BRAC Commission that the 100 acres the Navy was retaining also be closed. According to a base official, if the Commission approves this recommendation, this property will likely be added to the economic development conveyance. Golf course: None. Implementation status: The closure process is on schedule, and environmental remediation measures are expected to be in place by the time the base closes in 1996. Civilian jobs lost due to closure: 1,979. Civilian jobs created as of 3/31/95: Base not yet closed. 
The Federal Lands Reuse Authority of Bucks County, Pennsylvania, plans to establish a 35,000-square-foot business incubator program in hangar and office space. According to a base official, the Economic Development Administration has promised a future grant of over $2 million to assist this program. National Priorities List site: Yes. Contaminants: Firing range wastes, fuels, heavy metals, industrial wastewater sludges, nonindustrial solid wastes, paints, polychlorinated biphenyls, sewage treatment sludge, solvents, unspecified chemicals, and volatile organic compounds. Estimated cleanup cost: $11.1 million. Estimated date cleanup complete or remedy in place: September 1996. Base description: Williams is located on 4,043 acres in Mesa, which is in the Phoenix metropolitan area. It was activated in 1941 as a flight training school, and pilot training was the base’s primary mission throughout its history. Date of closure recommendation: 1991. Date of military mission termination: January 1993. Date of base closure: September 1993. Summary of reuse plan: The reuse plan calls for the base to be converted into a civilian airport and for a consortium of educational and job training programs involving Arizona State University and Maricopa Community College. The local authority is to receive a 2,547-acre airport public benefit transfer. The colleges are to receive 657 acres through an educational public benefit transfer. This transfer would include the housing for the campus and the hospital, which would be operated jointly by Arizona State University and the Veterans Administration. The housing units will be leased until the university students occupy them. Two homeless providers will receive 42 acres, including 88 housing units and a chapel. The Army Reserve will receive 11 acres and the National Weather Service 1 acre. The Air Force will convey 16 acres as a public benefit transfer for public health purposes. 
The Air Force currently plans to sell the remaining 769 acres. The Gila River Indian Community is to receive the 158-acre golf course and an additional 144 acres through a negotiated sale. The remaining 467 acres, including property the local authority wanted to support the airport, are slated for negotiated sale. However, Public Law 102-484 authorized the Air Force to conduct a land exchange with the state of Arizona, whereby some of this property at Williams would be given to the state in exchange for about 85,000 acres of rangeland that the Air Force leases from the state. The Air Force has not exercised this authority, and the local authority does not favor property at Williams being conveyed to the state. Golf course: The golf course will be a negotiated sale to the Gila River Indian Community. Implementation status: Negotiations are ongoing among the local airport authority, the education consortium, the homeless coalition, the Gila Indians, the Federal Aviation Administration, and the Air Force over property disposition issues and details. The airport authority and the Gila Indians are negotiating over possible Gila partnership in the airport authority. The education and job training programs are underway, with enrollment of over 600 students expected for the fall of 1995. Civilian jobs lost due to closure: 781. Civilian jobs created as of 3/31/95: 368. The Economic Development Administration grant was awarded to the city of Mesa to fund the educational consortium plan, a land use and economic development plan, and a transportation plan. The Federal Aviation Administration grants included $125,000 for developing an airport master plan and $2,893,000 for facilities and equipment. National Priorities List site: Yes. Contaminants: Volatile organic compounds, waste solvents, fuels, petroleum/oil/lubricants, and heavy metals. Estimated cleanup cost: $42.7 million. Estimated date cleanup complete or remedy in place: December 1997. 
Base description: The facility is located on 580 acres, 25 miles south of Washington, D.C. It is bounded on the west by the Marumsco National Wildlife Refuge and consists of some laboratory buildings and a wetlands area. The Army acquired the property in 1951 for use as a military radio station. The facility became inactive in 1969. In 1971, it became a satellite installation of the Harry Diamond Army Research Laboratory at Adelphi, Maryland. Date of closure recommendation: 1991. Date of military mission termination: September 1994. Date of base closure: September 1994. Summary of reuse plan: The Army plans to transfer the entire 580 acres at no cost to the Department of the Interior to be incorporated into the Fish and Wildlife Service’s Mason Neck Wildlife Refuge. An earlier community plan had called for the developed portion of the facility to be conveyed to the community for a regional employment center and environmental education, but August 1994 legislation gave the entire property to Interior. Golf course: None. Implementation status: No date has been established for transferring the facility to the Department of the Interior. At present, the facility remains under Army stewardship and continues to be maintained in a caretaker status. The Army is continuing the environmental restoration program at the base, and it will remain responsible for remediation activities until completion. According to a base official, Interior is reluctant to sign for ownership of the property because it lacks operations and maintenance funds to care for the property and upgrade or demolish buildings, particularly until the lease expires on space occupied by local Fish and Wildlife Service staff in the nearby community. Furthermore, Interior is reluctant to assume ownership until the cleanup is complete due to concern that DOD’s environmental restoration budget will be cut, leaving insufficient DOD funds to complete the cleanup. 
The operator of a homeless assistance seed distribution program has a no-cost temporary lease from the Army for a small warehouse operation at the base. It must make arrangements with Interior if it wants to continue this activity after the transfer occurs. Civilian jobs lost due to closure: 90. Civilian jobs created as of 3/31/95: Not available; the property is being retained by a federal agency. National Priorities List site: No. Contaminants: Polychlorinated biphenyls and petroleum products. Estimated cleanup cost: $4.1 million. Other potential contaminants include ethylene glycol from a previous research and development activity and possible heavy metals in soils from past sewage sludge injection activities. Site investigation and sampling activities are continuing to confirm or rule out potential remediation sites. Estimated date cleanup complete or remedy in place: April 1997. Base description: Wurtsmith is located in northeast Michigan on the coast of Lake Huron in the township of Oscoda. It is located on 2,205 acres of Air Force property and 2,995 acres of land leased from the state, the Forest Service, and the local power company. The base was initially established in 1924 and used as an Army Air Service gunnery range. It was closed in 1945, then reactivated in 1947. In 1958, the base was expanded to host a Strategic Air Command unit. Date of closure recommendation: 1991. Date of military mission termination: December 1992. Date of base closure: June 1993. Summary of reuse plan: The plan calls for 2 acres to be transferred to the Fish and Wildlife Service. Public benefit transfers will include 1,700 acres for a civilian airport, 15 acres for parks, 10 acres for an educational consortium, and 5 acres for a health facility. Two homeless assistance providers are requesting about 7 acres of property, including 9 family housing units and a 72-bed dormitory. 
The local authorities are planning to request the remaining 466 acres, including housing units, utilities, and property available for commercial development. Since Wurtsmith is a qualifying rural area, it may be a no-cost economic development conveyance. The Chippewa Indian tribe has expressed interest in buildings for a casino, as well as some base housing, but it had not made a formal request at the time of our review. Golf course: None. Implementation status: As of December 1994, the airfield facilities were being operated on a 30-year, long-term lease. Under the lease agreement, local authorities gave up the right to restoration, which otherwise would have required the Air Force to remove unwanted buildings and a runway from land originally leased from the state. The Air Force will continue to handle caretaker costs for the rest of the base. The local authority is subleasing some of the facilities to an aircraft remanufacturer, which has created over 200 jobs. A final decision on disposition of the remaining property cannot be reached until decisions are made on requests from the homeless assistance providers and the Indian tribe. Civilian jobs lost due to closure: 705. Civilian jobs created as of 3/31/95: 553. The Economic Development Administration granted Iosco County $7,717,500 for infrastructure improvements and other assistance, including funds to connect the base to municipal water and wastewater systems and to improve and expand the capacity of those systems to handle the increased load. The grant also included $375,000 for marketing and promotion and $750,000 for technical assistance to survey and subdivide the property and map public streets and the utility lines. The Economic Development Administration granted the county an additional $2 million to establish a revolving loan fund for financing the expansion of existing businesses and for attracting new businesses to the area. 
The Federal Aviation Administration grants were for airport facilities, equipment, and planning. National Priorities List site: No. Contaminants: Waste fuel and oil, spent solvents, and volatile organic compounds. Estimated cleanup cost: $70 million. Cleanup of groundwater contamination under the housing area will take some time, but base officials hope to have remediation measures in place by 1999. Estimated date cleanup complete or remedy in place: 1999.

[Table: Recovery (percent) for Davisville Naval Construction Battalion Center, Long Beach Naval Station/Naval Hospital, Myrtle Beach Air Force Base, Philadelphia Naval Station/Naval Hospital/Naval Shipyard, Puget Sound Naval Station (Sand Point), and Tustin Marine Corps Air Station; the percentage figures are not recoverable from this text.]

GAO has issued the following reports related to military base closures and realignments:

Military Base Closures: Analysis of DOD’s Process and Recommendations for 1995 (GAO/T-NSIAD-95-132, Apr. 17, 1995).

Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995).

Military Bases: Challenges in Identifying and Implementing Closure Recommendations (GAO/T-NSIAD-95-107, Feb. 23, 1995).

Military Bases: Environmental Impact at Closing Installations (GAO/NSIAD-95-70, Feb. 23, 1995).

Military Bases: Reuse Plans for Selected Bases Closed in 1988 and 1991 (GAO/NSIAD-95-3, Nov. 1, 1994).

Military Bases: Letters and Requests Received on Proposed Closures and Realignments (GAO/NSIAD-93-173S, May 25, 1993).

Military Bases: Army’s Planned Consolidation of Research, Development, Test and Evaluation (GAO/NSIAD-93-150, Apr. 29, 1993).

Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closure and Realignments (GAO/T-NSIAD-93-11, Apr. 19, 1993).

Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments (GAO/NSIAD-93-173, Apr. 15, 1993).
Military Bases: Revised Cost and Savings Estimates for 1988 and 1991 Closures and Realignments (GAO/NSIAD-93-161, Mar. 31, 1993).

Military Bases: Transfer of Pease Air Force Base Slowed by Environmental Concerns (GAO/NSIAD-93-111FS, Feb. 3, 1993).

Military Bases: Navy’s Planned Consolidation of RDT&E Activities (GAO/NSIAD-92-316, Aug. 20, 1992).

Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments (GAO/NSIAD-91-224, May 15, 1991).

Military Bases: An Analysis of the Commission’s Realignment and Closure Recommendations (GAO/NSIAD-90-42, Nov. 29, 1989).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO provided information on reuse planning and implementation at the 37 bases closed in the first two base realignment and closure (BRAC) rounds, focusing on: (1) planned disposal and reuse of the properties; (2) successful property conversions; (3) problems that delay reuse planning and implementation; and (4) assistance provided to communities. 
GAO found that: (1) under current plans, over half the land will be retained by the federal government because it: (a) is contaminated with unexploded ordnance; (b) has been retained by decisions made by the base realignment and closure commissions or by legislation; and (c) is needed by federal agencies; (2) most of the remaining land will be requested by local reuse authorities under various public benefit transfer authorities or the new economic development conveyance authority; (3) little land will be available for negotiated sale to state and local jurisdictions or for sale to the general public; (4) reuse efforts by numerous communities are yielding successful results; (5) hundreds of jobs are being created at some bases that more than offset the loss in civilian jobs from closures, new educational institutions are being established in former military facilities, and wildlife habitats are being created that meet wildlife preservation goals while reducing the Department of Defense's (DOD) environmental cleanup costs; (6) some communities are experiencing delays in reuse planning and implementation; (7) causes of delays include failure within the local communities to agree on reuse issues, development of reuse plans with unrealistic expectations, and environmental cleanup requirements; (8) the federal government has made available over $350 million in direct financial assistance to communities; (9) DOD's Office of Economic Assistance has provided reuse planning grants, the Department of Labor has provided job training grants, and the Federal Aviation Administration has awarded airport planning and implementation grants; and (10) grants from the Department of Commerce's Economic Development Administration are assisting communities in rebuilding or upgrading base facilities and utilities and are helping communities set up revolving loan funds that can be used to attract businesses to closed bases.
Given the consequences of a severe influenza pandemic, in 2006 GAO developed a strategy for our work that would help support Congress's decision making and oversight related to pandemic planning. Our strategy was built on a large body of work spanning two decades, including reviews of government responses to prior disasters such as Hurricanes Andrew and Katrina, the devastation caused by the 9/11 terror attacks, efforts to address the Year 2000 (Y2K) computer challenges, and assessments of public health capacities in the face of bioterrorism and emerging infectious diseases such as Severe Acute Respiratory Syndrome (SARS). The strategy was built around six key themes, as shown in figure 1. While all of these themes are interrelated, our earlier work underscored the importance of leadership, authority, and coordination, a theme that touches on all aspects of preparing for, responding to, and recovering from an influenza pandemic.

An influenza pandemic—caused by a novel strain of influenza virus to which there is little resistance and which therefore is highly transmissible among humans—continues to be a real and significant threat facing the United States and the world. Unlike incidents that are discretely bounded in space or time (e.g., most natural or man-made disasters), an influenza pandemic is not a singular event, but is likely to come in waves, each lasting weeks or months, and pass through communities of all sizes across the nation and the world simultaneously. While the current H1N1 outbreak seems to have been relatively mild, the history of influenza pandemics suggests it could return in a second wave this fall or winter in a more virulent form. While a pandemic will not directly damage physical infrastructure such as power lines or computer systems, it threatens the operation of critical systems by potentially removing the essential personnel needed to operate them from the workplace for weeks or months.
In a severe pandemic, absences attributable to illnesses, the need to care for ill family members, and fear of infection may, according to the Centers for Disease Control and Prevention (CDC), reach a projected 40 percent during the peak weeks of a community outbreak, with lower rates of absence during the weeks before and after the peak. In addition, an influenza pandemic could result in 200,000 to 2 million deaths in the United States, depending on its severity. The Homeland Security Council (HSC) took an active approach to this potential disaster by, among other things, issuing the National Strategy for Pandemic Influenza (National Pandemic Strategy) in November 2005, and the National Pandemic Implementation Plan in May 2006. The National Pandemic Strategy is intended to provide a high-level overview of the approach that the federal government will take to prepare for and respond to an influenza pandemic. It also provides expectations for nonfederal entities—including state, local, and tribal governments; the private sector; international partners; and individuals—to prepare themselves and their communities. The National Pandemic Implementation Plan is intended to lay out broad implementation requirements and responsibilities among the appropriate federal agencies and clearly define expectations for nonfederal entities. The plan contains 324 action items related to these requirements, responsibilities, and expectations, most of which were to be completed before or by May 2009. HSC publicly reported on the status of the action items that were to be completed by 6 months, 1 year, and 2 years in December 2006, July 2007, and October 2008, respectively. HSC indicated in its October 2008 progress report that 75 percent of the action items have been completed. At the request of the House Homeland Security Committee, we have ongoing work assessing the status of implementing this plan. 
Federal government leadership roles and responsibilities for pandemic preparedness and response are evolving, and will require further testing before the relationships among the many federal leadership positions are well understood. Such clarity in leadership is even more crucial now, given the change in administration and the associated transition of senior federal officials. Most of these federal leadership roles involve shared responsibilities between the Department of Health and Human Services (HHS) and the Department of Homeland Security (DHS), and it is not clear how these would work in practice. According to the National Pandemic Strategy and Plan, the Secretary of HHS is to lead the federal medical response to a pandemic, and the Secretary of Homeland Security will lead the overall domestic incident management and federal coordination. In addition, under the Post-Katrina Emergency Management Reform Act of 2006, the Administrator of the Federal Emergency Management Agency (FEMA) was designated as the principal domestic emergency management advisor to the President, the HSC, and the Secretary of Homeland Security, adding further complexity to the leadership structure in the case of a pandemic. To assist in planning and coordinating efforts to respond to a pandemic, in December 2006 the Secretary of Homeland Security predesignated a national Principal Federal Official (PFO) for influenza pandemic and established five pandemic regions each with a regional PFO and Federal Coordinating Officers (FCO) for influenza pandemic. PFOs are responsible for facilitating federal domestic incident planning and coordination, and FCOs are responsible for coordinating federal resources support in a presidentially-declared major disaster or emergency. However, the relationship of these roles to each other as well as with other leadership roles in a pandemic is unclear. 
Moreover, as we testified in July 2007, state and local first responders were still uncertain about the need for both FCOs and PFOs and how they would work together in disaster response. Accordingly, we recommended in our August 2007 report on federal leadership roles and the National Pandemic Strategy that DHS and HHS develop rigorous testing, training, and exercises for influenza pandemic to ensure that federal leadership roles and responsibilities for a pandemic are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. In response to our recommendation, HHS and DHS officials stated in January 2009 that several influenza pandemic exercises had been conducted since November 2007 that involved both agencies and other federal officials, but it is unclear whether these exercises rigorously tested federal leadership roles in a pandemic. In addition to concerns about clarifying federal roles and responsibilities for a pandemic and how shared leadership roles would work in practice, private sector officials told us that they are unclear about the respective roles and responsibilities of the federal and state governments during a pandemic emergency. The National Pandemic Implementation Plan states that in the event of an influenza pandemic, the distributed nature and sheer burden of the disease across the nation would mean that the federal government’s support to any particular community is likely to be limited, with the primary response to a pandemic coming from states and local communities. Further, federal and private sector representatives we interviewed at the time of our October 2007 report identified several key challenges they face in coordinating federal and private sector efforts to protect the nation’s critical infrastructure in the event of an influenza pandemic. 
One of these was a lack of clarity about the roles and responsibilities of federal and state governments on issues such as state border closures and influenza pandemic vaccine distribution.

Coordination Mechanisms

Mechanisms and networks for collaboration and coordination on pandemic preparedness between federal and state governments and the private sector exist, but they could be better utilized. In some instances, the federal and private sectors are working together through a set of coordinating councils, including sector-specific and cross-sector councils. To help protect the nation's critical infrastructure, DHS created these coordinating councils as the primary means of coordinating government and private sector efforts for industry sectors such as energy, food and agriculture, telecommunications, transportation, and water. Our October 2007 report found that DHS has used these critical infrastructure coordinating councils primarily to share pandemic information across sectors and government levels rather than to address many of the challenges identified by sector representatives, such as clarifying the roles and responsibilities between federal and state governments. We recommended in the October 2007 report that DHS encourage the councils to consider and address the range of coordination challenges in a potential influenza pandemic between the public and private sectors for critical infrastructure. DHS concurred with our recommendation and DHS officials informed us at the time of our February 2009 report that the department was working on initiatives to address it, such as developing pandemic contingency plan guidance tailored to each of the critical infrastructure sectors, and holding a series of "webinars" with a number of the sectors. Federal executive boards (FEB) bring together federal agency and community leaders in major metropolitan areas outside of Washington, D.C., to discuss issues of common interest, including an influenza pandemic.
The Office of Personnel Management (OPM), which provides direction to the FEBs, and the FEBs have designated emergency preparedness, security, and safety as an FEB core function. The FEBs’ emergency support role with its regional focus may make the boards a valuable asset in pandemic preparedness and response. As a natural outgrowth of their general civic activities and through activities such as hosting emergency preparedness training, some of the boards have established relationships with, for example, federal, state, and local governments; emergency management officials; first responders; and health officials in their communities. In a May 2007 report on the FEBs’ ability to contribute to emergency operations, we found that many of the selected FEBs included in our review were building capacity for influenza pandemic response within their member agencies and community organizations by hosting influenza pandemic training and exercises. We recommended that, since FEBs are well positioned within local communities to bring together federal agency and community leaders, the Director of OPM work with FEMA to formally define the FEBs’ role in emergency planning and response. As a result of our recommendation, FEBs were included in the National Response Framework (NRF) in January 2008 as one of the regional support structures that have the potential to contribute to development of situational awareness during an emergency. OPM and FEMA also signed a memorandum of understanding in August 2008 in which FEBs and FEMA agreed to work collaboratively in carrying out their respective roles in the promotion of the national emergency response system. International disease surveillance and detection efforts serve as an early warning system that could prevent the spread of an influenza pandemic outbreak. 
The United States and its international partners are involved in efforts to improve pandemic surveillance, including diagnostic capabilities, so that outbreaks can be quickly detected. Yet, as we reported in 2007, international capacity for surveillance has many weaknesses, particularly in developing countries. As a result, assessments of the risks of the emergence of an influenza pandemic by U.S. agencies and international organizations, which were used to target assistance to countries at risk, were based on insufficiently detailed or incomplete information, limiting their value for comprehensive comparisons of risk levels by country.

While the National Pandemic Strategy and National Pandemic Implementation Plan are important first steps in guiding national preparedness, important gaps exist that could hinder the ability of key stakeholders to effectively execute their responsibilities. In our August 2007 report on the National Pandemic Strategy and Implementation Plan, we found that while these documents are an important first step in guiding national preparedness, they do not fully address all six characteristics of an effective national strategy, as identified in our work. The documents fully address only one of the six characteristics, by reflecting a clear description and understanding of problems to be addressed. Further, the National Pandemic Strategy and Implementation Plan do not address one characteristic at all; they contain no discussion of what it will cost, where resources will be targeted to achieve the maximum benefits, and how benefits, risks, and costs will be balanced. Moreover, the documents do not provide a picture of priorities or how adjustments might be made in view of resource constraints. Although the remaining four characteristics are partially addressed, important gaps exist that could hinder the ability of key stakeholders to effectively execute their responsibilities.
For example, state and local jurisdictions that will play crucial roles in preparing for and responding to a pandemic were not directly involved in developing the National Pandemic Implementation Plan, even though it relies on these stakeholders’ efforts. Stakeholder involvement during the planning process is important to ensure that the federal government’s and nonfederal entities’ responsibilities are clearly understood and agreed upon. Further, relationships and priorities among actions were not clearly described, performance measures were not always linked to results, and insufficient information was provided about how the documents are integrated with other response-related plans, such as the NRF. We recommended that HSC establish a process for updating the National Pandemic Implementation Plan and that the updated plan should address these and other gaps. HSC did not comment on our recommendation and has not indicated if it plans to implement it. We reported in June 2008 that, according to CDC, all 50 states and the three localities that received federal pandemic funds have developed influenza pandemic plans and conducted pandemic exercises in accordance with federal funding guidance. A portion of the $5.62 billion that Congress appropriated in supplemental funding to HHS for pandemic preparedness in 2006—$600 million—was allocated for state and local planning and exercising. All of the 10 localities that we reviewed in depth had also developed plans and conducted exercises, and had incorporated lessons learned from pandemic exercises into their planning. However, an HHS-led interagency assessment of states’ plans found on average that states had “many major gaps” in their influenza pandemic plans in 16 of 22 priority areas, such as school closure policies and community containment, which are community-level interventions designed to reduce the transmission of a pandemic virus. 
The remaining six priority areas were rated as having “a few major gaps.” Subsequently, HHS led another interagency assessment of state influenza pandemic plans and reported in January 2009 that although they had made important progress, most states still had major gaps in their pandemic plans. As we had reported in June 2008, HHS, in coordination with DHS and other federal agencies, had convened a series of regional workshops for states in five influenza pandemic regions across the country. Because these workshops could be a useful model for sharing information and building relationships, we recommended that HHS and DHS, in coordination with other federal agencies, convene additional meetings with states to address the gaps in the states’ pandemic plans. As reported in February 2009, HHS and DHS generally concurred with our recommendation, but have not yet held these additional meetings. HHS and DHS indicated at the time of our February 2009 report that while no additional meetings had been planned, states will have to continuously update their pandemic plans and submit them for review. We have also reported on the need for more guidance from the federal government to help states and localities in their planning. In June 2008, we reported that although the federal government has provided a variety of guidance, officials of the states and localities we reviewed told us that they would welcome additional guidance from the federal government in a number of areas, such as community containment, to help them to better plan and exercise for an influenza pandemic. Other state and local officials have identified similar concerns. 
According to the National Governors Association’s (NGA) September 2008 issue brief on states’ pandemic preparedness, states are concerned about a wide range of school-related issues, including when to close schools or dismiss students, how to maintain curriculum continuity during closures, and how to identify the appropriate time at which classes could resume. NGA also reported that states generally have very little awareness of the status of disease outbreaks, either in real time or in near real time, to allow them to know precisely when to recommend a school closure or reopening in a particular area. NGA reported that states wanted more guidance in the following areas: (1) workforce policies for the health care, public safety, and private sectors; (2) schools; (3) situational awareness such as information on the arrival or departure of a disease in a particular state, county, or community; (4) public involvement; and (5) public-private sector engagement. The private sector has also been planning for an influenza pandemic, but many challenges remain. To better protect critical infrastructure, federal agencies and the private sector have worked together across a number of sectors to plan for a pandemic, including developing general pandemic preparedness guidance, such as checklists for continuity of business operations during a pandemic. However, federal and private sector representatives have acknowledged that sustaining preparedness and readiness efforts for an influenza pandemic is a major challenge, primarily because of the uncertainty associated with a pandemic, limited financial and human resources, and the need to balance pandemic preparedness with other, more immediate, priorities, such as responding to outbreaks of foodborne illnesses in the food sector and, now, the effects of the financial crisis. 
In our March 2007 report on preparedness for an influenza pandemic in one of these critical infrastructure sectors—financial markets—we found that despite significant progress in preparing markets to withstand potential disease pandemics, securities and banking regulators could take additional steps to improve the readiness of the securities markets. The seven organizations that we reviewed—which included exchanges, clearing organizations, and payment-system processors—were working on planning and preparation efforts to reduce the likelihood that a worldwide influenza pandemic would disrupt their critical operations. However, only one of the seven had completed a formal plan. To increase the likelihood that the securities markets will be able to function during a pandemic, we recommended that the Chairman, Federal Reserve; the Comptroller of the Currency; and the Chairman, Securities and Exchange Commission (SEC); consider taking additional actions to ensure that market participants adequately prepare for a pandemic outbreak. In response to our recommendation, the Federal Reserve and the Office of the Comptroller of the Currency, in conjunction with the Federal Financial Institutions Examination Council and the SEC, directed all banking organizations under their supervision to ensure that the pandemic plans the financial institutions have in place are adequate to maintain critical operations during a severe outbreak. SEC issued similar requirements to the major securities industry market organizations. Improving the nation’s response capability to catastrophic disasters, such as an influenza pandemic, is essential. Following a mass casualty event, health care systems would need the ability to adequately care for a large number of patients or patients with unusual or highly specialized medical needs. 
The ability of local or regional health care systems to deliver services could be compromised, at least in the short term, because the volume of patients would far exceed the available hospital beds, medical personnel, pharmaceuticals, equipment, and supplies. Further, in natural and man-made disasters, assistance from other states may be used to increase capacity, but in a pandemic, states would likely be reluctant to provide assistance to each other due to scarce resources and fears of infection. The $5.62 billion that Congress provided in supplemental funding to HHS in 2006 was for, among other things, (1) monitoring disease spread to support rapid response, (2) developing vaccines and vaccine production capacity, (3) stockpiling antivirals and other countermeasures, (4) upgrading state and local capacity, and (5) upgrading laboratories and research at CDC. Figure 2 shows that the majority of this supplemental funding—about 77 percent—was allocated for developing antivirals and vaccines for a pandemic, and purchasing medical supplies. Also, a portion of the funding for state and local preparedness—$170 million—was allocated for state antiviral purchases for their state stockpiles. An outbreak will require additional capacity in many areas, including the procurement of additional patient treatment space and the acquisition and distribution of medical and other critical supplies, such as antivirals and vaccines for an influenza pandemic. In a severe pandemic, the demand would exceed the available hospital bed capacity, which would be further challenged by the existing shortages of health care providers and their potential high rates of absenteeism. In addition, the availability of antivirals and vaccines could be inadequate to meet demand due to limited production, distribution, and administration capacity. The federal government has provided some guidance and funding to help states plan for additional capacity. 
For example, the federal government provided guidance for states to use when preparing for medical surge and on prioritizing target groups for an influenza pandemic vaccine. Some state officials reported, however, that they had not begun work on altered standards of care guidelines—that is, guidelines for providing care while allocating scarce equipment, supplies, and personnel in a way that saves the largest number of lives in a mass casualty event—or had not completed drafting such guidelines, because of the difficulty of addressing the medical, ethical, and legal issues involved. We recommended that HHS serve as a clearinghouse for sharing among the states altered standards of care guidelines developed by individual states or medical experts. HHS did not comment on the recommendation, and it has not indicated if it plans to implement it. Further, in our June 2008 report on state and local planning and exercising efforts for an influenza pandemic, we found that state and local officials wanted federal influenza pandemic guidance on facilitating medical surge, which was also one of the areas that the HHS-led assessment rated as having "many major gaps" nationally among states' influenza pandemic plans.

The National Pandemic Implementation Plan emphasizes that government and public health officials must communicate clearly and continuously with the public throughout a pandemic. Accordingly, HHS, DHS, and other federal agencies have shared pandemic-related information in a number of ways, such as through Web sites, guidance, and state summits and meetings, and are using established networks, including coordinating councils for critical infrastructure protection, to share information about pandemic preparedness, response, and recovery.
Federal agencies have established an influenza pandemic Web site (www.pandemicflu.gov) and disseminated pandemic preparedness checklists for workplaces, individuals and families, schools, health care and community organizations, and state and local governments. However, state and local officials from all of the states and localities we interviewed wanted additional influenza pandemic guidance from the federal government on specific topics, such as implementing community interventions, fatality management, and facilitating medical surge. Although the federal government has issued some guidance, it may not have reached state and local officials or may not have addressed the particular concerns or circumstances of the state and local officials we interviewed. In addition, private sector officials have told us that they would like clarification about the respective roles and responsibilities of the federal and state governments during an influenza pandemic emergency, such as for state border closures and influenza pandemic vaccine distribution. While the National Pandemic Strategy and Implementation Plan identify overarching goals and objectives for pandemic planning, the documents are not altogether clear on the roles, responsibilities, and requirements to carry out the plan. Some of the action items in the National Pandemic Implementation Plan, particularly those that are to be completed by state, local, and tribal governments or the private sector, do not identify an entity responsible for carrying out the action. Most of the plan's performance measures consist of actions to be completed, such as disseminating guidance, but the measures are not always clearly linked with intended results. This lack of clear linkages makes it difficult to ascertain whether progress has in fact been made toward achieving the national goals and objectives described in the National Pandemic Strategy and Implementation Plan.
Without a clear linkage to anticipated results, these measures of activities do not give an indication of whether the purpose of the activity is achieved. In addition, as discussed earlier, the National Pandemic Implementation Plan does not establish priorities among its 324 action items, which becomes especially important as agencies and other parties strive to effectively manage scarce resources and ensure that the most important steps are accomplished. Moreover, the National Pandemic Strategy and Implementation Plan do not provide information on the financial resources needed to implement them, which is one of six characteristics of an effective national strategy that we have identified. As a result, the documents do not provide a picture of priorities or how adjustments might be made in view of resource constraints.

The recent outbreak of H1N1 influenza virus should serve as a powerful reminder that the threat of pandemic influenza, which seemed to fade from public awareness in recent years, never really disappeared. While federal agencies have taken action on many of our recommendations, almost half the recommendations that we have made over the past 3 years are still not fully implemented. For one thing, it is essential, given the change in administration and the associated transition of senior federal officials, that the shared leadership roles established between HHS and DHS, along with other responsible federal officials, be tested through rigorous training and exercises. Likewise, DHS should continue to work with other federal agencies and private sector members of the critical infrastructure coordinating councils to help address the challenges of coordination and clarify roles and responsibilities of federal and state governments. DHS and HHS should also, in coordination with other federal agencies, continue to work with states and local governments to help them address identified gaps in their pandemic planning.
Moreover, the 3-year period covered by the National Pandemic Implementation Plan is now over, and it will be important for HSC to establish a process for updating the plan so that the update can address the gaps we have identified, as well as lessons learned from the current H1N1 outbreak. Influenza pandemics, as I noted earlier, differ from other types of disasters in that they are not necessarily discrete events. While the current H1N1 outbreak seems to have been relatively mild, it could return in a second wave this fall or winter in a more virulent form. Given this risk, the administration and federal agencies should turn their attention to filling in some of the gaps our work has pointed out, while time is still on our side.

Chairman Pryor, Senator Ensign, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. For further information regarding this statement, please contact Bernice Steinhardt, Director, Strategic Issues, at (202) 512-6543 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Sarah Veale, Assistant Director; Maya Chakko; Melissa Kornblau; Susan Sato; Ellen Grady; Karin Fangman; and members of GAO's Pandemic Working Group.

The Secretary of HHS should expeditiously finalize guidance to assist state and local jurisdictions to determine how to effectively use limited supplies of antivirals and pre-pandemic vaccine in a pandemic, including prioritizing target groups for pre-pandemic vaccine. In December 2008, HHS released final guidance on antiviral drug use during an influenza pandemic. HHS officials informed us that they are drafting the guidance on pre-pandemic influenza vaccination.
The Secretaries of HHS and Homeland Security should, in coordination with other federal agencies, convene additional meetings of the states in the five federal influenza pandemic regions to help them address identified gaps in their planning. HHS and DHS officials indicated that while no additional meetings are planned at this time, states will have to continuously update their pandemic plans and submit them for review. The Secretary of Homeland Security should work with sector-specific agencies and lead efforts to encourage the government and private sector members of the councils to consider and help address the challenges that will require coordination between the federal and private sectors involved with critical infrastructure and within the various sectors, in advance of, as well as during, a pandemic. DHS officials informed us that the department is working on initiatives, such as developing pandemic contingency plan guidance tailored to each of the critical infrastructure sectors, and holding a series of webinars with a number of the sectors. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy, GAO-07-781, August 14, 2007 (1) HHS and DHS officials stated that several influenza pandemic exercises had been conducted since November 2007 that involved both agencies and other federal officials, but it is unclear whether these exercises rigorously tested federal leadership roles in a pandemic. Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning, GAO-07-1257T, September 26, 2007 (1) The Secretaries of Homeland Security and HHS should work together to develop and conduct rigorous testing, training, and exercises for an influenza pandemic to ensure that the federal leadership roles are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. 
Once the leadership roles have been clarified through testing, training, and exercising, the Secretaries of Homeland Security and HHS should ensure that these roles are clearly understood by state, local, and tribal governments; the private and nonprofit sectors; and the international community. (2) The Homeland Security Council (HSC) should establish a specific process and time frame for updating the National Pandemic Implementation Plan. The process should involve key nonfederal stakeholders and incorporate lessons learned from exercises and other sources. The National Pandemic Implementation Plan should also be improved by including the following information in the next update: (a) resources and investments needed to complete the action items and where they should be targeted, (b) a process and schedule for monitoring and publicly reporting on progress made on completing the action items, (c) clearer linkages with other strategies and plans, and (d) clearer descriptions of relationships or priorities among action items and greater use of outcome-focused performance measures. (2) HSC did not comment on the recommendation and has not indicated whether it plans to implement it. Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response, GAO-07-652, June 11, 2007 (1) The Secretaries of Agriculture and Homeland Security should develop a memorandum of understanding that describes how the U.S. Department of Agriculture (USDA) and DHS will work together in the event of a declared presidential emergency or major disaster, or an Incident of National Significance, and test the effectiveness of this coordination during exercises. (1) Both USDA and DHS officials told us that they have taken preliminary steps to develop additional clarity and better define their coordination roles. For example, the two agencies meet regularly to discuss such coordination. 
(2) The Secretary of Agriculture should, in consultation with other federal agencies, states, and the poultry industry, identify the capabilities necessary to respond to a probable scenario or scenarios for an outbreak of highly pathogenic avian influenza. The Secretary of Agriculture should also use this information to develop a response plan that identifies the critical tasks for responding to the selected outbreak scenario and, for each task, identifies the responsible entities, the location of resources needed, time frames, and completion status. Finally, the Secretary of Agriculture should test these capabilities in ongoing exercises to identify gaps and ways to overcome those gaps. (2) USDA officials told us that the agency has created a draft preparedness and response plan that identifies federal, state, and local actions, timelines, and responsibilities for responding to highly pathogenic avian influenza, but the plan has not been issued yet. (3) The Secretary of Agriculture should develop standard criteria for the components of state response plans for highly pathogenic avian influenza, enabling states to develop more complete plans and enabling USDA officials to more effectively review them. (3) USDA told us that it has drafted large volumes of guidance documents that are available on a secure Web site. However, the guidance is still under review, and it is not clear what standard criteria from these documents USDA officials and states should apply when developing and reviewing plans. (4) The Secretary of Agriculture should focus additional work with states on how to overcome potential problems associated with unresolved issues, such as the difficulty in locating backyard birds and disposing of carcasses and materials. (4) USDA officials have told us that the agency has developed online tools to help states make effective decisions about carcass disposal. 
In addition, USDA has created a secure Internet site that contains draft guidance for disease response, including highly pathogenic avian influenza, and it includes a discussion about many of the unresolved issues. (5) The Secretary of Agriculture should determine the amount of antiviral medication that USDA would need in order to protect animal health responders, given various highly pathogenic avian influenza scenarios. The Secretary of Agriculture should also determine how to obtain and provide supplies within 24 hours of an outbreak. (5) USDA officials told us that the National Veterinary Stockpile now contains enough antiviral medication to protect 3,000 animal health responders for 40 days. However, USDA has yet to determine the number of individuals that would need medicine based on a calculation of those exposed to the virus under a specific scenario. Further, USDA officials told us that a contract for additional medication for the stockpile has not yet been secured, which would better ensure that medications are available in the event of an outbreak of highly pathogenic avian influenza. Influenza Pandemic: HHS Needs to Continue Its Actions and Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008. Influenza Pandemic: Federal Agencies Should Continue to Assist States to Address Gaps in Pandemic Planning. GAO-08-539. Washington, D.C.: June 19, 2008. Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008. Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. 
Influenza Pandemic: Federal Executive Boards’ Ability to Contribute to Pandemic Preparedness. GAO-07-1259T. Washington, D.C.: September 28, 2007. Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning. GAO-07-1257T. Washington, D.C.: September 26, 2007. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. Emergency Management Assistance Compact: Enhancing EMAC’s Collaborative and Administrative Capacity Should Improve National Disaster Response. GAO-07-854. Washington, D.C.: June 29, 2007. Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007. Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007. The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007. Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007. Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

As the recent outbreak of the H1N1 (swine flu) virus underscores, an influenza pandemic remains a real threat to our nation and to the world. Over the past 3 years, GAO has conducted a body of work to help the nation better prepare for a possible pandemic. In a February 2009 report, GAO synthesized the results of this work, pointing out that while the previous administration had taken a number of actions to plan for a pandemic, including developing a national strategy and implementation plan, much more needs to be done, and many gaps in preparedness and planning still remain. This statement is based on the February 2009 report, which synthesized the results of 11 reports and two testimonies covering six thematic areas: (1) leadership, authority, and coordination; (2) detecting threats and managing risks; (3) planning, training, and exercising; (4) capacity to respond and recover; (5) information sharing and communication; and (6) performance and accountability. (1) Leadership roles and responsibilities for an influenza pandemic need to be clarified, tested, and exercised, and existing coordination mechanisms, such as critical infrastructure coordinating councils, could be better utilized to address challenges in coordination between the federal, state, and local governments and the private sector in preparing for a pandemic. (2) Efforts are underway to improve the surveillance and detection of pandemic-related threats in humans and animals, but targeting assistance to countries at the greatest risk has been based on incomplete information, particularly from developing countries. 
(3) Pandemic planning and exercising have occurred at the federal, state, and local government levels, but important planning gaps remain at all levels of government. (4) Further actions are needed to address the capacity to respond to and recover from an influenza pandemic, which will require additional capacity in patient treatment space, and the acquisition and distribution of medical and other critical supplies, such as antivirals and vaccines. (5) Federal agencies have provided considerable guidance and pandemic-related information to state and local governments, but could augment their efforts with additional information on state border closures and other topics. (6) Performance monitoring and accountability for pandemic preparedness needs strengthening. For example, the May 2006 National Strategy for Pandemic Influenza Implementation Plan does not establish priorities among its 324 action items and does not provide information on the financial resources needed to implement them. The recent outbreak of the H1N1 influenza virus should serve as a powerful reminder that the threat of a pandemic influenza, which seemed to fade from public awareness in recent years, never really disappeared. While federal agencies have taken action on 13 of GAO's 23 recommendations, 10 of the recommendations that GAO has made over the past 3 years are still not fully implemented. With the possibility that the H1N1 virus could return in a more virulent form in a second wave in the fall or winter, the administration and federal agencies should turn their attention to filling in the planning and preparedness gaps GAO's work has pointed out.
As you know, in connection with requests that we determine the average cost of an overnight stay at the Executive Residence and provide information on related overtime compensation for domestic staff within the Executive Residence, the Subcommittee asked us to determine the number of persons who were overnight guests in the Executive Residence and the total number of overnight stays since January 1993. The White House has publicly stated that there were 938 overnight guests and 831 of their names have been reported in the media. The White House told us that the names of the remaining people were not provided in order to preserve the privacy of the First Family. We understand that White House staff or other government employees who stayed overnight in the Executive Residence are not included in the total of 938 overnight guests. The White House has not stated how many nights the listed guests stayed. We have made no progress in confirming the aggregate number of overnight guests and determining the number of stays within the Executive Residence because we have obtained no records from the White House. To respond to the Subcommittee’s request, we simply require access to documents or systems that will establish the aggregate number of guests and stays. If such documents or systems do not exist, we need to ascertain the overnight guests at the Executive Residence during the period indicated from source documents or systems maintained by the White House or others. Once the number of overnight guests is established, we need to determine the number of nights each overnight guest stayed at the Executive Residence. We can then determine and report the total number of overnight guests and stays since January 1993. We have discussed this review with White House Counsel staff and others, but have made no progress in obtaining the information needed to do the work requested. 
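Methodologically, the two figures the Subcommittee requested reduce to a simple aggregation once source records are in hand. The sketch below illustrates that counting under stated assumptions: the guest names, dates, and record layout (guest, arrival date, departure date) are invented for illustration and are not actual White House data.

```python
# Illustrative only: hypothetical stay records in the form
# (guest, arrival date, departure date). Nights are counted as
# departure minus arrival; a repeat visitor counts once as a guest.
from datetime import date

stays = [
    ("Guest A", date(1993, 2, 1), date(1993, 2, 3)),    # 2 nights
    ("Guest B", date(1993, 5, 10), date(1993, 5, 11)),  # 1 night
    ("Guest A", date(1994, 7, 4), date(1994, 7, 5)),    # 1 night, repeat guest
]

distinct_guests = len({name for name, _, _ in stays})
total_nights = sum((depart - arrive).days for _, arrive, depart in stays)

print(distinct_guests, total_nights)  # 2 4
```

Note that only the aggregates are reported; the names never leave the computation, which mirrors a request for counts rather than identities.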
On April 24, 1997, we met with officials from several White House offices to advise them of the Subcommittee’s request, including the request for information on overnight stays at the Executive Residence. On June 17, 1997, we provided the Associate Counsel to the President with an informal list of four areas related to the overnight stays that we wanted to discuss, including the nature, location, and people responsible for source documents and systems showing overnight stays at the Executive Residence. On July 11, 1997, we met with the Deputy Counsel and Associate Counsel to the President, at which time we discussed a number of areas, including the sources and methods used to compile the list of overnight guests that was previously made public. On July 28, 1997, the Associate Counsel sent us a list of names of those who were overnight guests at the Executive Residence and advised us that the list had been released to the public. We made several subsequent requests to the Associate Counsel for a follow-up meeting, and on September 19, 1997, we again met with the Deputy Counsel and Associate Counsel to discuss information relating to our review, but made no progress in obtaining any records. On October 16, 1997, we wrote to the Deputy Counsel to the President to insist on our access, by November 1, 1997, to all books, documents, papers, or other records related to the number of overnight guests at the Executive Residence and the beginning and ending dates of each guest stay since January 1993. We made that request to achieve our objective of counting and reporting the number of overnight guests and stays. The letter did not request the identity of the overnight guests or the reasons for their stay, although we recognize that the records that allow us to determine the number and duration of overnight stays may identify the guests by name. 
The Associate Counsel to the President replied by letter of October 23, 1997, that she and others had compiled the previously published list of overnight guests from documentation that included materials belonging to the First Family, including “personal and private correspondence.” The letter characterized our request as one to “gain access to private and personal papers of the First Family.” In expressing concern about GAO inspecting these materials, the Associate Counsel expressed a willingness to continue discussing the matter, but as of today we have received no records that would enable us to provide the Subcommittee with the requested information on overnight guests and stays at the Executive Residence. We are not unmindful of the sensitivity of using materials of the First Family in performing our review. Accordingly, we have been and continue to be open to reviewing other materials to determine the number and duration of overnight guests and stays at the Executive Residence. In this connection, our letter did not request the “private and personal papers of the First Family” or any other specific papers of the White House. We only requested documents “related to the number of overnight guests at the Executive Residence and the beginning and ending dates of each guest stay since January 1993.” At the invitation of the Associate Counsel, we met yesterday with White House staff, including the Deputy Counsel and the Associate Counsel, to discuss our request. At that meeting, we presented a letter proposing that we discuss the possibility of alternative sources of information with the Executive Residence’s Chief Usher, Administrative Assistant, Head Housekeeper, and others. During our discussions, the Deputy Counsel made clear that she was not representing that there were no other sources of the number of guests and stays, only that the list of guests released by the White House was compiled from private materials. 
She also stated that there was no concern about GAO determining the aggregate number of guests and stays from non-private materials. The White House is considering our formal request to discuss alternative sources of information with the Chief Usher and others. There were two legal issues raised in passing in the Associate Counsel’s October 23 letter. She first argued that a privacy interest protects the First Family’s notes and correspondence. Second, she reminded us that our statutory right of access encompasses “agency records” and advised that this right of access does not reach records of the First Family. I will briefly discuss each of these issues in turn. The Associate Counsel argues that GAO is seeking access to the “private and personal papers of the First Family,” suggesting that Presidential privacy interests shield these documents from scrutiny. The Supreme Court has recognized that, while the President has voluntarily surrendered the privacy accorded non-public figures, the President and other public officials “are not wholly without constitutionally protected privacy rights in matters of personal life unrelated to any acts done by them in their public capacity.” Nixon v. Administrator of General Services, 433 U.S. 425, 455, 457 (1977). This privacy interest is qualified—any intrusion must be weighed against the congressional, public, or other interest in reviewing the private materials. Id. at 458–465 (President’s privacy interest in private documents and tape recorded conversations outweighed by limited intrusion by archivists to separate private from non-private materials, the lack of an alternative for separating private from other materials, the public interest in preserving historical materials mixed with private materials, and other factors); Nixon v. Freeman, 670 F.2d 346, 354, 362-3 (D.C. Cir. 1982), cert. denied, 459 U.S. 
1035 (1982) (President’s privacy interest in tape recorded conversations and in tape recorded diaries outweighed by limitations on proposed intrusions and other factors); Dellums v. Powell, 642 F.2d 1351, 1354, 1362-3 (D.C. Cir. 1980) (President’s common law privacy interests entitled to considerable measure of deference by courts, but may be outweighed by competing interests). The power of Congress to investigate and obtain records in aid of its investigation is as broad as its power to legislate. McGrain v. Daugherty, 273 U.S. 135 (1927). When the executive branch withholds information from Congress based on an assertion of Presidential privacy or other protected interest, the courts have balanced this interest against the congressional need for the information. See, e.g., United States v. AT&T, 567 F.2d 121 (D.C. Cir. 1977), in which the court sought to balance the Congress’ interest in assuring the proper expenditure of appropriated funds and the executive branch’s interest in protecting the national security (requests from FBI to AT&T for warrantless wiretaps). A request by the relevant Subcommittee of the House Committee on Appropriations for workload information—how many people are staying overnight—at a taxpayer-funded establishment, the Executive Residence, is clearly a suitable congressional inquiry. See United States v. AT&T, 551 F.2d 384, 393 (D.C. Cir. 1976). There is no allegation that Congress is seeking to “expose for the sake of exposure,” id.; in fact, the request is tailored to include aggregate numbers of Executive Residence guests and stays—not the identification of personal visitors or other private information. It is also significant that the intrusion here is at most minimal. GAO has proposed that it would not remove copies or original documents from the White House premises, but would merely use the materials to determine aggregate numbers. 
GAO’s record of protecting confidential information is exemplary; careful observation of confidentiality restrictions is necessary for GAO to do its work. Finally, access to what the White House considers “private” materials is only necessary if they are the only source for the requested information. It is important to make clear the authority under which GAO is performing the review of overnight guests and stays at the Executive Residence. As previously stated, your request asked GAO to conduct several assignments. The first—an audit of five categories of unvouchered expenditures of the President and the Vice President—is conducted pursuant to sections 105(d) and 106(b) of title 3, United States Code. The statute specifically addresses unvouchered expenditures, describes the scope of our audit, establishes our right of access to records relating to the unvouchered expenditures, and limits our reporting responsibilities. In contrast, our review of the number of overnight guests and stays in the Executive Residence falls under section 712 of title 31, United States Code. Paragraph (1) of section 712 authorizes GAO to investigate all matters related to the use of public money. Paragraphs (4) and (5) of section 712 direct GAO to investigate and report matters ordered by a congressional committee having jurisdiction over appropriations and to give the help and information the committee requests. Access to records for reviews performed under section 712 is authorized by section 716 of title 31, United States Code. Section 716 provides that each agency shall give GAO the information it requires concerning the duties, powers, activities, organization, and financial transactions of the agency. GAO may inspect agency records to get the information. 
As a result of the 1982 codification of title 31 of the United States Code, sections 101 and 701 define the term “agency” for purposes of sections 712 and 716 to mean a “department, agency, or instrumentality” of the United States Government, but not the legislative branch or the Supreme Court. As broad as the term “agency” is now defined, the statutory language before the codification emphasizes its expansiveness. Before the codification, the relevant term was “department or establishment,” defined in 31 U.S.C. 2 (1976) to include “any executive department, independent commission, board, bureau, office, agency, or other establishment of the Government.” The 1982 codification of title 31 restated, without substantive change, the laws enacted before April 16, 1982, that were replaced by the codification. See Public Law 97-258, § 4(a), 96 Stat. 1067 (1982). Similarly, the language of the access provision before codification illustrates how encompassing the term “records” is as used in section 716. The predecessor to section 716, 31 U.S.C. 54 (1976), used not just the term “records” but also such terms as “correspondence,” “papers,” and “written information” to describe the reach of our access authority. In analyzing the scope of our authority under sections 712 and 716, we are aware that the Executive Residence is not considered an “agency” for purposes of the Freedom of Information Act (FOIA). Sweetland v. Walters, 60 F.3d 852 (D.C. Cir. 1995). The FOIA definition of “agency” as interpreted by the courts has no relevance to the definition of “agency” in title 31. The FOIA controls public access to government information for the purpose of furthering the public’s understanding of government operations. In light of that purpose, the Congress has explicitly ratified an interpretation of the term “agency” that excludes units of the Executive Office of the President with no substantial independent authority to direct executive branch officials. Armstrong v. 
Executive Office of the President, 90 F.3d 553, 557–8 (D.C. Cir. 1996). Here, disclosure to GAO is solely in aid of the congressional power to oversee, investigate, and legislate. Over the last century, the Supreme Court has characterized the scope of congressional power to investigate as penetrating and far-reaching as the potential power to enact legislation, oversee the operation of government, and appropriate funds under the Constitution. Barenblatt v. United States, 360 U.S. 109, 111 (1959); McGrain v. Daugherty, 273 U.S. 135 (1927). The Court presumes a valid legislative purpose for congressional inquiries, In re Chapman, 166 U.S. 661, 670 (1897), and will consider such sources as a committee chairman’s opening statement to support the existence of a legislative purpose, Wilkinson v. United States, 365 U.S. 399, 410 (1961). This has been true even when the witness at a congressional investigation objected to the committee’s questions on the grounds that they related to private affairs. Sinclair v. United States, 279 U.S. 263, 295 (1929). The Executive Residence is a government facility staffed by federal employees and funded with appropriated tax dollars. This Subcommittee considers budget requests by the President for the operation and maintenance of the Executive Residence. In so doing, it desires to have information relating to the operation of the Executive Residence and the workload of the government employees responsible for maintaining it, including overtime and duties associated with overnight guests, as well as the number of overnight guests and stays. 
Accordingly, for purposes of our audit and access authority, we believe the Executive Residence is an “establishment” of the United States and that papers, correspondence, and other written materials documenting its use, created by the President or First Lady or received by them from private parties, and used by government employees to compile statistics released to the public, are “records” as that term is used in 31 U.S.C. 716. Mr. Chairman, that concludes my statement. I will be pleased to answer questions you or other members of the Subcommittee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
GAO discussed the status of its work on the number of overnight guests and stays in the Executive Residence at the White House, focusing on: (1) an audit of certain fiscal year 1996 expenditures, including those to operate the Executive Residence, that are accounted for solely on the certificate of the President or the Vice President and referred to as unvouchered activities; (2) a review of certain processes and procedures relating to reimbursable expenditures of the Executive Residence, such as those for political events; and (3) a review of the number and cost of overnight stays in the Executive Residence since January 1993. GAO noted that: (1) its first two assignments in this area, relating to audits of certain 1996 expenditures and a review of certain processes and procedures relating to reimbursable expenditures, are proceeding on schedule; (2) its third assignment relates to the number and costs of overnight stays in the Executive Residence since January 1993; (3) GAO requested access to all books, documents, papers, or other records related to the number of overnight guests at the Executive Residence to achieve its objective of counting and reporting the number of overnight guests and stays; (4) in an October 23, 1997, letter, the Associate Counsel to the President characterized GAO's request as one to gain access to private and personal papers of the First Family, but expressed a willingness to continue discussing the matter; (5) GAO is open to reviewing other materials to determine the number and duration of overnight guests and stays at the Executive Residence; (6) in the October 1997 letter, the Associate Counsel argued that a privacy interest protects the First Family's notes and correspondence, and stated that GAO's statutory right of access encompasses agency records and does not reach records of the First Family; (7) GAO believes that disclosure of the information is solely in aid of the congressional power to enact legislation, oversee the 
operation of government, and appropriate funds under the Constitution, with the presumption of a valid legislative purpose; (8) the Executive Residence is a government facility staffed by federal employees and funded with appropriated tax dollars; (9) this subcommittee considers budget requests by the President for the operation and maintenance of the Executive Residence, and in so doing, it desires to have information relating to the operation of the Executive Residence and the workload of the government employees responsible for maintaining it, including overtime and duties associated with overnight guests, as well as the number of overnight guests and stays; and (10) accordingly, for purposes of GAO's audit and access authority, GAO believes the papers, correspondence, and other written materials documenting use of the Executive Residence, created by the President or First Lady or received by them from private parties, and used by government employees to compile statistics released to the public, are records as that term is used in 31 U.S.C. 716.
DHS, one of the 24 Chief Financial Officer (CFO) Act agencies, was formed from 22 agencies, including the following agencies or parts of agencies: the U.S. Customs Service, which was formerly located in the Department of the Treasury; the Federal Emergency Management Agency; the Coast Guard; and most of the Immigration and Naturalization Service, which was formerly located in the Department of Justice. Tables 1 and 2 show the representation of career employees at DHS and governmentwide as of September 2003 and September 2007, respectively. Between 2003 and 2007, nearly all minority groups saw slight percentage-point increases in representation DHS-wide. The greatest change DHS-wide was a decrease in White men. The greatest differences between the governmentwide data and DHS-wide data were among Hispanic men—in both 2003 and 2007 the representation of Hispanic men was more than 10 percentage points higher than the representation governmentwide—and among White women, whose representation was nearly 10 percentage points lower. For both 2003 and 2007, the representation of women at DHS, with the exception of Hispanic women, was below the governmentwide level, the biggest difference being among White and African American women. See appendix I for a breakdown of the DHS-wide representation data by DHS components. Taking a closer look at the DHS-wide data, table 3 shows the representation of career employees at DHS by pay plan/grade as of September 2003. Minority employees generally represented less than 10 percent of career employees among all the pay plans and grades. Examples of the exceptions included the representation of Hispanic men in the blue collar pay plan, grades 5 to 8, and grades 9 to 12, where their representation ranged from 15 to over 21 percent. In grades 1 to 4, African American women represented over 17 percent, and Hispanic women represented nearly 13 percent of employees.
Among the higher grades and pay plans—grades GS-13 to GS-15, SES, and SL/ST—the percentage of White women ranged from over 17 to more than 22 percent, and no minority group exceeded 9 percent of career employees. By 2007, the representation of career employees at DHS by pay plan/grade showed only slight increases and decreases. Exceptions, as shown in table 4, were the percentage of White men in the SL/ST pay plan, which increased from 0 percent in 2003 to more than 65 percent in 2007, and the percentage of White women in the same pay plan, which increased over this period from 0 percent to almost 28 percent. The representation of minorities was still less than 10 percent in grades GS-13 and above. As we have reported, leadership in agencies across the federal government, especially at senior executive levels, is essential to providing accountable, committed, consistent, and sustained attention to human capital and related organizational transformation issues. Having a diverse SES corps, which generally represents the most experienced segment of the federal workforce, can be an organizational strength that can bring a wider variety of perspectives and approaches to bear on policy development and implementation, strategic planning, problem solving, and decision making. The members of the career SES are the highest nonpolitically appointed leaders in the federal workforce, and we recently looked more closely at their representation governmentwide. Table 5 shows the total number of career SES and the percentage of women and minority SES in DHS and at the 23 other CFO Act agencies in 2003 and 2007. Overall at DHS, the total number of SES increased by more than 50 percent between 2003 and 2007, rising from 208 to 325. Within that total, the percentage of women increased from 21.2 percent to 26.2 percent.
In 2003, the representation of women within individual CFO Act agencies ranged from 15.9 to 40.7 percent, with more than two-thirds of the agencies having at least 25 percent women—DHS had 21.2 percent. The representation of minorities within the CFO Act agencies in 2003 ranged from 7.2 to 42.0 percent, with more than two-thirds having at least 15 percent minorities—DHS had 15.9 percent. In 2007, the representation of women at these agencies ranged from 19.9 to 45.5 percent, with more than half of the agencies having 30 percent or more women—DHS had 26.2 percent. For minority representation, CFO Act agency rates ranged from 6.1 to 43.8 percent, with two-thirds having at least 15 percent minorities—DHS had 13.2 percent. Minority representation in the career SES governmentwide generally increased by less than 1 percentage point from September 2003 through September 2007, as shown in table 6. During this period, the representation of men in the SES decreased by 2.6 percentage points, and White men by 2.7 percentage points, whereas the percentage of women increased by 2.7 percentage points. At DHS, the extent of change in the representation of career SES employees was generally greater than the change that occurred in the governmentwide SES from September 2003 through September 2007. For example, as shown in table 7, the percentage of White women in DHS’s career SES was 23.1 percent in 2007, 5.8 percentage points above the 2003 rate of 17.3 percent. White men and African American men experienced the largest decreases in their representation in the career SES by 2007, dropping 3.1 and 2.3 percentage points, respectively. Overall, minorities decreased from 15.9 to 13.2 percent. The vast majority of potential successors for career SES positions will come from the GS pay plan at grades GS-15 and GS-14, the levels that serve as the SES developmental pool.
Table 8 shows the changes in the representation of the SES developmental pool governmentwide from September 2003 to September 2007. Governmentwide, the total number of employees in the SES developmental pool decreased slightly from September 2003 to September 2007. The greatest change was a decrease of 5.3 percentage points in the representation of White men from 2003 to 2007. The percentage of women in the governmentwide SES developmental pool increased by 3.9 percentage points between 2003 and 2007, but the percentage of men in this developmental pool decreased by this same amount. By 2007, the representation of each of the minority groups in the governmentwide SES developmental pool had increased by 1.3 percentage points or less, resulting in an overall increase of 3.7 percentage points for minorities. In contrast to the slight governmentwide decline, the number of employees in DHS’s SES developmental pool increased by more than half. The two greatest changes in representation within DHS’s career SES developmental pool from September 2003 through September 2007 were for White men, which decreased by 4.2 percentage points, and minorities, which increased by 4.6 percentage points, of which African American women accounted for an increase of 1.8 percentage points, as shown in table 9. While we did not analyze factors that contributed to changes in DHS’s workforce from September 2003 through September 2007, OPM and the Equal Employment Opportunity Commission (EEOC) in their oversight roles require federal agencies, including DHS, to analyze their workforces. Both OPM and EEOC also report on governmentwide representation levels. Under OPM’s regulations implementing the Federal Equal Opportunity Recruitment Program (FEORP), agencies are required to determine where representation levels for covered groups are lower than the civilian labor force (CLF) and take steps to address those differences.
EEOC’s Management Directive 715 (MD-715) provides guidance and standards to federal agencies for establishing and maintaining effective equal employment opportunity (EEO) programs, including a framework for executive branch agencies to help ensure effective management, accountability, and self-analysis to determine whether barriers to EEO exist and to identify and develop strategies to mitigate or eliminate those barriers to participation. Specifically, EEOC’s MD-715 states that agency personnel programs and policies should be evaluated regularly to ascertain whether such programs have any barriers that tend to limit or restrict equitable opportunities for open competition in the workplace. The initial step is for agencies to analyze their workforce data against designated benchmarks, including the CLF. If analysis of their workforce profiles identifies potential barriers, agencies are to examine all related policies, procedures, and practices to determine whether an actual barrier exists. EEOC requires agencies to report the results of their analyses annually. A high-performance organization relies on a dynamic workforce with the requisite talents and up-to-date skills to ensure that it is equipped to accomplish its mission and achieve its goals. Such organizations typically foster a work environment in which people are enabled and motivated to contribute to continuous learning and improvement as well as mission accomplishment, and which provides both accountability and fairness for all employees. In addition, the approach that a high-performance organization takes toward its workforce is inclusive and draws on the strengths of employees at all levels and of all backgrounds. This approach is consistent with that of diversity management.
We have defined diversity management as a process intended to create and maintain a positive work environment where the similarities and differences of individuals are valued, so that all can reach their potential and maximize their contributions to an organization’s strategic goals and objectives. In our past work, we identified nine leading practices in diversity management that experts agreed should be present in some combination for creating and managing diversity. The leading diversity management practices identified by a majority of experts were as follows:

Top leadership commitment—a vision of diversity demonstrated and communicated throughout an organization by top-level management.

Diversity as part of an organization’s strategic plan—a diversity strategy and plan that are developed and aligned with the organization’s strategic plan.

Diversity linked to performance—the understanding that a more diverse and inclusive work environment can yield greater productivity and help improve individual and organizational performance.

Measurement—a set of quantitative and qualitative measures of the effect of various aspects of an overall diversity program.

Accountability—the means to ensure that leaders are responsible for diversity by linking their performance assessment and compensation to the progress of diversity initiatives.

Succession planning—an ongoing, strategic process for identifying and developing a diverse pool of talent for an organization’s potential future leaders.

Recruitment—the process of attracting a supply of qualified, diverse applicants for employment.

Employee involvement—the contribution of employees in driving diversity throughout an organization.

Diversity training—organizational efforts to inform and educate management and staff about diversity.

DHS’s Acting Chief Human Capital Officer (CHCO) testified in April 2008 on actions the department is taking to create and manage its workforce.
These actions are consistent with leading diversity management practices in four areas: (1) a diversity strategy as part of its strategic plan, (2) recruitment, (3) employee involvement, and (4) succession planning. We have not conducted a review of DHS’s diversity management efforts; therefore, we cannot comment on the effectiveness of DHS’s implementation of these practices. In addition, the omission of a particular practice from this discussion is not meant to imply success or lack of success by DHS in implementing it. Diversity strategy as part of the strategic plan. DHS established an objective in its 2004 Strategic Plan to “ensure effective recruitment, development, compensation, succession management and leadership of a diverse workforce to provide optimal service at a responsible cost.” In an August 2007 progress report on implementation of mission and management functions, we indicated that DHS had taken action to satisfy most of the elements related to developing a results-oriented strategic human capital plan. We noted that in addition to the strategic human capital plan that DHS issued in October 2004, which covers 2004 to 2008, the department developed a fiscal year 2007 and 2008 Human Capital Operational Plan, which provides measurable goals that the department is using to gauge the effectiveness of its human capital efforts. DHS officials provided us with a copy of DHS’s Corporate Diversity Strategy, issued in March 2008, and stated that the department has developed a Diversity Action Plan, which it plans to submit to the DHS Diversity Council for approval in May 2008. The Diversity Strategy outlines DHS’s policy of encouraging a diverse workforce and the value of a diverse workforce in accomplishing DHS’s mission. Among the guiding principles are integrating diversity into the organizational culture rather than treating it as a stand-alone program and recognizing that diversity is a matter of equity and fairness.
To help ensure accountability, among other things, the strategy calls for establishing a senior-level Diversity Council, which DHS officials reported has been done; integrating diversity strategies into DHS’s comprehensive human resource operation; and ensuring that all DHS leaders have access to the training, tools, and support needed to serve as de facto diversity champions. Recruitment. In his April 2008 testimony, DHS’s Acting CHCO stated that, to achieve the strategic plan objective of a diverse workforce, recruitment strategies have been implemented at the department and component levels to improve the diversity of the DHS talent pool. DHS officials told us that the department partners with several minority-serving institutions and participates in several intern, scholarship, and fellowship programs; officials provided a recruitment brochure. These officials also indicated that in October 2007, DHS began a Veterans’ Outreach Program as a means of recruiting a diverse workforce. This outreach strategy consists of (1) a Web site for one-stop employment and other information, (2) an advisory forum of external veterans as stakeholders, and (3) training in veterans’ preference and reemployment rights for EEO and human capital specialists. DHS has also created an SES-level Director of Recruiting and Diversity within the Chief Human Capital Office. Employee involvement. Employees can make valuable contributions in driving diversity throughout an organization. Our work on leading diversity management practices identified several forms these contributions can take, including mentoring and community outreach with private employers, public schools, and universities. DHS officials described actions the department is taking to provide opportunities for employees at various levels throughout the department to receive mentoring.
In addition, DHS officials stated that they have developed formal partnerships with minority professional service organizations, including the Urban League’s Black Executive Exchange Program, through which DHS provides speakers who participate in outreach programs at historically black colleges and universities. DHS officials indicated they are pursuing similar partnerships with the National Association of Hispanic Federal Executives, the African American Federal Executive Association, and the Asian American Executive Network. Succession planning. Succession planning is a comprehensive, ongoing strategic process that provides for forecasting an organization’s senior leadership and other needs; identifying and developing candidates who have the potential to be future leaders; and selecting individuals from among a diverse pool of qualified candidates to meet executive resource needs. Succession planning and management can help an organization become what it needs to be, rather than simply recreate the existing organization. Leading organizations go beyond a “replacement” approach that focuses on identifying particular individuals as possible successors for specific top-ranking positions and engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future capacity. They anticipate the need for leaders and other key employees with the necessary competencies to successfully meet the complex challenges of the 21st century. For DHS, in addition to the changes that will occur as a result of the upcoming new administration, several factors, including recent turnover and expected retirements, provide opportunities for DHS to affect the diversity of its workforce and highlight the importance of succession planning.
Recently, we reported that the overall attrition rates for permanent DHS employees (excluding SES and presidential appointees), at 8 percent and 7 percent in 2005 and 2006, respectively, exceeded the 4 percent average rate for all cabinet-level agencies. The highest attrition rates, about 14 to 17 percent, were among transportation security officers in DHS’s Transportation Security Administration. The attrition rate for SES and presidential appointees was also higher than the average senior-level attrition rate for all cabinet-level departments. As for retirements, about 20 percent of career employees at DHS as of fiscal year 2007 are projected to be eligible to retire by 2012, and certain key occupations within the department are expected to have high retirement eligibility rates, such as customs and border protection agents—about 51 percent. In 2006, OPM reported that approximately 60 percent of the executive branch’s 1.6 million white-collar employees and 90 percent of about 6,000 federal executives will be eligible for retirement over the next 10 years. Considering retirement eligibility and actual retirement rates of the SES is important because individuals normally do not enter the SES until well into their careers; thus SES retirement eligibility is much higher than for the workforce in general. If a significant number of SES members were to retire, it could result in a loss of leadership continuity, institutional knowledge, and expertise among the SES corps, with the degree of loss varying among agencies and occupations. Succession planning also is tied to the federal government’s opportunity to affect the diversity of the executive corps through new appointments. Racial, ethnic, and gender diversity in the SES is an important component for the effective operation of the government.
In September 2003, we reported that agencies in other countries use succession planning and management to achieve a more diverse workforce, maintain their leadership capacity, and increase the retention of high-potential staff. According to the Acting CHCO’s April 3, 2008, testimony and discussion with senior level human capital officials, the department is taking steps to develop a qualified and diverse pool of applicants for SES positions by preparing its mid-career employees through a variety of leadership development programs. These programs include the DHS SES Candidate Development Program (primarily for GS-15s) and the DHS Fellows Program (for GS-13s, GS-14s, and GS-15s). See appendix II for representation data for both programs since their inception. According to DHS officials, the DHS Fellows Program, initiated in 2006, is a competitive developmental program where participants are placed in high-visibility rotational assignments, receive training in such areas as leadership, and form small groups to work on specific projects. After completion of this 11-month program, participants remain in their current assignments but, according to DHS officials, are prepared for advancement when the opportunities arise. Participants in both of the DHS leadership programs receive mentoring and coaching and rotational assignments. However, according to DHS officials, employees at other levels of the organization can also participate in ad hoc mentoring and rotational assignments. Effective training and development programs can enhance the federal government’s ability to prepare its workforce and thereby achieve results. The efforts that DHS officials described are consistent with these practices. Chairman Thompson, Ranking Member King, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. 
For further information regarding this statement, please contact George Stalcup, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Belva Martin and Kiki Theodoropoulos, Assistant Directors; Karin Fangman; Mary Y. Martin; and Greg Wilmoth. Tables 10 and 11 below provide demographic data by race and gender on the Department of Homeland Security’s (DHS) career employees by DHS component for September 2003 and September 2007. In 2003 and 2007, the U.S. Customs and Border Protection (USCBP) and the U.S. Immigration and Customs Enforcement (USICE) had the highest percentage of Hispanic men, while the Transportation Security Administration (TSA) had the highest percentage of African American men. DHS officials stated that they have two formal leadership development programs to prepare future DHS leaders: the DHS Fellows Program for GS-13, GS-14, and GS-15 staff (an 11-month program) and the DHS Senior Executive Service (SES) Candidate Development Program, generally for GS-15s (an 18-month program). Tables 12 and 13 below provide a breakdown of the representation of women and minorities in each of these programs.

The Department of Homeland Security (DHS) was created from a disparate group of agencies with multiple missions, values, and cultures into a cabinet department whose goals are to, among other things, protect U.S. borders and infrastructure, improve intelligence and information sharing, and prevent and respond to potential terrorist attacks. GAO designated the implementation and transformation of DHS as a high-risk area in 2003, and it remains so. While DHS has made progress, it continues to face challenges in transforming into an effective, integrated organization.
In response to a request to provide information on diversity in DHS and steps DHS is taking to create and manage a diverse workforce, GAO is providing demographic data related to the federal government as a whole and DHS's workforce. GAO obtained these data from the Office of Personnel Management's (OPM) Central Personnel Data File (CPDF). GAO used its past work on leading diversity management practices (GAO-05-90) and reviewed data from DHS on its diversity management practices. Data in OPM's CPDF show that as of September 2007, the overall percentages of women and minorities have increased in the career SES governmentwide, the highest nonpolitically appointed leaders in the federal workforce, and the SES developmental pool for potential successors since September 2003. As part of GAO's recent analysis of the diversity of the SES and the SES developmental pool, GAO reviewed career, or permanent, SES appointments at DHS and DHS's SES developmental pool. During this 4-year period, the total number of career SES and those in the SES developmental pool for potential successors increased at DHS. The percentage of women in the SES increased, while the percentage of minorities decreased. For the SES developmental pool, the percentage of women and minorities increased. While GAO did not analyze the factors that contributed to changes in DHS's workforce for this period, OPM and the Equal Employment Opportunity Commission in their oversight roles require federal agencies, including DHS, to analyze their workforces. As part of a strategic human capital planning approach, agencies need to develop long-term strategies for acquiring, developing, motivating, and retaining a diverse workforce. An agency's human capital planning should address the demographic trends that the agency faces with its workforce, especially retirements, which provide opportunities for agencies to affect the diversity of their workforces. 
DHS reported taking steps to affect the diversity of its workforce. These steps are consistent with several leading diversity management practices: (1) a diversity strategy as part of its strategic plan, (2) recruitment, (3) employee involvement, and (4) succession planning. For example, DHS cited its use of intern programs for recruiting and its implementation of two leadership development programs for managing succession. GAO has not conducted a review of DHS's diversity management efforts; therefore, it cannot comment on the effectiveness of DHS's implementation of these practices.
State and local governments will likely face daunting fiscal challenges in the next few years, driven in large part by the growth in health-related costs. Medicaid and health insurance for state and local employees and retirees make up a large share of such costs. In contrast, our analysis shows that state and local governments on average would need to increase pension contribution rates to 9.3 percent of salaries—less than 0.5 percentage points more than the 9.0 percent contribution rate in 2006—to achieve healthy funding on an ongoing basis. With few exceptions, defined benefit pension plans still provide the primary pension benefit for most state and local workers. About 90 percent of full-time state and local employees participated in defined benefit pension plans as of 1998. A defined benefit plan determines benefit amounts by a formula that is generally based on such factors as years of employment, age at retirement, and salary level. A few states offer defined contribution or other types of plans as the primary retirement instrument. In fiscal year 2006, state and local government pension systems covered 18.4 million members and made periodic payments to 7.3 million beneficiaries, paying out $151.7 billion in benefits. Many state and local governments also offer retirees health care benefits—in addition to Medicare benefits provided by the federal government—the costs of which have been growing rapidly. One study estimated that state and local governments paid $20.7 billion in fiscal year 2004 for retiree health benefits. For retirees who are under age 65 (that is, not yet Medicare-eligible), many state and local employers provide access to group health coverage with varying levels of employer contributions. As of 2006, 14 states did not contribute to the premium for this coverage, while 14 states picked up the entire cost, and the remainder fell somewhere in between. For virtually all state and local retirees age 65 or older, Medicare provides the primary coverage.
Most state and local government employers provide supplemental coverage for Medicare-eligible retirees that covers prescription drugs. Both government employers and employees generally make contributions to fund state and local pension benefits. States follow statutes specifying contribution amounts or determine the contribution amount each legislative session. However, many state and local governments are statutorily required to make yearly contributions based either on actuarial calculations or on a statutorily specified amount. For plans in which employees are covered by Social Security, the median contribution rate in fiscal year 2006 was 8.5 percent of payroll for employers and 5 percent of pay for employees, in addition to 6.2 percent of payroll from both employers and employees to Social Security. For plans in which employees are not covered by Social Security, the median contribution rate was 11.5 percent of payroll for employers and 8 percent of pay for employees. Actuaries estimate the amount that will be needed to pay future benefits. The benefits that are attributable to past service are called the “actuarial accrued liabilities.” (In this report, the actuarial accrued liabilities are referred to as “liabilities.”) Actuaries calculate liabilities based on an actuarial cost method and a number of assumptions, including discount rates and worker and retiree mortality. Actuaries also estimate the “actuarial value of assets” that fund a plan (in this report, the actuarial value of assets is referred to as “assets”). The excess of actuarial accrued liabilities over the actuarial value of assets is referred to as the “unfunded actuarial accrued liability” or “unfunded liability.” Under accounting standards, such information is disclosed in financial statements. In contrast, the liability that is recognized on the balance sheet is the cumulative excess of annual benefit costs over contributions to the plan.
Certain amounts included in the actuarial accrued liability are not yet recognized as annual benefit costs under accounting standards, as they are amortized over several years. In a typical defined benefit pension plan, employer and employee contributions are made to a specific fund from which benefits will be paid. The yearly contributions from employers and employees are invested in the stock market, bonds, and other investments. Unlike most pension plans, retiree health benefits have generally been financed on a pay-as-you-go basis. Pay-as-you-go financing means that state and local governments have not set aside funds in a trust reserved for future retiree health costs. Instead, governments pay for each year’s retiree health benefits from the current year’s budget. The federal government has an interest in the funded status of state and local government retiree pensions and health care, even though it has not imposed the same funding and reporting requirements as it has on private sector pension plans. State and local government pension plans are not covered by most of the substantive requirements of the Employee Retirement Income Security Act of 1974 (ERISA), which apply to most private employer benefit plans, or by the insurance program operated by the Pension Benefit Guaranty Corporation (PBGC). Federal law generally does not require state and local governments to prefund or report on the funded status of pension plans or health care benefits. However, in order to receive preferential tax treatment, state and local pensions must comply with requirements of the Internal Revenue Code. In addition, the retirement income security of all Americans is an ongoing concern of the federal government. All states have legal protections for their pensions. The majority of states have constitutional provisions prescribing how pension trusts are to be funded, protected, managed, or governed.
The remaining states have pension protections in their statutes or recognize legal protections under common law. Legal protections usually apply to benefits for existing workers or benefits that have already accrued; thus, state and local governments generally can change the benefits for new hires. In contrast to pensions, retiree health benefits generally do not have the same constitutional or statutory protections. Instead, to the extent retiree health benefits are legally protected, it is generally because they have been collectively bargained and are subject to current labor contracts. Since the 1980s, the Governmental Accounting Standards Board (GASB) has maintained standards for accounting and financial reporting for state and local governments. GASB operates independently and has no authority to enforce the use of its standards. Still, many state laws require local governments to follow GASB standards, and bond raters do consider whether GASB standards are followed. Also, to receive a “clean” audit opinion under generally accepted accounting principles, state and local governments are required to follow GASB standards. These standards require reporting financial information on pensions, such as contributions and the ratio of assets to liabilities. In contrast to pensions, the financial status of retiree health care benefits has generally not been reported or even estimated actuarially until recently. However, new GASB standards (Statements 43 and 45) call for employers to quantify and report on the size of retiree health care benefit liabilities. The new health care reporting standards are being phased in over time to give smaller state and local government sponsors more time to generate estimates. Table 1 shows the respective GASB 43 and 45 effective dates, as well as the type of entity to which each statement applies. Understanding the financial health of pension plans can be confusing.
To help clarify, we found that three measures are key to understanding pension plans’ funded status. GASB standards require reporting all three of these measures. First, one can look at yearly contributions governments are making to their plans. Actuaries calculate yearly contribution amounts needed to maintain or improve the funded status of plans over time. Comparing this amount to the amount governments actually contribute indicates how well governments are keeping up with yearly funding needs. Two other measures, funded ratios and unfunded liabilities, both suggest the extent to which current assets can cover accrued benefits. These three measures should be viewed together and over time to get a complete picture of the funded status. The funded status measures of different plans cannot be compared to one another easily because different governments use different actuarial funding methods and assumptions to estimate them. Some officials we interviewed expressed confusion about how to understand the funded status of public pension plans. State and local governments report a significant amount of information on funding, required by GASB standards. The media often report various measures of the funded status without explaining the meaning of the terms or without enough context. In addition, governments have been reporting these funded status measures for pensions for years. However, the new accounting rules will also call on governments to report the funded status of retiree health benefits in a similar manner, even though many have not made any contributions to build assets to cover liabilities. We identified three key measures to help explain plans’ funded status: contributions, funded ratios, and unfunded liabilities. According to experts we interviewed, any single measure at a point in time may give a dimension of a plan’s funded status, but it does not give a complete picture. 
Instead, the measures should be reviewed collectively over time to understand how the funded status is improving or worsening. For example, a strong funded status means that, over time, the amount of assets, along with future scheduled contributions, comes close to matching a plan’s liabilities. Comparing governments’ actual contributions to the “annual required contribution” (ARC) helps in evaluating the funded status of each plan. Each year, plan actuaries calculate a contribution amount that, if paid in full, would normally maintain or improve the funded status. This amount is referred to as the ARC, although the use of the word “required” can be misleading because governments can choose to pay more or less than this amount. If the actuarial assumptions are consistent with the plans’ future experience, paying the full ARC each year provides reasonable assurance that sufficient money is being set aside to cover currently accruing benefits as well as a portion of any unfunded accrued benefits left over from previous years, instead of leaving those costs for the future. In other words, when a government consistently pays the ARC, the benefits accrued by employees are paid for by the taxpayers who receive the employees’ services. When the ARC is not paid in full each year, future generations must make up for the costs of benefits that accrued to employees in the past. In addition, the ARC can be compared to the government’s yearly budget to understand the financial burden of the benefits, according to officials. This comparison indicates how affordable the plan is to the government in a given year. A high ARC relative to a government’s budget may indicate that the costs of benefits are relatively high or that payments have been deferred from previous years. The funded ratio is the ratio of assets to liabilities. Liabilities are the amount governments owe in benefits to current employees who have already accrued benefits they will collect in the future. 
The funded ratio indicates the extent to which a plan has enough funds set aside to pay accrued benefits. If a plan has a funded ratio of 80 percent, the plan has enough assets to pay for 80 percent of all accrued benefits. A rising funded ratio over time indicates that the government is accumulating the assets needed to make future payments for benefits accrued to date. A low or declining funded ratio over time may raise concerns that the government will not have the assets set aside to pay for benefits. While the funded ratio equals the ratio of assets to liabilities, unfunded liabilities equal the difference between liabilities and assets in dollars. Thus, unfunded liabilities indicate the amount of benefits accrued for which no money is set aside. Assets may fall short of liabilities, for example, when governments do not contribute the full ARC, when they increase benefits retroactively, or when returns on investments are lower than assumed. Additionally, because all these financial calculations involve estimates of future payments, they are based on a number of assumptions about the future. Unfunded liabilities can grow if actuaries’ assumptions do not hold true. For example, if beneficiaries live longer than anticipated, they will receive more benefits than predicted, even if the government has been paying the ARC consistently. Unfunded liabilities will eventually require the government employer to increase revenue, reduce benefits or other government spending, or do some combination of these. Revenue increases could include higher taxes, returns on investments, or employee contributions. Nevertheless, we found that unfunded liabilities do not necessarily imply that pension benefits are at risk in the near term. Current funds and new contributions may be sufficient to pay benefits for several years, even when funded ratios are relatively low. 
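The arithmetic behind these two measures can be sketched in a few lines. The dollar amounts below are hypothetical and not drawn from any plan discussed in this report.

```python
def funded_ratio(assets: float, liabilities: float) -> float:
    """Share of accrued benefit liabilities covered by plan assets."""
    return assets / liabilities

def unfunded_liability(assets: float, liabilities: float) -> float:
    """Dollar amount of accrued benefits for which no assets are set aside."""
    return max(liabilities - assets, 0.0)

# A hypothetical plan holding $80 billion in assets against $100 billion
# in accrued liabilities is 80 percent funded, with a $20 billion
# unfunded liability.
assets, liabilities = 80e9, 100e9
print(f"funded ratio: {funded_ratio(assets, liabilities):.0%}")
print(f"unfunded liability: ${unfunded_liability(assets, liabilities) / 1e9:.0f} billion")
```

As the text notes, neither number alone tells the whole story; the same 80 percent ratio can reflect an improving or a deteriorating plan depending on the trend and on whether yearly contributions keep pace.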
As described in figure 1, unfunded liabilities are calculated as intermediate steps in the process of calculating the ARC. After calculating the unfunded liabilities, actuaries usually determine an amount to fund the unfunded liabilities over several years or “amortize” the cost of the liability. That amortized portion is added to the cost of benefits that employees accrued in the current year to determine the ARC. If a government pays the ARC, then a portion of the unfunded liabilities is paid off each year. When no more unfunded liabilities exist, the funded ratio is 100 percent, and the plan has “fully funded” all the benefits that its current employees have accrued under the plan’s actuarial cost method. However, a fully funded plan still requires yearly contributions to maintain full funding because as employees perform additional service, they accrue additional benefits. As figure 1 defines them: Assets = sum of past contributions from the state and local government plan sponsor and employees, and investment earnings, that have not been paid out in benefits or administrative expenses. Liabilities = current cost of all future benefits that have been accrued to date. Under GASB reporting standards, the funded status of different pension plans cannot be compared easily because governments use different actuarial approaches such as different actuarial cost methods, assumptions, amortization periods, and “smoothing” mechanisms. Most public pension plans use one of three “actuarial cost methods,” out of the six GASB approves. Actuarial cost methods differ in several ways. First, each uses a different approach to calculate the “normal cost,” the portion of future benefits that the cost method allocates to a specific year, resulting in different funding patterns for each, as described in Table 2. Actuarial cost methods are used to allocate the current value of future benefits into amounts attributable to the past, to the current year, and to future years, as shown in figure 2.
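The ARC construction described above, a normal cost plus an amortized share of unfunded liabilities, can be sketched as follows. The level-dollar annuity formula, the 8 percent assumed return, the 30-year period, and all dollar amounts are simplifying assumptions for illustration; actual valuations use a range of actuarial methods.

```python
def amortization_payment(unfunded: float, rate: float, years: int) -> float:
    """Level annual payment that retires `unfunded` over `years`,
    assuming assets earn `rate` (standard annuity formula)."""
    return unfunded * rate / (1 - (1 + rate) ** -years)

def annual_required_contribution(normal_cost: float, unfunded: float,
                                 rate: float = 0.08, years: int = 30) -> float:
    """ARC = this year's normal cost plus the amortized portion of the
    unfunded liability (30 years is the maximum GASB allows)."""
    return normal_cost + amortization_payment(unfunded, rate, years)

# Hypothetical plan: $1.0 billion normal cost and a $5.0 billion
# unfunded liability amortized over 30 years at an assumed 8 percent return.
arc = annual_required_contribution(1.0e9, 5.0e9)
print(f"ARC: ${arc / 1e9:.2f} billion")
```

Paying this amount in full each year retires a slice of the unfunded liability while covering benefits accruing in the current year, which is why the report treats consistent payment of the ARC as the key contribution measure.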
The cost of future benefits that are attributable to past years under the actuarial cost method is called the actuarial accrued liability (AAL), while the cost of benefits accrued under the cost method in the current year is known as the normal cost. The funded status of plans using different cost methods differs because each has a different approach to dividing up the value of future benefits. Different cost methods are designed for plans to accrue liabilities at different rates, so the normal cost and the AAL vary according to the cost method. For example, under some cost methods, governments accrue more liabilities in the early part of employees’ careers rather than later. As a result, two identical plans, using identical actuarial assumptions but different cost methods, would report a different funded status. In addition to the cost methods, differences in assumptions used to calculate the funded status can result in significant differences among plans that make comparisons difficult. One key assumption is the rate at which governments assume their invested assets will grow. If governments assume a high growth rate, their calculations will indicate that they do not have to pay as much today, because the assets set aside will grow more rapidly. In 2006, 70 percent of state and local government pension plans assumed a return of 8.0 to 8.5 percent, while 30 percent assumed a lower rate of return (7 percent at the lowest). If a plan’s assets fail to grow at the assumed rate of return, then the shortfall becomes part of the unfunded liabilities. However, in other years, assets may earn more than the assumed rate of return, reducing unfunded liabilities.

Amortization Periods for Unfunded Liabilities

In addition to actuarial cost methods and assumptions, differences in amortization periods make it difficult to compare the funded status of different plans. Governments amortize unfunded liabilities to reduce the volatility of contributions from year to year.
Governments can choose shorter or longer periods over which to amortize unfunded liabilities. GASB standards allow governments to amortize unfunded liabilities over a period of up to 30 years. State and local governments can amortize their unfunded liabilities over such long periods because there is little chance that they will cease to exist. Finally, actuaries for many plans calculate the value of current assets based on an average value of past years. As a result, if the value of assets fluctuates significantly from year to year, the “smoothed” value of assets changes less dramatically. GASB does not limit the number of years governments may use to smooth the value of assets, but in 2006, most governments averaged the value of current assets over periods of zero to 5 years. Comparing the funded status of plans that use different smoothing periods can be confusing because the value of the different plans’ assets reflects a different number of years. Given fluctuations in the stock market from year to year, the reported value of assets for plans that use different numbers of years for smoothing calculations could reflect significantly different market returns. More than half of public pension plans reported that they have put enough assets aside in advance to pay for benefits over the next several decades, while governments providing retiree health benefits generally have significant unfunded liabilities. The percentage of pension plans with funded ratios below 80 percent, a level viewed by many experts as sound, has increased in recent years, and a few plans are persistently underfunded. Although members of these plans may not be at risk of losing benefits in the near term, the unfunded liabilities will have to be made up in the future. In addition, a number of governments reported not contributing enough to reduce unfunded liabilities, which can shift costs to future generations.
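The asset smoothing described above can be illustrated with a simple moving average; actual plans use a variety of smoothing methods, and the market values below are invented for illustration.

```python
def smoothed_value(market_values: list, years: int) -> float:
    """Average the most recent `years` market values of plan assets;
    years=1 means no smoothing (current market value only)."""
    recent = market_values[-years:]
    return sum(recent) / len(recent)

# Hypothetical year-end market values (in $ billions) through a market
# downturn and partial recovery.
values = [100, 110, 95, 85, 90, 105]
print(smoothed_value(values, 1))  # unsmoothed current value: 105.0
print(smoothed_value(values, 5))  # 5-year smoothed value: 97.0
```

Because the 5-year average still reflects the downturn years, the smoothed value trails the market recovery; this is why, as the text notes, a sharp market decline can depress reported funded ratios for several years afterward.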
For state and local governments’ retiree health benefits, studies have estimated unfunded liabilities nationwide to be between $600 billion and $1.6 trillion, although the amounts for individual governments vary widely. Even though annual costs for retiree health benefits are currently low compared to pensions, continuing to pay for current benefits with current revenues can put stress on government budgets because health care costs are increasing rapidly. Most public pension plans report having sufficient assets to pay for retiree benefits over the next several decades. Many experts and officials to whom we spoke consider a funded ratio of 80 percent to be sufficient for public plans for a couple of reasons. First, it is unlikely that public entities will go bankrupt as can happen with private sector employers, and state and local governments can spread the costs of unfunded liabilities over up to 30 years under current GASB standards. In addition, several commented that it can be politically unwise for a plan to be overfunded; that is, to have a funded ratio over 100 percent. The contributions made to funds with “excess” assets can become a target for lawmakers with other priorities or for those wishing to increase retiree benefits. More than half of state and local governments’ plans reviewed by the Public Fund Survey (PFS) had a funded ratio of 80 percent or better in fiscal year 2006, but the percentage of plans with a funded ratio of 80 percent or better has decreased since 2000, as shown in figure 3. Our analysis of the PFS data on 65 self-reported state and local government pension plans showed that 38 (58 percent) had a funded ratio of 80 percent or more, while 27 had a funded ratio of less than 80 percent. In the early 2000s, according to one study, the funded ratio of 114 state and local government pension plans together reached about 100 percent; it has since declined. In fiscal year 2006, the aggregate funded ratio was about 86 percent.
Some officials attribute the decline in funded ratios since the late 1990s to the decline of the stock market, which reduced the value of assets. This sharp decline would likely affect funded ratios for several years because most plans use smoothing techniques to average out the value of assets over several years. Our analysis of several factors affecting the funded ratio showed that changes in investment returns had the most significant impact on the funded ratio between 1988 and 2005, followed by changes in liabilities. Although most plans report being soundly funded in 2006, a few have been persistently underfunded, and some plans have seen funded ratio declines in recent years. We found that several plans in our data set had funded ratios below 80 percent in each of the years for which data is available. Of 70 plans in our data set, 6 had funded ratios below 80 percent for 9 years between 1994 and 2006. Two plans had funded ratios below 50 percent for the same time period. In addition, of the 27 plans that had funded ratios below 80 percent in 2006, 15 had lower funded ratios in 2006 than in 1994. The sponsors of these plans may be at risk in the future of increased budget pressures. By themselves, lower funded ratios and unfunded liabilities do not necessarily indicate that benefits for current plan members are at risk, according to experts we interviewed. Unfunded liabilities are generally not paid off in a single year, so it can be misleading to review total unfunded liabilities without knowing the length of the period over which the government plans to pay them off. Large unfunded liabilities may represent a fiscal challenge, particularly if the period to pay them off is short. But all unfunded liabilities shift the responsibility for paying for benefits accrued in past years to the future. A number of governments reported not contributing enough to keep up with yearly costs. 
Governments need to contribute the full ARC yearly to maintain the funded ratio of a fully funded plan or improve the funded ratio of a plan with unfunded liabilities. In fiscal year 2006, the sponsors of 46 percent of the 70 plans in our data set contributed less than 100 percent of the ARC, as shown in figure 4, including 39 percent that contributed less than 90 percent of the ARC. The percentage of governments contributing less than the full ARC has risen in recent years, extending a pattern in which only about half of governments make their full contributions. In particular, some of the governments that did not contribute the full ARC in multiple years were sponsors of plans with lower funded ratios. Almost two-thirds of plans with funded ratios below 80 percent in 2006 did not contribute the full ARC in multiple years. Of the 32 plans that had funded ratios below 80 percent in 2006, 20 did not contribute the full ARC in more than half of the 9 years for which data is available. In addition, 17 of these governments did not contribute more than 90 percent of the full ARC in more than half the years. State and local government pension representatives told us that governments may not contribute the full ARC each year for a number of reasons. First, when state and local governments are under fiscal pressure, they may have to make difficult choices about paying for competing interests. State and local governments will likely face increasing fiscal challenges in the next several years as the cost of health care continues to rise. In light of this stress, the ability of some governments to continue to pay the ARC may be questioned. Second, changes in the value of assets can affect governments’ expectations about how much they will have to contribute. Because a high proportion of plan assets are invested in the stock market, the decline in the early 2000s decreased funded ratios and increased the unfunded liabilities of many plans.
Such a marked decline in asset values was not typical in the experience of public pension funds, according to one expert. Reflecting the need to keep up with the increase in unfunded liabilities, ARCs increased, challenging many governments to make full contributions after they had grown accustomed to lower ARCs in the late 1990s. Moreover, some plans have contribution rates that are fixed by constitution, statute, or practice and do not change in response to changes in the ARC. Even when the contribution rate is not fixed, the political process may take time to recognize and act on the need for increased contributions. Nonetheless, many states have been increasing their contribution rates in recent years, according to information compiled by the National Conference of State Legislatures. Third, some governments may not contribute the full ARC because they are not committed to pre-funding their pension plans and instead have other priorities, regardless of fiscal conditions. When a government contributes less than the full ARC, the funded ratio can decline and unfunded liabilities can rise, if all other assumptions are met about the change in assets and liabilities. Increased unfunded liabilities will require larger contributions in the future to keep pace with the liabilities that accrue each year and to make up for liabilities that accrued in the past. As a result, costs are shifted from current to future generations. Our review of studies estimating the total retiree health benefits for all state and local governments showed that liabilities are between $600 billion and $1.6 trillion. The studies noted that, like many private employers, few governments have set aside any assets to pay for these obligations. The projected unfunded liabilities do not have to be paid all at once, but can be paid over many years. Some governments do not pay for any retiree health benefits and therefore do not have any unfunded liabilities. Others may have large unfunded liabilities. 
For example, California has estimated its unfunded retiree health benefits liabilities at $70 billion, while the state of Utah estimates $749 million. Estimates of unfunded liabilities for retiree health benefits could change substantially because projecting future health care costs is difficult. Compared to the future payments for pension benefits, payments for health care benefits are significantly more unpredictable. Pension calculations generally use salaries as a base for calculations and result in a predictable benefit amount per year. But the cost of providing health care benefits varies with the changing cost of health care as well as with each individual’s usage. In addition, state and local governments usually have the ability to reduce or eliminate benefits. Unfunded liabilities for retiree health benefits are high because, unlike pension plans, nearly all state and local government retiree health benefits have been financed on a pay-as-you-go basis. In other words, most governments have not set aside funds in a trust dedicated for future retiree health benefit payments. As a result, governments do not pay a yearly ARC, but rather pay for retiree health benefits as they come due from annual budgets. However, the new GASB accounting standards will require state and local governments to report their funding status on an accrual basis. In other words, for the first time, most governments will begin to calculate and report their funding status in a manner similar to the way they report pensions’ funding status, whether or not they are prefunded. Officials told us that state and local governments have not prefunded retiree health benefits for several reasons. First, for many governments, retiree health benefits began as an extension of employee health care benefits, which are usually paid for from general funds. Governments did not view retiree health as a separate stream of payments.
Second, retiree health benefits were established at a time when health care costs were more affordable, so paying for the benefits as a yearly expense was less burdensome. Third, the inflation rate for health care is less predictable than for pensions, so calculating the current funding status is difficult. Fourth, given that specific retiree health benefits are generally not guaranteed by law, employers are freer to modify benefits; as a result, state and local governments are reluctant to commit funds to an obligation that may be reduced or eliminated in the future. Finally, changes in national health care policy and health insurance markets can affect what benefits state and local governments cover, so state and local governments may have resisted locking in their commitment to pay for future retiree health benefits by prefunding, and instead preferred to finance on a pay-as-you-go basis. Although the unfunded liabilities for retiree health benefits are generally much higher than for pensions, their current annual payments are considerably lower. According to our analysis presented in our recent report on this topic, in 2006, the aggregate state and local contribution rate for pensions was about 9 percent of salaries, and the pay-as-you-go expense for retiree health benefits was about 2 percent of salaries. However, if retiree health continues to be financed on a pay-as-you-go basis, the pay-as-you-go amount is estimated to more than double to 5 percent of salaries by 2050 to keep up with the growth in health costs, adding to budgetary stress. Pay-as-you-go financing also leaves less budgetary flexibility because state and local governments must pay the full costs of each year’s benefits. In contrast, under pre-funding, benefits are paid from a fund that already exists, so government contributions can be reduced when fiscal pressures are great.
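The projected growth of the pay-as-you-go expense, from about 2 percent of salaries in 2006 to about 5 percent by 2050, is consistent with health costs outpacing salary growth by roughly 2 percentage points per year. The sketch below uses an assumed 2.1 percent excess growth rate, an illustrative figure chosen to match the cited endpoints, not a rate taken from the underlying analysis.

```python
def paygo_share(initial_share: float, excess_growth: float, years: int) -> float:
    """Pay-as-you-go cost as a share of salaries after `years` in which
    health costs grow `excess_growth` per year faster than salaries."""
    return initial_share * (1 + excess_growth) ** years

# Starting from 2 percent of salaries in 2006, compounding at an assumed
# 2.1 percent annual excess growth reaches roughly 5 percent by 2050.
share_2050 = paygo_share(0.02, 0.021, 2050 - 2006)
print(f"projected 2050 share of salaries: {share_2050:.1%}")
```

Even a modest gap between health cost growth and salary growth, compounded over four decades, more than doubles the budget share, which is the budgetary stress the report describes.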
Without such flexibility, governments that pay as they go may face even greater pressure to reduce benefits or shift the costs of benefits to beneficiaries, for example, by restricting eligibility, reducing coverage, or increasing premiums. Still, pre-funding retiree health benefits would require significantly higher contributions in the short term than pay-as-you-go financing would require. Understanding the funded status of state and local government retiree benefits requires examining, on a plan-by-plan basis, whether funding levels are improving over time and whether governments are making the contributions recommended by the plan’s actuary each year. The variety of actuarial funding methods and assumptions makes it difficult to compare funded status across different pension plans. However, funded status information is not intended to help compare plans, but rather to determine contributions that will achieve full funding over time and to assess a given plan’s funded status over time. The funded status of state and local government pensions overall is reasonably sound, though recent deterioration underscores the importance of keeping up with contributions, especially in light of anticipated fiscal and economic challenges. Since the stock market downturn in the early 2000s, the funded ratios of some governments have declined. Governments can gradually recover from these losses. However, the failure of some to consistently make the annual required contributions undermines that progress and is cause for concern, particularly as state and local governments will likely face increasing fiscal pressure in the coming decades. While unfunded liabilities do not generally put benefits at risk in the near term, they do shift costs and risks to the future. In the case of retiree health benefits, pay-as-you-go financing has been the norm up to the present day. The initial estimates of the unfunded liabilities will be daunting. But that is a natural consequence of pay-as-you-go financing.
Just as the unfunded liabilities did not accumulate overnight, it may be unrealistic to expect them to be paid for overnight. Rather, state and local governments need to find strategies for dealing with unfunded liabilities, and such strategies will take time, will require difficult choices, and could be affected by changes in national health policy. We provided a copy of this report for review to officials from the Internal Revenue Service, GASB staff, and other external reviewers knowledgeable about the subject area. They provided us with technical comments that we incorporated, where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to relevant congressional committees, the Acting Commissioner of Internal Revenue, and other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you have any questions about this report. Other major contributors include Tamara Cross, Assistant Director; Ken Stockbridge; Anna Bonelli; Temeca Simpson; Amy Abramowitz; Joseph Applebaum; Rick Krashevski; Jeremy Schwartz; Walter Vance; Charles Willson; and Craig Winslow. The objectives of this report were to examine 1) the key measures of the funded status of retiree benefits and 2) the current funded status of state and local pension and retiree health benefits. To describe the key measures of the funded status of retiree benefits, we interviewed experts on state and local government pension and retiree health benefits such as national organizations, bond rating agencies, and representatives from one local government retiree benefit system.
We also spoke with experts on actuarial science such as the Actuarial Standards Board, the American Academy of Actuaries, and independent actuaries. We spoke to staff of the Governmental Accounting Standards Board to understand accounting practices and principles. We also reviewed actuarial literature and attended conferences. In addition, we conducted the following analysis: To understand the impact of various economic factors on the funding ratio of public pension plans, we developed a simple model of the determinants of the funding ratio and conducted “counterfactuals” holding rates of return on investments constant. To do this, we used the following data sources: funding ratio data from the Public Fund Survey (PFS) for years 2001 to 2005 and the Survey of State and Local Pensions for years 1988 to 2000; market value of pension assets from the Federal Reserve’s Flow of Funds accounts; contributions and benefits data from the Bureau of Economic Analysis’s National Income and Product Accounts database; and data on returns on pension fund portfolios by analyzing market data. Our methodology and data sources for this analysis include some limitations. First, annual data are not available in the Survey of State and Local Pensions for 5 years during the period. For those years, values were imputed by using the average growth between the two closest values. In addition, the funding ratios are available on a fiscal year basis and were subsequently adjusted to a calendar year period. Second, assumptions may not be representative of all pension plans, such as the assumptions based on smoothing functions and the real expected returns on investments. Last, counterfactuals do not include policy adjustments that may occur because of different rates of return. To describe the funded status of state and local governments’ pensions, in addition to a literature review, we analyzed pension funding data provided by the National Association of State Retirement Administrators (NASRA).
The data come from two different databases. The first database is the PFS and is sponsored by NASRA and the National Council on Teacher Retirement (NCTR). Data from years 2001 to 2006 were available. PFS data are gathered by reviewing publicly available financial documents from the state and local government plans. The second database is called the PENDAT database and was sponsored by the Public Pension Coordinating Council. PENDAT data are available in fiscal years 1992, 1994, 1996, 1998, and 2000. PENDAT data were collected via a survey sent to the administrators of a sample of plans nationwide. The PFS and PENDAT databases do not include all of the same entries. We matched individual entries from PENDAT to PFS, resulting in a sample with between 63 and 71 plans that had data across each of the available years from 1994 to 2006. In fiscal year 2005, these plans represented 58 percent of plan assets nationwide, and 72 percent of state and local government pension plan members. We reviewed the PFS and PENDAT data and found them to be reliable for our purposes. To do this, we reviewed all entries of key data points in the PFS data using publicly available sources from the state and local government plan sponsors and made adjustments to the data as needed. The corrections made to the PFS data were not material. To review the PENDAT database, we reviewed the methodology used to collect the data and verified the data of 23 percent of entries using external sources. The corrections were not found to be material. 
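The matching of plans across the PENDAT and PFS databases can be sketched as an intersection on plan identifiers, keeping only plans present in both so funded status can be tracked from the earlier to the later survey. The plan names and funded ratios below are invented for illustration, not actual survey records.

```python
# Invented example records keyed by plan name: funded ratios by year.
pfs = {"Plan A": {2006: 0.85}, "Plan B": {2006: 0.78}, "Plan C": {2006: 0.92}}
pendat = {"Plan A": {1994: 0.90}, "Plan B": {1994: 0.70}}

# Keep only plans appearing in both data sets, merging their year
# series into a single record per plan.
matched = {name: {**pendat[name], **pfs[name]}
           for name in pfs.keys() & pendat.keys()}
print(sorted(matched))  # ['Plan A', 'Plan B']
```

Plans present in only one survey (here, "Plan C") drop out of the matched sample, which is why the report's matched set of 63 to 71 plans is smaller than either database alone.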
The information contained in the PFS and PENDAT databases has limitations: 1) surveys, including PENDAT, are subject to several kinds of error such as the failure to include all members of the population in the sample, nonresponse error, and data processing error; 2) the funding ratio and other funding indicators represent the financial status for the fiscal year with the most recent actuarial valuation, and thus do not all represent the same fiscal year’s financial status; 3) the plans included in the analysis are not necessarily representative of all state and local government pension plans nationwide; and 4) data for every plan are not available in each year. To obtain information on the funded status of retiree health benefits, we interviewed experts on retiree health benefits funding from national organizations, bond rating agencies, and one local government retiree benefits system. We also reviewed studies conducted by various organizations estimating the funded status. These organizations each obtained information about retiree health benefits liabilities from a number of different state and local governments and then extrapolated these figures to generate a nationwide estimate for all state and local governments. We reviewed the following studies: Credit Suisse, You Dropped a Bomb on Me, GASB, 2007. Limitations of this study include: includes only states in the analysis, not local jurisdictions; assumes that those government entities for which Credit Suisse was able to find estimates of future retiree health benefit obligations were representative of governments overall in terms of age distribution and funding levels; and does not consider the variation in actuarial assumptions and methods between the different plans. Cato Institute, Unfunded State and Local Health Costs: $1.4 Trillion, 2006.
Limitations of this study include the following: it covers only states, not local jurisdictions; it assumes that the government entities for which Cato was able to find estimates of future retiree health benefit obligations were representative of governments overall in terms of age distribution and funding levels; it does not consider the variation in actuarial assumptions and methods among the different plans; it is not clear how many employees were covered by the sample because there were so many localities; and its figures on the percentage of employees covered by health care plans in state and local government jurisdictions may not be precise. JP Morgan, OPEB for Public Entities: GASB 45 and Other Challenges, 2005. Limitations of this study include the following: it assumes that the government entities for which JP Morgan was able to find estimates of future retiree health benefit obligations were representative of governments overall in terms of age distribution and funding levels; and it does not consider the variation in assumptions and methods among the different plans. We conducted our work in Washington, D.C.; New York; and Connecticut, from July 2006 to January 2008 in accordance with generally accepted government auditing standards. State and Local Government Retiree Benefits: Current Status of Benefit Structures, Protections, and Fiscal Outlook for Funding Future Costs. GAO-07-1156. Washington, D.C.: September 24, 2007. State and Local Governments: Persistent Fiscal Challenges Will Likely Emerge within the Next Decade. GAO-07-1080SP. Washington, D.C.: July 18, 2007. Retiree Health Benefits: Majority of Sponsors Continued to Offer Prescription Drug Coverage and Chose the Retiree Drug Subsidy. GAO-07-572. Washington, D.C.: May 31, 2007. Employer-Sponsored Health and Retirement Benefits: Efforts to Control Employer Costs and the Implications for Workers. GAO-07-355. Washington, D.C.: March 30, 2007.
State Pension Plans: Similarities and Differences Between Federal and State Designs. GAO/GGD-99-45. Washington, D.C.: March 19, 1999. Public Pensions: Section 457 Plans Pose Greater Risk than Other Supplemental Plans. GAO/HEHS-96-38. Washington, D.C.: April 30, 1996. Public Pensions: State and Local Government Contributions to Underfunded Plans. GAO/HEHS-96-56. Washington, D.C.: March 14, 1996.

Pension and other retiree benefits for state and local government employees represent liabilities for state and local governments and ultimately a burden for state and local taxpayers. Since 1986, accounting standards have required state and local governments to report their unfunded pension liabilities. Recently, however, standards changed and now call for governments also to report retiree health liabilities. The extent of these liabilities nationwide is not yet known, but some predict they will be very large, possibly exceeding a trillion dollars in present value terms. The federal government has an interest in assuring that all Americans have a secure retirement, as reflected in the federal tax deferral for contributions to both public and private pension plans. Consequently, GAO was asked to examine: 1) the key measures of the funded status of retiree benefits and 2) the current funded status of retiree benefits. GAO analyzed data on public pensions, reviewed current literature, and interviewed a range of experts on public retiree benefits, actuarial science, and accounting. Three key measures help to understand different aspects of the funded status of state and local government pension and other retiree benefits. First, governments' annual contributions indicate the extent to which governments are keeping up with the benefits as they are accumulating. Second, the funded ratio indicates the percentage of actuarially accrued benefit liabilities covered by the actuarial value of assets.
Third, unfunded actuarial accrued liabilities indicate the excess, if any, of liabilities over assets in dollars. Governments have been reporting these three measures for pensions for years, but new accounting standards will also require governments to report the same for retiree health benefits. Because a variety of methods and actuarial assumptions are used to calculate the funded status, different plans cannot be easily compared. Currently, most state and local government pension plans have enough invested resources set aside to keep up with the benefits they are scheduled to pay over the next several decades, but governments offering retiree health benefits generally have large unfunded liabilities. Many experts consider a funded ratio of about 80 percent or better to be sound for government pensions. We found that 58 percent of 65 large pension plans were funded to that level in 2006, a decrease since 2000. Low funded ratios would eventually require the government employer to improve funding, for example, by reducing benefits or by increasing contributions. However, pension benefits are generally not at risk in the near term because current assets and new contributions may be sufficient to pay benefits for several years. Still, many governments have often contributed less than the amount needed to improve or maintain funded ratios. Low contributions raise concerns about the future funded status. For retiree health benefits, studies estimate that the total unfunded actuarial accrued liability for state and local governments lies between $600 billion and $1.6 trillion in present value terms. The unfunded liabilities are large because governments typically have not set aside any funds for the future payment of retiree health benefits as they have for pensions.
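The funded ratio and unfunded actuarial accrued liability measures described above reduce to simple arithmetic. The sketch below uses hypothetical dollar figures; the 80 percent threshold reflects the expert rule of thumb cited above, not a formal standard.

```python
def funded_ratio(actuarial_assets, actuarial_liabilities):
    """Funded ratio: percentage of actuarially accrued liabilities covered by assets."""
    return 100.0 * actuarial_assets / actuarial_liabilities

def unfunded_liability(actuarial_assets, actuarial_liabilities):
    """Unfunded actuarial accrued liability (UAAL): excess, if any, of liabilities over assets."""
    return max(0.0, actuarial_liabilities - actuarial_assets)

# Hypothetical plan: $85 billion in assets against $100 billion in liabilities.
ratio = funded_ratio(85.0, 100.0)        # 85.0 percent
uaal = unfunded_liability(85.0, 100.0)   # $15 billion
print(ratio >= 80.0)  # meets the roughly 80 percent soundness benchmark many experts cite
```

Note that the two measures answer different questions: the ratio allows rough comparisons of plans of different sizes, while the UAAL shows the dollar gap a government would eventually have to close.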
Traditionally, “universal service” has meant providing residential customers with affordable, nationwide access to basic telephone service. The Telecommunications Act of 1996, among other things, extended universal service support to eligible schools and libraries. The act also specified that every telecommunications carrier that provides interstate telecommunications services, unless exempted by FCC, must contribute to a universal service fund. Finally, the act directed FCC to convene a federal-state Joint Board to specify which services should be supported by the federal universal service mechanisms and to recommend regulatory changes to provide such support. In its May 1997 universal service order, FCC adopted the Joint Board’s recommendation that eligible schools and libraries could receive discounts of between 20 and 90 percent on all telecommunications services, Internet access, and internal connections, subject to a $2.25 billion annual cap. Changes have been made to the program through a number of reconsideration orders, the latest of which was released on June 22, 1998. These orders define, among other things, the size, time frame, and eligibility requirements for the schools and libraries program, the type and level of funding support available from universal service funds, and the administrative structure of the program. The general purpose of this program is to improve the access of schools and libraries to modern telecommunications services. Generally, any school that meets the Elementary and Secondary Education Act of 1965’s definition of a school is eligible to participate, as are libraries that can receive assistance from a state’s library administrative agency under the Library Services and Technology Act. In addition, the orders specifically define the three classes of services that are eligible for universal service support: telecommunications services, Internet access, and internal connections.
FCC has defined the mechanism by which eligible schools and libraries will receive support from the universal service program. Specifically, schools and libraries do not receive direct funding from the program. Instead, they receive discounts on the costs of services provided by vendors. The amount of the discount each school or library can receive under the program ranges from 20 to 90 percent and is determined using a matrix designed by FCC, with schools and libraries located in rural and low-income areas receiving the highest discounts from the fund. The universal service fund compensates the schools’ and libraries’ vendors for the amount of the discounts. The act did not prescribe a structure for administering the program. However, FCC directed the establishment of the Schools and Libraries Corporation. FCC’s Chairman selects or approves the Corporation’s Board of Directors and approves the hiring and removal of its Chief Executive Officer. Under FCC’s orders, the Corporation is responsible for administering certain functions of the program, including processing and reviewing applications and administering an Internet site on the World Wide Web. FCC also specified that the Corporation can engage only in activities that are consistent with FCC orders and rules. FCC’s latest reconsideration order significantly changed the program. Specifically, this order changed the funding year from a calendar year cycle to a fiscal year cycle and extended the first funding round period to 18 months. The order also adjusted the maximum amounts that could be collected and spent during 1998 and the first 6 months of 1999 and directed the Corporation to commit no more than $1.925 billion for the schools and libraries support program during this time frame. FCC also directed the Corporation to fund requests for telecommunications and Internet services first and then fund requests for internal connections.
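A matrix-based discount lookup of the kind described above might be sketched as follows. The poverty bands and discount values here are invented for illustration only; they are not FCC's actual matrix, which the orders define in detail.

```python
# Illustrative discount-matrix lookup: higher-poverty and rural applicants
# receive deeper discounts. Band thresholds and values are hypothetical.

def discount_rate(poverty_pct, rural):
    bands = [  # (minimum poverty %, urban discount, rural discount)
        (75, 90, 90),
        (50, 80, 85),
        (20, 60, 70),
        (0, 20, 25),
    ]
    for floor, urban_disc, rural_disc in bands:
        if poverty_pct >= floor:
            return rural_disc if rural else urban_disc

print(discount_rate(80, rural=True))   # 90: highest-need applicants get the top discount
print(discount_rate(10, rural=False))  # 20: lowest band
```

Whatever the band boundaries, every applicant lands somewhere between the 20 and 90 percent limits set by FCC's order, with the fund paying the discounted share to the vendor.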
Those applicants eligible for the highest levels of discounts would receive funding priority for internal connections. The Corporation currently has 15 staff, all based in Washington, who manage the application and disbursement process and conduct outreach to potential applicants. To date, the Corporation has conducted over 130 outreach sessions to inform schools and libraries about the program. In addition, the Corporation has established a web site that contains program applications, information, and updates. The Corporation also has provided training to its contractors’ staff in answering applicants’ questions and processing and reviewing applications. The Corporation has contracted out most of the application-processing, client support, and review functions to the National Exchange Carrier Association (NECA). NECA has 66 staff, the majority of whom are part of the program integrity assurance operation, which reviews the applications for compliance with the program’s eligibility requirements. NECA has also subcontracted with two organizations to provide customer support, process and enter the applications into the Corporation’s database, and establish and maintain the Corporation’s web site. As of June 1998, these two subcontractors employed approximately 390 staff dedicated to Corporation activities. According to Corporation officials, however, the subcontractors’ staffing levels could decrease as the system designs are finalized and the number of applications needing processing declines. The Corporation was established in the fall of 1997. The Corporation stated that its operating expenses for calendar year 1997 were approximately $1.9 million. For calendar year 1998, the first full year of program operations, the Corporation estimates its operating expenses at $18.8 million. Most of this estimate covers the costs of contracts, including the Corporation’s contract with NECA and an independent auditor.
Corporation staff stated, however, that the 1998 estimate may increase as program procedures and systems need to be redesigned in response to FCC’s recent rule changes. To receive universal service support, schools and libraries must complete a two-stage application process which, for the program’s first year of funding, began in January 1998. During the first stage, applicants post requests for services on the Corporation’s web site so that vendors can provide the applicants with bids on the cost of providing the requested services. The Corporation has received to date nearly 48,000 of these initial applications (FCC Form 470). The second stage of the process begins after the schools and libraries have accepted a bid and entered into a contract with a service vendor. The applicants then submit on paper a second application (FCC Form 471) that details the types and costs of the services being contracted for, and the amount of the discount being requested. In its original order, FCC determined that applications would be funded on a first-come, first-served basis. Subsequently, FCC amended its rules and the Corporation established a 75-day window within which these second applications would be considered as arriving at the same time. This was done, in part, in order to reduce disparities between applicants with substantial administrative resources and applicants with fewer resources. As a result, the applications received within this window are not funded on a first-come, first-served basis. Approximately 32,600 applications were received during this initial window. The Corporation estimates that the applications contain approximately $2 billion in requests for discounts. The Corporation’s contractors review the second applications for compliance with what the Corporation considers to be “minimum processing standards,” which include a check for original signatures, completeness, and legibility. If the minimum standards are not met, the application is rejected. 
If the standards are met but other problems with the application are found, the application is sent to a problem resolution team that contacts the applicant to make corrections. After these problems are corrected, information from the application is entered into a database. FCC and the Corporation anticipated that all of the first year’s applications would be processed by the end of June 1998. According to the Corporation, however, as of July 7, 1998, information from only about 20,400 of the 32,600 applications (about 62 percent) received within the initial window had been entered into the Corporation’s database. Of the remaining applications, approximately 2,560 (8 percent) had been rejected for not meeting minimum processing standards, 1,600 (5 percent) were in problem resolution, and 7,900 (24 percent) were awaiting data entry. According to Corporation officials, the delay occurred because the contractors have had to spend more time than expected working with applicants to resolve problems. The officials stated that applicants found some parts of the applications and instructions confusing. In addition, the officials noted that the contractors initially made some mistakes in applying the minimum processing standards. Therefore, some rejected applicants are currently being contacted to resolve their problems, enter their data, and place them back in the initial application window. To ensure compliance with FCC rules and regulations, the Corporation relies on a combination of applicants’ self-certifications, third-party reviews, and its own procedures. Applicants are required to self-certify that they are following the program’s rules, and third parties, such as state-level education and library agencies, certify that the schools and libraries have technology plans in place that show how technology will be used to support their educational goals.
In addition, the Corporation’s staff and contractors check applications to ensure that applicants are eligible, services are eligible, and discount levels are appropriate. The way the Corporation is conducting key compliance tests, however, raises our concern about how effective the tests will be in detecting deviations from program rules. We are also concerned about the timing of detailed reviews that the Corporation plans to conduct on a set of applications judged to be “high risk,” to provide further assurance that program rules are being followed. Currently, the Corporation is not planning to begin these selective, detailed reviews until after it issues commitment letters to applicants and their vendors informing them of the amount of funding that will be set aside to cover discounts for the services they are requesting. Should these subsequent reviews reveal systemic problems with the Corporation’s quality assurance procedures or defects in the reviewed applications, the Corporation could find it difficult to take corrective actions since the commitment letters are, in essence, “green light” signals to the applicants and vendors to go ahead with the contracted services. If the Corporation finds major problems with some of the applications at this time, it may have to reduce or withdraw funding commitments previously made. These applicants might find themselves responsible for paying more of the cost of services received than they planned for. On the basis of the Joint Board’s recommendations, FCC’s orders specified that the application process for schools and libraries would be grounded on self-certification by applicants. This was done in the belief that the administrative burden on applicants should be limited, while still holding them accountable for the information they provide. Accordingly, a responsible official must sign the application, certifying that the information presented is correct. 
FCC can impose civil and criminal penalties on applicants making willfully false statements. In addition to this general self-certification that all of the information provided is accurate, each application requires specific self-certifications about certain information provided. For example, the “request for services” application (FCC Form 470) requires applicants to self-certify that they or the entities they represent are an eligible school or library and that all services for which discounts are requested will be used for educational purposes only. The “request for discounts” application (FCC Form 471) includes additional self-certifications, such as assurances that all applicable state or local laws or rules regarding procurement have been followed. The applicants must also self-certify that they have the budgetary resources not only to pay their share of the costs of the requested services but also to use and maintain the technology for which discounts are requested. In addition to the self-certifications on the Form 470 and Form 471 applications, FCC requires applicants to have a separate technology plan that provides details on how they intend to integrate technology into their educational goals and curricula, as well as how they will pay for the costs of acquiring and maintaining the technology. FCC requires that the plans be independently approved. To implement this requirement, the Corporation designates third parties, such as state education and library agencies or private school associations, to review and approve the plans on the basis of criteria provided by the Corporation. The schools and libraries do not routinely submit copies of their technology plans for review by the Corporation. These technology plans do not have to be approved when the applications are submitted or even when the Corporation commits funding support to the applicants.
However, the applicants must certify to the Corporation that their plans have been approved before any funds are disbursed to cover the services requested. As a result, most applicants’ requests for discounted services are not routinely reviewed by the third-party reviewers in order to determine whether, in fact, the requested services are linked to the educational goals described in the applicants’ approved plans. According to Corporation officials, the third-party reviewers approve the plan but are not required to review the application. And, as noted above, the Corporation receives the application but does not routinely receive copies of the technology plan, although it may do so if it selects the application for a detailed review, as discussed below. The Corporation recognizes that self-certification and third-party approvals alone are not adequate controls to ensure compliance with the program’s rules. It has therefore established a program integrity assurance operation that is designed to help ensure that applications and invoices submitted to the Corporation are complete, accurate, and in compliance with FCC’s rules. No program integrity tests are applied to the initial application for services (Form 470). Instead, the Corporation focuses on reviewing the information submitted by applicants in their subsequent application for discounts (Form 471). The Corporation’s review of this application takes place in two stages. During the first stage, when the Form 471 application is submitted to the Corporation’s contractor, it is reviewed to ensure that it has met minimum processing standards. This review includes checking to see that the application has been signed by an authorized official and that the applicant is clearly identified. If the application does not meet the minimum standards, it is rejected and returned to the applicant. 
If the application meets minimum standards but is in some way unclear, it undergoes a “problem resolution” process, during which the Corporation’s contractor contacts the applicant to ask for clarification. At the second stage of the review process, the Corporation electronically compares information provided by the applicant against information in databases that the Corporation has compiled or purchased. Specifically, the Corporation runs three computer-assisted tests on each application. The set of tests (a) compares the name of the applying school or library to a database of eligible schools and libraries, (b) looks for indications of whether any discounts are being requested for ineligible services, and (c) compares the discount requested by the applicant to the appropriate discount, as calculated from data maintained by the Corporation. Should these tests indicate potential problems with the eligibility of the applicant, the eligibility of the services, or the appropriateness of the discount, the Corporation’s contractor contacts the applicant to resolve the issues identified. Depending on the additional information provided by the applicant, the application can be approved, revised, or rejected (in total or in part). Of the approximately 20,000 applications entered into the database and tested as of July 7, 1998, roughly 14,000 were identified by at least one of the three tests as needing further review. As indicated above, the Corporation has already applied these three tests to more than one-half of the 32,600 applications it has received. However, the Corporation added new criteria on several occasions to improve the particular test used to identify potentially ineligible services. Specifically, it added several criteria related to services prohibited under FCC’s rules after a number of applications had already been reviewed. As a result, different test standards have been applied to the applications already processed, depending on when they were reviewed.
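The three automated checks just described can be sketched as below. The applicant names, service lists, and tolerance value are hypothetical; the real tests run against databases the Corporation compiled or purchased.

```python
# Sketch of the three computer-assisted tests: (a) applicant eligibility,
# (b) service eligibility, (c) requested vs. calculated discount.
# All reference data and the tolerance are hypothetical.

ELIGIBLE_ENTITIES = {"Lincoln Elementary", "Main Street Library"}
INELIGIBLE_SERVICES = {"personal computers", "teacher training"}

def review_application(app, calculated_discount, tolerance=2):
    flags = []
    if app["applicant"] not in ELIGIBLE_ENTITIES:
        flags.append("applicant eligibility")
    if any(s in INELIGIBLE_SERVICES for s in app["services"]):
        flags.append("service eligibility")
    # A small deviation is tolerated, reflecting known reliability
    # limits in the database used for the calculation.
    if abs(app["requested_discount"] - calculated_discount) > tolerance:
        flags.append("discount level")
    return flags  # an empty list means the application passes all three tests

app = {"applicant": "Lincoln Elementary",
       "services": ["Internet access", "personal computers"],
       "requested_discount": 90}
print(review_application(app, calculated_discount=80))
# ['service eligibility', 'discount level']
```

The tolerance parameter illustrates the trade-off discussed in the text: widening it cuts processing time but lets larger discount discrepancies pass unchallenged.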
According to Corporation officials, they do not plan to use the updated criteria to recheck applications processed earlier to determine if any passed that should have been flagged for problem resolution. Another concern is the latitude of deviation allowed by the Corporation’s automated test that checks whether an applicant is requesting an appropriate discount level. This automated test compares an applicant’s requested discount with the appropriate discount as calculated from data in the Corporation’s database. The Corporation is not reviewing all the applications showing discrepancies between the database calculation and the applicant’s requested discount. Instead, it is allowing for a degree of deviation from the criteria established by FCC because, according to Corporation officials, the database used to conduct the test has some reliability problems. They are also concerned that reviewing all applications with any amount of deviation would increase processing time and costs without resulting in commensurate benefits. We recognize that internal controls should provide reasonable, but not absolute, assurance of deterring or detecting noncompliance with laws, regulations, and management policies. However, part of determining the reasonableness of controls involves assessing them in relation to the associated risks, costs, and benefits. A key risk in this instance is that allowing inappropriately high discount levels for some applicants reduces the amount of discount support available for others. To date, the Corporation has not performed a benefit-cost analysis to show that its approach is reasonable. Specifically, the Corporation has not determined the total dollar amount of potentially inappropriate discounts that is passing unchallenged through its computer-based test. In addition to the tests described above, the Corporation plans to conduct other computer-assisted tests on the applications. For example, it plans to test for duplicate applications.
However, these tests have not been finalized. In addition to these computer-assisted tests, the Corporation plans to conduct more detailed manual reviews of applications that it considers to be “high risk.” However, according to current plans, these reviews will not be performed until after funds are committed to applicants and vendors. To carry out these detailed reviews, the Corporation will designate applications as high risk if they (1) request a large total amount of funds, (2) request a large amount of funds compared with other applications on a per-unit basis (such as per-student or per-patron), (3) are from wealthy private schools, or (4) have been placed on an “alert list” of applications that have been identified in some way as potentially violating the program’s rules. Although the procedures for these detailed reviews have not been finalized, the Corporation plans to require applicants to submit additional material to support the information provided in their applications, such as technology plans, budget information, requests for proposals, and bids. Using this material, the Corporation staff will give these high-risk applications a detailed review for compliance with the program’s rules, such as those regarding eligibility of services and prohibitions against the improper consideration of “free services” in awarding contracts. Performing these reviews after commitment letters have been sent has some disadvantages. First, the reviews would not help the Corporation evaluate the effectiveness of its three automated compliance tests before funds are committed. As a result, it may not be able to identify and correct any systemic problems in its application review process prior to commitment. In addition, if the Corporation finds major problems at this time with the applications reviewed, it may have to reduce or withdraw funding commitments from applicants. 
This could cause problems for applicants that have begun receiving services on the basis of their commitment letters. These applicants might find themselves responsible for paying a higher cost for those services than they had planned. The Corporation has not yet finalized all the procedures, systems, and internal controls that it needs in order to make funding commitments and approve vendor compensation for the discounted services provided to applicants. Corporation officials stated that some progress has been made in developing the procedures and controls needed to conduct these processes and in developing the automated systems needed to carry them out. However, the procedures are still subject to change. In fact, key control documents in this process, the commitment letters and the “Receipt of Service” form (FCC Form 486) that triggers the funds disbursement process, have yet to be made final. Corporation officials could not estimate when these procedures and forms would be finalized. This situation is of concern because these procedures could be needed very shortly after commitment letters are sent to applicants and vendors. For example, applicants who are already receiving eligible services under existing contracts could quickly send in their Form 486s for processing once they receive commitment letters. Similarly, their vendors could quickly begin submitting invoices, and the Corporation could begin processing them once the related Form 486s have been accepted. The Corporation itself estimates that invoices for payment could begin arriving as soon as 15 days after commitment letters are sent. If the procedures and internal controls for this phase of the program are not in place when commitment letters are issued, the Corporation may find itself unable to process vendor invoices in a timely manner.
According to Corporation officials, the delay in finalizing the commitment letters and disbursement procedures is due to the priority they have given to processing the backlog of applications as well as to anticipated changes in the program’s rules. As discussed earlier, FCC made changes to the universal service program in June 1998. As described in the order, there were two primary changes to the schools and libraries discount program. First, the funding year was changed from a calendar year to a fiscal year, effective immediately. To ease the transition, the 1998 funding year was extended 6 months to end June 30, 1999. According to FCC, this change was made because delays in starting the program made it difficult for some schools to use the funds within the original time period and because a fiscal year calendar is more convenient for applicants and for the companies that pay for universal service. Second, the order changed the funding priorities for schools and libraries. Previously, FCC rules did not provide for any differentiation among applications that were received during the initial 75-day application window, except to specify that the last $250 million would be distributed on a priority basis to the applicants eligible for the highest discount levels. However, after recognizing that the funds provided by its orders would probably not cover all of the applicants’ requests, FCC changed its priority rules so that all applications for telecommunications and Internet services would be funded first. The remaining funds would be distributed to applicants asking for internal connections, and those with the highest discount levels would be funded first. Corporation officials stated that they are still developing procedures to implement these changes, including procedures to allow applicants to amend their applications. 
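The revised priority rule can be sketched as a sort: telecommunications and Internet requests come first, and internal connections requests are then ordered by discount level, highest first. The requests below are hypothetical.

```python
# Sketch of FCC's revised funding priority: fund telecommunications and
# Internet requests first, then internal connections in descending order
# of discount level. Request data are hypothetical.

requests = [
    {"id": 1, "type": "internal connections", "discount": 90},
    {"id": 2, "type": "telecommunications",   "discount": 40},
    {"id": 3, "type": "internal connections", "discount": 60},
    {"id": 4, "type": "Internet access",      "discount": 70},
]

def priority_key(req):
    internal = req["type"] == "internal connections"
    # Non-internal requests sort first; internal ones are ordered by
    # discount level, highest first.
    return (internal, -req["discount"] if internal else 0)

ordered = sorted(requests, key=priority_key)
print([r["id"] for r in ordered])  # [2, 4, 1, 3]
```

Because Python's sort is stable, requests within the same priority tier keep their original order, mirroring the fact that applications within the 75-day window are treated as arriving at the same time.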
In December 1997, FCC’s Chairman requested that the Corporation contract with an independent auditor to verify that the program’s processes and procedures provide the controls needed to guard against fraud, waste, and abuse. The Corporation accordingly engaged the services of an independent audit organization, which is currently reviewing the Corporation’s systems and procedures and providing advice on improvements. According to current plans, the auditor’s report is due to be completed before the Corporation authorizes the disbursement of funds. The independent audit is to include a review of the design of the program’s integrity assurance operations. According to the Corporation, the audit objectives are to determine if the Corporation has designed the controls necessary to provide reasonable assurance that (1) all applications are processed in the order received; (2) only eligible schools and libraries receive discounts for eligible services; (3) the discount percentages are calculated in accordance with FCC’s orders; (4) payments for reimbursements to vendors are timely; and (5) funding commitments do not exceed the program’s limits. However, the Corporation stated that these control objectives have not been finalized and are subject to change. The auditor’s scope of work, in this start-up phase, is focused on the design of the controls and will not include a verification of how effectively the controls have been applied. For example, the auditor will not review a sample of applications to determine whether the eligibility tests for applicants and services actually identified the applications that could have compliance problems. We believe that the independent audit can be useful in strengthening the program’s integrity, even with its limited scope of work. We are concerned, however, about the timing of the auditor’s final report, which is not due until after funding commitment letters have been issued to applicants and vendors.
When we discussed our concern with Corporation officials, they proposed having the auditor brief the Corporation’s Board of Directors on its preliminary results regarding “pre-commitment” procedures before the Corporation sends out funding commitment letters. This approach, however, does not adequately address our concerns. The briefing would not cover the procedures that the Corporation would use for its post-commitment review of applications that it designates as high risk. More important, the briefing would not cover the procedures, systems, and internal controls associated with disbursing funds. As noted earlier, applicants and vendors could begin sending in forms and invoices for funds disbursement as soon as 15 days after commitment letters have been sent out. It is therefore important that the Corporation have all of its disbursement procedures, systems, and controls in place and reviewed by the independent auditor before commitment letters are issued. If the auditor’s final report comes later and identifies problems with disbursement procedures, it may be difficult for the Corporation to resolve them in a timely manner so that vendor invoices can be processed promptly and accurately. Currently, the Corporation does not know when the auditor’s formal report will be completed, partly because it does not know when it will finalize the funds disbursement procedures, systems, and controls for the auditor to review. Performance measurement is critical to determining a program’s progress in meeting its intended outcomes. Accordingly, the Congress, FCC, and the Corporation need clearly articulated goals and reliable performance data to assess the effectiveness of the schools and libraries program. FCC’s combined “Strategic Plan for Fiscal Years 1997-2002 and Annual Performance Plan for Fiscal Year 1999,” prepared in response to the Results Act, mentions the schools and libraries program in the context of a large number of telecommunications initiatives. 
However, this document provides no specific strategic goals, performance measures, or target levels of performance for the program as required by the act. The schools and libraries program is listed under the combined plan’s “Policy and Rulemaking Activity Objective 2,” which states that FCC “will encourage competition in the telecommunications industry through pro-competitive, deregulatory rulemakings, reducing consumer costs and increasing the telecommunications choices available to consumers.” However, this is a high-level, comprehensive goal that includes a wide array of telecommunications initiatives, such as radio spectrum management, the allocation of toll-free numbers, the review of merger requests, and standard setting for global communications services. Moreover, for all of the varied activities under this goal, there is a single general performance indicator: “Performance will be measured by an annual compilation of the number of actions taken by the Commission to promote competition and an analysis of the result of these activities on consumers.” While enhancing competition is part of FCC’s mission, it is not clear how this statement translates into a strategic goal for the schools and libraries program. Similarly, the annual performance goal for the schools and libraries program in fiscal year 1999 is too general, stating simply that FCC “will work to improve the connections of classrooms, libraries and rural health care facilities to the Internet by the end of 1999 and to maintain affordable Telecommunications services to rural America.” FCC needs to make the performance goals and measures for the program more specific to bring them in line with the Results Act’s requirements. The act defines an annual performance goal as the target level of performance expressed as a tangible, measurable objective against which actual achievement is to be compared. 
An annual performance goal is to consist of two parts: (1) the performance measure that represents the specific characteristic of the program used to gauge performance and (2) the target level of performance to be achieved during a given fiscal year for the measure. According to Corporation officials, they have begun exploring options for performance measurement. For example, they have identified a number of existing data sources that could be used to develop baseline data and measure trends in areas such as Internet connections. While this is encouraging, it is important that FCC take the lead as part of its policy-making and oversight responsibilities for the program. FCC can build on the Corporation’s preliminary work in revising its own annual performance plan to define specific goals and measures for the program. GAO has issued guidance on developing effective strategic plans which FCC should find useful. We recognize that a program in its first year of operation faces many challenges and difficulties. While the initial year cannot be expected to unfold without any problems, it is important that the program’s managers identify the major risks facing the program and address them at the time when corrective actions would be most effective. This time is approaching for the Corporation as it prepares to issue its first set of funding commitment letters to successful applicants. Given our concerns over the program integrity assurance operations, we believe that the Corporation needs to complete additional actions before, rather than after, commitment letters are issued to applicants. Waiting until after commitment letters have been issued will make it difficult for the Corporation to take effective actions to correct any systemic problems in the application review procedures and could put the Corporation in the position of having to withdraw funding commitments from applicants, even those who have begun receiving services from vendors. 
Similarly, issuing commitment letters before all of the program’s operating procedures, systems, and internal controls have been finalized and verified (especially those dealing with authorizing the disbursement of funds) would put the program’s integrity at risk. To help strengthen the Corporation’s program integrity assurance operations and help ensure that funding is properly directed to eligible applicants, for eligible and appropriate services, and at appropriate discount levels, we recommend that the FCC Chairman direct the Chief Executive Officer of the Schools and Libraries Corporation to complete the following actions before issuing any funding commitment letters to applicants:

- Conduct detailed reviews of a random sample of applications to assess not only the soundness of these applications but also the overall effectiveness of the Corporation’s program integrity procedures for detecting ineligible applicants, ineligible services, and inappropriate discount levels as defined by FCC orders. Should these reviews reveal systemic weaknesses in program integrity procedures or their implementation, the Corporation should take corrective actions before committing any funds.
- Finalize procedures, automated systems, and internal controls for the post-commitment phase of the program’s funding cycle, including funds disbursement.
- Obtain a report from its independent auditor that finds that the Corporation has developed an appropriate set of internal controls to mitigate against waste, fraud, and abuse.

In addition, before issuing commitment letters for those applications identified as “high risk,” the Corporation should conduct detailed reviews of the technology plans and related documents to determine whether the applicants have the resources to effectively use the services requested and whether the applications are in compliance with FCC rules regarding eligibility.
Finally, we recommend that the FCC Chairman direct responsible FCC staff to develop goals, measures, and performance targets for the schools and libraries program that are consistent with the requirements of the Results Act. These measures should be defined by the end of this Federal fiscal year so that data collection and analysis activities can begin during the program’s first funding cycle and goals can be communicated to future applicants. We performed our review during June and July 1998 in accordance with generally accepted government auditing standards. We met with officials from FCC and the Corporation to review the progress being made in starting up the schools and libraries program and implementing the first year’s funding cycle. We also met with the Corporation’s contractor in New Jersey, which has major responsibilities for processing and reviewing the program’s applications. We reviewed guidance and procedures developed by FCC and the Corporation, along with status reports on the program’s activities and cost data. We did not verify the accuracy of the information in these reports or the cost data. We discussed our findings and recommendations with FCC and Corporation officials. The Corporation’s Chief Executive Officer agreed with our recommendations. In addition, in response to Corporation comments, we made a few revisions including clarifying the scope of the detailed compliance reviews. FCC’s Common Carrier Bureau Chief stated that the recommendations are reasonable. Mr. Chairman, this concludes our testimony. We would be happy to answer any questions that you and members of the Committee may have at this time. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | GAO discussed issues related to the Schools and Libraries Corporation's operating procedures and internal controls, focusing on: (1) its progress in reviewing applications; (2) the scope and timing of key compliance tests; (3) the status of its efforts to finalize its operating procedures; and (4) the status of the independent audit to determine whether the Corporation has developed an appropriate set of internal controls to mitigate against fraud, waste, and abuse. 
GAO noted that: (1) the Corporation has made substantial progress in establishing an operational framework for the program that is consistent with relevant Federal Communications Commission (FCC) orders; (2) with regard to processing applications, the Corporation has worked with schools and libraries to inform them about the program and its application procedures; (3) during the initial application period, which began on January 30, 1998, and ended on April 15, 1998, schools and libraries sent in over 32,600 applications for discounts; (4) however, processing these applications has taken longer than either the Corporation or FCC expected; (5) the Corporation relies on a combination of applicants' self-certifications, third-party reviews, and its own procedures to ensure compliance with FCC's rules and regulations; (6) the Corporation tests applications for compliance with rules on the eligibility of applicants and requested services, and on the amount of requested discounts; (7) also, while the Corporation plans to conduct additional tests and reviews to ensure that applications are consistent with program rules, their scope and timing have not been finalized; (8) while the Corporation has established procedures for initially reviewing the applications, it has not yet finalized all necessary procedures and related internal controls for the program; (9) GAO is particularly concerned about this because the Corporation estimates that invoices for payment could begin to arrive as soon as 15 days after commitment letters are sent out; (10) the FCC Chairman has called for an independent audit of the Corporation's internal controls to help mitigate against fraud, waste, and abuse; (11) since applicants and vendors could begin submitting forms and invoices for disbursement of funds as soon as 15 days after they receive their commitment letters, it is important that the Corporation have all of its disbursement procedures, systems, and controls in place and reviewed by the 
independent auditor before sending these letters; (12) the FCC has not developed performance goals and measures for this program consistent with the requirements of the Government Performance and Results Act of 1993; (13) FCC's Strategic Plan for Fiscal Year 1997-2002 and Annual Performance Plan for Fiscal Year 1999 mentions the schools and libraries program in the context of a large number of telecommunications initiatives, but establishes no specific performance measures or target levels of performance to be achieved by the program; and (14) the Corporation is still developing and finalizing some of its procedures and controls, and they are subject to change. |
The Department of Defense (DOD) has been recovering nonrecurring research and development and one-time production costs on sales of weapon systems to foreign governments since 1967. The requirement to recover a proportionate amount of these costs was codified in the Arms Export Control Act in 1976, 22 U.S.C. section 2761 (e)(1)(B). The intent of the act was to control U.S. costs and the extent of weapons sales to foreign governments. The law required the recovery of costs on foreign military sales (government-to-government sales), but DOD retained its policy to collect nonrecurring costs on direct commercial sales (between the contractor and the buying entity) as it had been doing before the law was enacted. In 1992, DOD canceled its policy to recover nonrecurring costs on direct commercial sales in an effort to increase the competitiveness of U.S. firms in the world market. In 1995, a number of bills were introduced that could affect the recovery of nonrecurring costs on military sales. DOD interpreted the Arms Export Control Act as requiring the recovery of research and development costs on a pro rata basis. Between 1974 and 1977, DOD used a pro rata rate up to 4 percent of the total sales price. Currently, the services calculate the pro rata rate by dividing total research and development and other one-time production costs by the anticipated total number of units to be produced for both domestic and foreign use. A separate charge is calculated for each item of major defense equipment. The Defense Security Assistance Agency (DSAA) must approve all charges. They are published in the Major Defense Equipment List (MDEL) as part of DOD Manual 5105.38-M. DSAA officials acknowledged that the current pro rata calculation is complex and subject to error, particularly if sales fall short of or exceed projections. Nonrecurring cost charges are considered offsetting proprietary receipts and are deposited into the U.S. Treasury General Fund. 
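The services’ pro rata calculation described above reduces to a simple division, which a short sketch can make concrete. This is an illustration only; the cost and quantity figures below are hypothetical, not drawn from the MDEL:

```python
def pro_rata_charge(nonrecurring_costs, anticipated_total_units):
    """Per-unit charge: total R&D and one-time production costs divided by
    the anticipated total units to be produced for domestic and foreign use."""
    return nonrecurring_costs / anticipated_total_units

# Hypothetical item of major defense equipment:
# $500 million in nonrecurring costs spread over a projected 2,000 units.
charge_per_unit = pro_rata_charge(500_000_000, 2_000)  # $250,000 per unit sold abroad
```

The complexity DSAA acknowledged follows directly from the denominator: if actual production falls short of or exceeds the projection, the per-unit charge no longer recovers a proportionate share and must be recalculated.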
They are credited to DOD’s total budget authority and total outlays but cannot be spent unless specifically appropriated. The Arms Export Control Act also specifies that waivers or reduced charges of nonrecurring costs are permitted on sales to the North Atlantic Treaty Organization (NATO) countries, Australia, New Zealand, and Japan to further standardization and mutual defense treaties. However, each waiver and reduction requires written justification. DOD collected $181 million in nonrecurring costs under the foreign military sales program in fiscal year 1994. Fiscal year 1990-92 collections totaled $559.4 million—$337.3 million for foreign military sales and $222.1 million for direct commercial sales. Fiscal year 1993 collections totaled $177.9 million. DSAA estimated in February 1995 that collections during fiscal years 1995-99 could amount to $845 million. DSAA based these estimates primarily on past sales. DSAA also estimated that if the charge on foreign military sales is dropped as proposed, collections would decrease by $73 million through 1999. Some collections would continue based on deliveries to be made on current contracts. (See fig. 1.) A DSAA official stated that collections would probably stop completely in fiscal year 2002 if the charge is repealed in fiscal year 1995. In May 1995, DSAA estimated that if a requirement to collect nonrecurring costs on direct commercial sales were reimposed in fiscal year 1996, it would resume collections in fiscal year 1998, given production and delivery lead times, and recover about $198 million through fiscal year 1999. Table 1 shows estimated collections on both foreign military and direct commercial sales (including a charge on direct commercial sales). DOD waived $273 million in nonrecurring costs to NATO members and Japan in fiscal year 1994, about $92 million more than DOD collected in nonrecurring cost charges in the same year.
About 90 percent of the waivers involved Norway’s purchase of missiles and Turkey’s purchase of missiles, aircraft, gun mounts, sonars, and vertical launchers. DOD’s justification for the waivers involving Norway was to help achieve standardization, and the justification for waivers involving Turkey related to base rights agreements. Table 2 shows the aggregated totals of authorized waivers to NATO, 12 individual NATO countries, Australia, and Japan for fiscal years 1991 to 1994. Waivers on direct commercial sales represent sales agreements signed before the 1992 repeal. We focused our analysis on the comparison of current pro rata charges with flat rate charges of 3, 5, 8, and 10 percent on the acquisition cost of 68 weapon systems sold. First, we calculated the charges on four categories of weapons—projectiles, missiles, aircraft, and aircraft engines. The flat rate charges of 3 and 5 percent generally resulted in lower total charges for each category of weapon systems—in the aircraft category, the charge was considerably less at 3 percent—than the total pro rata charges. Flat rate charges of 8 and 10 percent in most cases resulted in comparable or considerably higher total charges than the current pro rata charges for the four categories of weapon systems. For example, a 3-percent flat rate charged on the sale of each of 27 aircraft resulted in total charges of $20.4 million, or $9.5 million less than the $29.9 million recovered under the pro rata system. On the other hand, a 10-percent flat rate charge on the sale of each of the 27 aircraft resulted in a total charge of $67.9 million, or $38 million more than the $29.9 million recovered under the pro rata system. On the 68 weapon systems we examined, current pro rata charges ranged from 0.07 percent to 15.95 percent of acquisition cost and averaged 5.18 percent. Thus, for a given flat rate of 3, 5, 8, or 10 percent, the difference between the flat rates and pro rata charges varies widely. 
For example, a 3-percent flat rate would be greater than the pro rata charge for 19 of the 27 aircraft we examined, whereas a 3-percent flat rate was larger than the pro rata charge for only 2 of the 13 missiles we examined. However, on some sales of commonly sold military items, DOD might not recover with a nominal flat rate the same level of charges that it would under the pro rata system. For example, DOD anticipated collections of $279 million in nonrecurring cost charges on the sales of 228 F-16 A/B aircraft and 131 F-16 C/D aircraft when they are delivered to the buying countries. A flat rate of 3 percent on these sales would yield about one-half the pro rata charges; a 6-percent flat rate would yield an amount comparable to the pro rata charges. On sales of HARM AGM-88 missiles to three countries, total pro rata charges for the 181 missiles sold amount to $3.85 million. A 3-percent flat rate on these sales would provide only 40 percent of the pro rata charges; a 7.5-percent flat rate would yield an amount comparable to the pro rata charges. Appendix I compares the current pro rata charges with flat rate charges of 3, 5, 8, and 10 percent on the weapon systems we examined. The benefit of computing nonrecurring cost charges with a flat rate is its ease of administration. In addition, some of the U.S. government’s research and development investment would be recovered, though perhaps not accurately or equitably for some specific weapons or categories of weapons. Total recoveries also depend on sales volume, which in turn reflects buyers’ assessments of economic factors such as price, quality, availability, and competition; this effect must also be considered. We did not analyze flat rate charges on commercial sales because of the proprietary nature of commercial sales prices. However, DSAA officials stated that the same rate would apply to both types of sales should the nonrecurring cost charge be reimposed on direct commercial sales.
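The flat rate computation, and the breakeven rate at which a flat charge would match the pro rata recovery on a given sale, reduce to simple arithmetic. The sketch below uses the HARM AGM-88 figures cited above; the total acquisition cost of roughly $51.3 million is our back-calculation from the reported 7.5-percent breakeven rate, not a figure reported by DOD:

```python
def flat_rate_charge(acquisition_cost, rate):
    """Nonrecurring cost charge as a fixed percentage of acquisition cost."""
    return acquisition_cost * rate

def breakeven_rate(pro_rata_charge, acquisition_cost):
    """Flat rate that would recover the same amount as the pro rata charge."""
    return pro_rata_charge / acquisition_cost

# HARM AGM-88 sales: $3.85 million in total pro rata charges on 181 missiles.
# The $51.3 million acquisition cost is inferred from the 7.5-percent breakeven rate.
pro_rata_total = 3_850_000
acquisition_total = 51_300_000

rate_needed = breakeven_rate(pro_rata_total, acquisition_total)  # about 0.075
low_rate_recovery = flat_rate_charge(acquisition_total, 0.03)    # about $1.54 million
```

A 3-percent flat rate thus recovers well under half of the pro rata amount on this item, consistent with the wide variation, from 0.07 percent to 15.95 percent of acquisition cost, observed across the 68 systems.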
We reported in 1986 that the pro rata system had inaccuracies that prevented DOD from collecting accurate nonrecurring cost charges. For example, DOD was unable to accurately predict future costs and future U.S. and foreign quantity requirements. At that time, we recommended that DOD improve the existing pro rata system or develop a new approach for recovering research and development costs. The approach we suggested was to apply a flat rate to the acquisition price of all equipment sold abroad. We reported that with the use of a flat rate, DOD would recover comparable research and development costs yet simplify the complex administrative and review process of calculating a pro rata fee. In 1986, DOD opted to retain the pro rata calculation and stated that the Arms Export Control Act would need to be revised to permit the use of a flat rate fee. DOD’s reasoning at the time was that a flat rate would not recover a “proportionate” share of investment on individual items as the law required. DSAA’s General Counsel now interprets 22 U.S.C. section 2761 (e)(1)(B) as allowing a flat rate to be collected because the law requires recovery of a proportionate amount, not a pro rata share. Thus, the DSAA General Counsel concluded that the law would not have to be amended to permit the use of a flat rate. In our view, it is not clear that DOD would have authority under current law to use a flat rate. Supporters and opponents of recovery of nonrecurring costs differ on its benefits and drawbacks. On one hand, supporters of nonrecurring cost recovery that we spoke with, including arms control advocates, argue that nonrecurring cost charges should be collected on both foreign military sales and direct commercial sales for a number of reasons. Some supporters believe that, from an economic standpoint, the United States should recover all its costs and not subsidize the weapons industry by forgoing recovery of a portion of its research and development investments. 
Others believe that arms sales decisions should be based on national security concerns, not the economic interests of private firms. One group pointed to successful conversions of elements of the defense industry to competitive members of the international market for civilian goods as a means to counter declining defense production. Some arms control advocates assert that higher prices may deter sales and lessen any threat to the United States by reducing the availability of arms worldwide. Some supporters told us that recovered charges are deposited into the U.S. Treasury and thus relieve the U.S. budget deficit and benefit U.S. taxpayers. Some groups believe waivers to NATO and other foreign countries should be abolished as well. Opponents of recovery that we spoke with, generally industry representatives, favor repeal of the charge on foreign military sales and are adamantly against reimposing it on direct commercial sales. They expressed concerns that the charges raise sales prices and inhibit U.S. businesses’ competitiveness in the world market. They asserted that any addition to the cost of weapons could price U.S. industry out of the world market with a cascading adverse impact on U.S. jobs, income, and tax revenue. They also stated that lost sales, whether government-to-government or direct commercial sales, raise prices to the U.S. military services because they lose the benefit of lower unit costs. Industry officials also stated that the charge is an unfair tax that does not accurately represent U.S. research and development investment and is applied in an arbitrary manner. Many industry representatives said that the U.S. research and development investment benefits U.S. forces regardless of foreign sales and should not be imposed on foreign customers. 
DOD officials stated that they believe eliminating the recovery charge would not negatively affect national security interests and would, overall, be beneficial to the United States. In a May 1995 report, we compared U.S. government support for military exports with that of France, Germany, and the United Kingdom. We pointed out that, among other things, (1) the United States has been the world’s leading defense exporter since 1990, with almost 50 percent of the global market; (2) based on orders placed but not yet filled, U.S. industry will likely remain strong in the world market, at least for the short term; and (3) the U.S. government already provides substantial financial and other support to U.S. defense exporters. Because of the large size of the U.S. domestic defense program, European businesses believe they are at a disadvantage when competing with U.S. firms. In written comments on a draft of this report, DOD concurred with the report. DOD indicated that (1) the Department fully supported the administration’s proposal to repeal the statutory requirement to recover nonrecurring costs on foreign military sales of major defense equipment, (2) a consistent policy for foreign military and direct commercial sales is essential, and (3) the current imbalance between the two types of sales should be eliminated. DOD’s comments are reprinted in appendix II. DOD also provided technical suggestions to clarify the report, and they have been incorporated where appropriate. It should be pointed out that our review did not assess the legislative proposals but focused primarily on the financial effects of using a flat rate instead of the current pro rata fees. We obtained information for this review from officials of DOD, DSAA, and the military services. We reviewed applicable statutes and DOD regulations governing recovery of nonrecurring costs on foreign military sales.
We also discussed the benefits and drawbacks of recovering nonrecurring costs with supporters and opponents of recovery. To determine the effect of imposing a flat rate charge, we obtained from each of the services the acquisition value of selected major defense equipment sold under the foreign military sales program. We calculated nonrecurring cost charges using flat rates of 3, 5, 8, and 10 percent of the acquisition values of the selected weapon systems. We did our work between January and March 1995 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from its issue date. At that time, we will send copies to the Chairmen of the Senate and House Committees on Appropriations, the Secretaries of Defense and State, and the Director of the Office of Management and Budget. Copies will also be made available to others on request. Please contact me on (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report were Diana Glod, Barbara Schmitt, and George Taylor. Table I.1: Pro Rata and Flat Rate Charges on Projectiles. Table I.2: Pro Rata and Flat Rate Charges on Missiles. Table I.3: Pro Rata and Flat Rate Charges on Aircraft (table notes: without two J-85 engines; without engines, AN/APG-63 radars, multistage improvement program, and towed electronic warfare system; with engines).
| Pursuant to a congressional request, GAO provided information on military exports, focusing on: (1) the government's recovery of nonrecurring research and development costs on sales of major defense equipment; (2) the effect of charging a flat or standard rate rather than the current pro rata fee; and (3) views from supporters and opponents on the recovery of these costs.
GAO found that: (1) the Department of Defense recovered $181 million in nonrecurring costs on foreign military sales in fiscal year (FY) 1994 and estimated, based on historical trends, that collections could amount to $845 million between FY 1995 and 1999; (2) the Defense Security Assistance Agency waived almost $273 million in nonrecurring cost charges on sales to North Atlantic Treaty Organization countries and Japan in FY 1994; (3) the total value of waivers for FY 1991 through 1994 amounted to $773 million; (4) if the charge for nonrecurring costs is repealed, some collections would continue for a few more years as the charges are recovered on deliveries associated with prior years' sales; (5) if the legislative requirement to collect nonrecurring cost charges is not repealed, one alternative to the current pro rata charge is a flat rate charge, which would be easy to calculate and would not need to be periodically updated, as is the case in calculating a pro rata charge; (6) the effect of a flat rate varies depending on the way it is applied; in some cases the amount the U.S.
government would collect on each unit sold would be less than the pro rata charge; in others it would be considerably more; (7) the total charges for each of four categories of 68 weapons systems (projectiles, missiles, aircraft, and aircraft engines) were generally lower than the current pro rata charges when using three and five percent flat rates but were comparable or higher for the most part when using eight and ten percent flat rates; (8) the differences between the pro rata charges and the flat rate charges for each of the 68 weapons systems varied widely for the same four flat rates and, for example, were considerably higher for some aircraft but lower for some missiles; (9) the average of the current pro rata charge on the acquisition cost of the 68 weapons systems was 5.18 percent; (10) supporters and opponents of recovery of nonrecurring costs differ on its benefits and drawbacks; (11) supporters, including some arms control advocates, believe that the charges serve national security interests by keeping weapons systems out of unstable regions of the world and that the weapons industry should not be subsidized at taxpayers' expense; (12) opponents believe the charges adversely affect U.S. industry's competitiveness in the world market and could affect the U.S. economy in the long run; and (13) the United States has been the world's leading defense exporter since 1990, and based on orders received but not yet filled, the United States is likely to retain its first place position in the world market for at least the short term. |
U.S. support for the Bosnia peace operation is projected to cost an estimated $10.6 billion from fiscal year 1996 through fiscal year 1999. The peace operation was established after 3-1/2 years of war in Bosnia and Herzegovina (hereafter referred to as Bosnia), when international intervention culminated in the signing of the Dayton Agreement in December 1995. In signing the agreement, the parties to the conflict—political leaders of Bosnia’s three major ethnic groups, Croatia, and the Federal Republic of Yugoslavia—agreed to implement a number of security, political, and economic measures intended to bring peace and stability to the region. To assist the parties in their efforts, the international community created the Bosnia peace operation, consisting of an international military force led by the North Atlantic Treaty Organization (NATO) and numerous international civilian organizations. In early May 1997, we reported that the Bosnia peace operation had created and sustained an environment that allowed the peace process to move forward, but reconciliation had not yet occurred due to the intransigence of Bosnia’s political leaders. During 1997, important changes to the operation and its political environment provided additional authority in some areas and created new opportunities for accelerating the pace of implementing the Dayton Agreement’s provisions. Executive branch estimates available as of March 3, 1998, indicate that the United States will provide about $10.6 billion for military and civilian support to the Bosnia peace operation from fiscal years 1996 through 1999: about $8.6 billion in incremental costs for military-related operations and about $2 billion for the civilian sector (see table 1.1). The Department of Defense (DOD) estimates assume that the United States will maintain its current force of about 8,500 in Bosnia through June 1998 and then draw down to about 6,900 by late October 1998. According to a U.S. Army Europe document, U.S.
force levels will temporarily increase by about 2,000 troops in June as units rotate into and out of Bosnia and by up to about 1,000 troops for a 75-day period around the time of Bosnia’s September 1998 elections. The U.S. military will also have deployed another 3,750 troops in Croatia, Hungary, and Italy in support of the Bosnia operation. In fiscal year 1997, DOD incurred about $2.3 billion in incremental costs for its participation in NATO operations in Bosnia. The U.S. Army, which is deploying and logistically supporting ground troops in and around Bosnia, incurred nearly 80 percent of these costs, or about $1.77 billion. The U.S. Air Force spent about $256 million, while the Navy and Marine Corps together spent about $77 million. In addition, about $172 million was spent by the following DOD components for operations related to Bosnia: the National Imagery and Mapping Agency, the Defense Intelligence Agency, the Defense Information Systems Agency, the Defense Logistics Agency, the U.S. Special Operations Command, the Defense Health Program, and the National Security Agency. U.S. civilian agencies in fiscal year 1997 obligated about $450 million for programs designed to assist in the economic, political, and social transition taking place in Bosnia (see table 1.2). Most of this assistance, almost $250 million, was funded by USAID. The State Department provided about $190 million; the remainder was obligated by other civilian agencies, including USIA and the Departments of Justice, Commerce, and the Treasury. Appendix I provides more information on U.S. civilian programs that supported the Bosnia peace operation in fiscal year 1997. The 1992-95 war in Bosnia was part of the violent dissolution of the Socialist Federal Republic of Yugoslavia, which had been an ethnically diverse federation of six republics with almost no history of democratic governance or a capitalist economy.
The war was fought among Bosnia’s three major ethnic/religious groups—Bosniaks (Muslims), Serbs (Eastern Orthodox Christians), and Croats (Roman Catholics)—the latter two being supported directly by the republics of Serbia and Croatia, respectively. Bosnian Serb and Croat war aims were to partition Bosnia and establish ethnically pure states: Bosnian Serbs created Republika Srpska, and Bosnian Croats established Herceg-Bosna. In contrast, the Bosniaks claimed to support a unified, multiethnic Bosnia. In March 1994, U.S. mediation led to the creation of a joint Bosniak-Bosnian Croat entity—the Federation—and a cease-fire between the Bosniak and Bosnian Croat armies, which continued to fight against the Bosnian Serb army. The United Nations and other international mediators were generally unsuccessful in their attempts to stop the war until the U.S. government took the lead in negotiations during mid-1995. By October 1995, a cease-fire among all three militaries was established. In December 1995, the Dayton Agreement was signed, continuing the complex and difficult process of attempting reconciliation among the parties to the conflict. Building on the October 1995 cease-fire, representatives from Croatia, the Federal Republic of Yugoslavia, and Bosnia’s three major ethnic groups signed the Dayton Agreement in Paris on December 14, 1995. The agreement defined Bosnia and Herzegovina as consisting of the two entities that had been created during the war—Republika Srpska and the Bosniak-Croat Federation—and divided them by an interentity boundary line (see fig. 1.1). Both entities agreed to the transfer of some territory they held at the time of the cease-fire. Republika Srpska would comprise 49 percent of Bosnia (and nearly all of the Bosnian Serb-controlled areas), and the Federation would consist of 51 percent of Bosnia. The Federation territory would be made up of noncontiguous areas of Bosniak and Bosnian Croat control. 
Most areas within Bosnia, with the exception of central Bosnia, are populated and controlled by a predominant ethnic group as a result of population movements during the war. The Federation consists of 10 cantons, a level of government that would link together a number of municipalities (see table 1.3). All of the cantons are in a very early stage of development. At the time the Dayton Agreement was signed, the Bosniaks and Bosnian Croats also signed a related side agreement on the development of Federation economic and governmental institutions. Also, the U.S. government initiated a separate program to train and equip a unified Federation military. According to State Department officials, the program is intended to correct an imbalance of military power in the region and fulfill a commitment the U.S. government made to the Bosniaks in return for their approval of the Dayton Agreement. In signing the Dayton Agreement and related side agreements, political leaders of Bosnia’s three major ethnic groups pledged to provide security for the people of Bosnia; create a unified, democratic Bosnia within internationally recognized boundaries—to include surrendering indictees to the International Criminal Tribunal for the former Yugoslavia (hereafter referred to as the war crimes tribunal) at the Hague, the Netherlands; rebuild the economy; and ensure the right of people to return to their homes (see table 1.4). In response to the leaders’ request for assistance in achieving these goals, the international community established the Bosnia peace operation. While the Dayton Agreement placed responsibility for implementing the agreement on the parties, it also gave responsibility for assisting the parties in their efforts to five principal international organizations, as well as donor countries and organizations. The operation’s principal organizations, as they existed in December 1997, are depicted in figure 1.2. 
NATO-led forces—first the Implementation Force (IFOR) in December 1995 and later SFOR in December 1996—monitored and enforced implementation of the military aspects of the agreement, including separating and controlling the Bosniak, Bosnian Serb, and Bosnian Croat militaries and ensuring the demilitarization of the zone of separation, as specified by annex 1A of the Dayton Agreement. If resources were available, NATO-led forces were also expected to (1) help create secure conditions for the conduct of other Dayton Agreement tasks, such as elections; (2) assist UNHCR and other international organizations in their humanitarian missions; (3) observe and prevent interference with the movement of civilian populations, refugees, and displaced persons and respond appropriately to deliberate violence to life and person; and (4) monitor the clearing of minefields and obstacles. Although SFOR had an authorized force level of 31,000 troops, about half the size of IFOR, higher force levels were consistently maintained throughout 1997. As of November 17, 1997, SFOR had about 34,300 troops from 16 NATO and 20 non-NATO countries in Bosnia and an additional 2,500 support troops in Croatia; the United States had 8,300 troops in Bosnia, with an additional 3,400 support troops in Croatia, Hungary, and Italy. As with IFOR, the United States is the largest force provider to SFOR, and Americans hold the key NATO military positions that control the operation. On the civilian side of the operation, the Dayton Agreement created OHR and gave the High Representative many responsibilities. These included monitoring implementation of the agreement, coordinating civilian organizations, maintaining close contact with the parties, and giving the final interpretation in theater on civilian implementation of the agreement. Throughout most of 1997, the High Representative did not use his authority to enforce the parties’ compliance with the civil provisions of the Dayton Agreement. 
However, in December 1997 the Peace Implementation Council agreed to support a new, expanded interpretation of the High Representative’s mandate that allows him to resolve difficulties in implementing the agreement caused by the intransigence of Bosnia’s political leaders. UNMIBH consisted of three components, including IPTF. IPTF’s mandate was to (1) monitor, observe, and inspect the parties’ law enforcement activities and facilities; (2) advise governmental authorities on how to organize effective civilian law enforcement agencies; (3) advise and train law enforcement personnel; and (4) investigate and report on any human rights abuses by Bosnia’s police. IPTF’s mandate does not include the power of arrest. As of December 1, 1997, IPTF consisted of 2,004 unarmed, civilian police monitors from 40 countries. UNHCR’s role in the implementation of the Dayton Agreement was to work with the parties to (1) develop a repatriation plan that would allow the early, peaceful, and phased return of refugees and displaced persons and (2) foster returns of refugees and displaced persons to their homes. OSCE supported international and local efforts to promote democratization and ethnic reconciliation in Bosnia, monitored and reported on human rights, assisted with negotiation and implementation of confidence-building measures and arms control, and supervised the election process. In 1997, OSCE supervised two sets of elections: the nationwide municipal elections originally scheduled for September 1996 but postponed until September 1997, and the elections for the Republika Srpska National Assembly that were called on short notice and held in late November 1997. During 1997, important changes in the organization and political environment of the Bosnia peace operation gave the operation additional authority in some areas and provided new opportunities for supporting Bosnia’s political leaders who uphold the implementation of the Dayton Agreement. 
Specifically, (1) in April 1997 a supervisory administration with significant authority was established in the strategically important area of Brcko; (2) in May and June 1997, as well as later in the year, the international community led by the United States expressed and demonstrated a much stronger commitment—both politically and militarily—to full implementation of the Dayton Agreement’s civil provisions; and (3) in late June 1997, a division in the Bosnian Serb political leadership and the ruling Bosnian Serb political party, the Serb Democratic Party (SDS), started a process of transforming the political environment and governmental structures in Republika Srpska and in Bosnia as a whole. At Dayton, the parties were unable to agree on which of Bosnia’s ethnic groups would control the strategically important area in and around the city of Brcko. The agreement instead called for an arbitration tribunal to decide this issue by December 14, 1996. At the end of the war, Brcko was controlled by Bosnian Serb political leaders and populated predominately by Serbs due to “ethnic cleansing” of the substantial prewar Muslim and Croat population, who had then accounted for 63 percent of the city’s population, and resettlement of Serb refugees there. Western observers in Bosnia told us that an arbitration decision that awarded control of the area to either the Bosniaks or Bosnian Serbs would lead to civil unrest and would possibly restart the conflict because the location of Brcko made it vitally important to both parties’ respective interests. After granting a request for a 2-month extension, the arbitration tribunal issued a statement on February 14, 1997. This statement essentially postponed the hard decision and called for the international community to designate a supervisor under the auspices of OHR, who would establish an interim supervisory administration for the Brcko area. 
This supervisory organization would be authorized to oversee the implementation of the civil provisions of the Dayton Agreement in the Brcko area; specifically, to allow former Brcko residents to return to their homes, to provide freedom of movement and other human rights throughout the area, to give proper police protection to all citizens, to encourage economic revitalization, and to lay the foundation for local representative democratic government. On March 7, 1997, the Peace Implementation Council Steering Board announced that the High Representative had appointed a U.S. official as Brcko Supervisor. On March 31, 1997, the U.N. Security Council authorized an increase in the strength of UNMIBH’s IPTF by 186 police monitors and 11 civilian personnel to promote respect for freedom of movement and to facilitate the orderly and phased return of refugees in the Brcko area. The Brcko Supervisor established his office, which was to operate for at least 1 year, on April 11, 1997. On March 15, 1998, the Brcko arbitrator announced that the decision on the status of Brcko would be postponed for another 6 to 12 months. As described in the arbitration statement and a Peace Implementation Council document, the Brcko Supervisor had more authority over this area of operations than the High Representative had in Bosnia at that time. The arbitration tribunal’s decision gave the Supervisor authority to issue binding regulations and orders to (1) assist in implementing the Dayton Agreement in the Brcko area and (2) strengthen the area’s local, multiethnic democratic institutions. These regulations and orders would prevail over existing laws in the area if a conflict existed. Further, in reaffirming the right of persons to return to their homes of origin, the Peace Implementation Council said that any new influx of refugees or displaced persons should occur only with the consent of the Supervisor in consultation with UNHCR.
Neither document, however, described how the Supervisor would enforce his regulations, orders, or decisions if the parties did not choose to comply. In the spring of 1997, the United States conducted a major review of U.S. policy in Bosnia, an effort that helped reinvigorate the peace process by demonstrating renewed U.S. commitment to implementing the Dayton Agreement. Following the policy review, the Steering Board of the Peace Implementation Council articulated and SFOR demonstrated the international community’s commitment to achieving Dayton’s goals. On May 30, 1997, following a meeting in Sintra, Portugal, the council’s Steering Board supported the more vigorous U.S. approach, issuing a statement, known as the Sintra Declaration, that confirmed the Steering Board’s long-term commitment to the peace process in Bosnia and reaffirmed that the international community would not tolerate a resumption of hostilities by anyone in the country in the future; emphasized that Bosnia and Herzegovina will remain a united and sovereign country, consisting of two multiethnic entities, and that the international community will not tolerate any attempts at ethnic partition, in fact or in law, by anyone; demanded that Bosnia’s political leaders and national and entity governments significantly accelerate their work toward implementing the Dayton Agreement; set specific, near-term dates by which Bosnia’s political leaders and government institutions would have to accomplish specific tasks, such as pass citizenship and passport laws, that would link the country’s ethnic groups and their separate areas of control; and, in some cases, described diplomatic consequences if the parties did not accomplish the tasks by the specified date; acknowledged the High Representative’s authority to regulate Bosnia’s media, specifically to curtail or suspend any media network or program whose output is in persistent and blatant contravention of either the spirit or letter of the Dayton Agreement; 
and reemphasized that providing economic assistance to Bosnia would be conditioned at the municipal level on the parties’ complying with the Dayton Agreement, particularly those provisions dealing with surrendering indictees to the war crimes tribunal and accepting the peaceful return of refugees and displaced persons to their prewar homes. Beginning in mid-1997, SFOR began to more actively support implementation of the civilian aspects of the peace operation. For example, SFOR began to provide general and local security for people returning to their prewar homes across ethnic lines in June/July 1997; defined and in August 1997 began to control special police as paramilitary units under annex 1A of the Dayton Agreement, as a step toward either disbanding and disarming them and/or bringing them under the IPTF restructuring program for civilian police; and supported the High Representative’s attempts to curtail media that blatantly and persistently violated the Dayton Agreement by taking control of five television transmitters in Republika Srpska during October 1997. On December 10, 1997, the Peace Implementation Council reiterated the international commitment to implement fully the Dayton Agreement. The council’s conclusions, based on its interpretation of the Dayton Agreement, also stated that the High Representative could make binding decisions on (1) the timing and location of meetings and the chairmanship of Bosnia’s common governmental institutions; (2) interim measures that would take effect when parties are unable to reach agreement and would remain in force until Bosnia’s collective Presidency or Council of Ministers had adopted a decision consistent with the Dayton Agreement on the issue concerned; and (3) other measures to ensure implementation of the Dayton Agreement throughout Bosnia and its entities, as well as the smooth running of common institutions. 
Such measures may include actions against persons holding public office or officials who are absent from meetings without good cause or who are found by the High Representative to be in violation of legal commitments made under the agreement or the terms for its implementation. As of mid-1997, Bosnian Serb political leaders had not started to implement key areas of the Dayton Agreement. This was in large part due to Radovan Karadzic’s blocking of attempts by more moderate Bosnian Serb political leaders to work with the international community in efforts that would link Bosnia’s ethnic groups politically or economically. Karadzic is a war crimes indictee and the unifying force of the then-ruling political party in Republika Srpska, the SDS. Because of Karadzic’s intransigence, the international community gave very little economic assistance to Republika Srpska in 1996 and 1997. On June 27, 1997, the President of Republika Srpska, President Plavsic, announced that she had fired the Republika Srpska Minister of Interior. According to an OHR report, Plavsic fired the Minister because he had attempted to remove police officers and units involved in compiling a special report on illegal trade and other economic activities in Republika Srpska. This action was the first visible sign of a political division between President Plavsic, whose political base is in Banja Luka, and Karadzic and his hard-line SDS supporters, whose political base is in Pale. By the end of October 1997, the political struggle in Republika Srpska had resulted in (1) Plavsic being expelled from the SDS and gaining control of civilian police in three of nine public security centers in Republika Srpska (see fig.
1.3); (2) Karadzic and the SDS losing control of the transmitters of Serb Radio and Television (SRT) television, the primary Bosnian Serb media outlet; and (3) Plavsic disbanding the Republika Srpska National Assembly and calling elections for a new assembly, which were held on November 22 and 23, 1997, and resulted in the formation of a new, more moderate Republika Srpska government based in Banja Luka. Appendix II provides information on key events in the Republika Srpska political crisis through January 31, 1998. Many observers told us that President Plavsic is an ardent Serb nationalist who maintains a long-term goal of a separate Serb state. However, she has allowed more open political expression in Republika Srpska and, unlike Karadzic and the SDS, is willing to work with the international community to implement at least some civilian measures called for in the Dayton Agreement, including those that would link the ethnic groups politically and economically. President Plavsic would do so, according to these observers, because (1) she sees the growing economic gap between the Federation and Republika Srpska and realizes that to obtain economic aid she must cooperate with the international community and (2) she intends to build a Serbian state based on democracy and the rule of law rather than on the corruption of the hard-line SDS. According to one observer, Plavsic has not repudiated all of her former beliefs; however, her actions indicate that her views appeared to have evolved in a more pro-Dayton direction. By the end of 1997, the political division of Republika Srpska had affected the operating environment of all aspects of the peace operation. The evolving political situation that followed the initial split provided the international community with many opportunities to encourage and/or force further implementation of the Dayton Agreement. 
Many specific events in the crisis required SFOR intervention to prevent or respond to violent situations, such as when pro-Plavsic police unsuccessfully attempted to take over Pale-controlled police facilities in Doboj and Brcko. At the request of the Chairman, Senate Committee on Foreign Relations, we reviewed the implementation of the Bosnia peace operation. Our specific objectives were to determine what progress had been made in achieving the operation’s objectives since mid-1997. To do so, we focused on the operation’s four key goals, which are to create conditions that allow Bosnia’s political leaders to (1) provide a secure environment for the people of Bosnia; (2) create a unified, democratic country, to include the surrendering of indictees to the war crimes tribunal; (3) ensure the rights of people to return to their prewar homes; and (4) rebuild the economy. In addition, we reviewed the progress of the program designed to train and equip the Bosniak and Bosnian Croat militaries as they integrate into a unified Federation military. To determine progress, we made field visits to Bosnia in June and October 1997 and February 1998. We reported on the results of our June visit in testimony to the Committee in July 1997. During our field visits, we did audit work in Sarajevo, Tuzla, Brcko, Banja Luka, Pale, Mostar, Stolac, Travnik, Jajce, Busovaca, Konjic, Zenica, Sanski Most, Prijedor, Doboj, Trebinje, and numerous villages throughout Bosnia. While in Bosnia, we interviewed officials from the U.S. embassy; USAID; USIA; the headquarters of SFOR and two of its multinational division headquarters; OHR; UNMIBH, including IPTF, U.N. Civil Affairs, and the Mine Action Center; the World Bank; UNHCR; OSCE; government officials; opposition party members; Bosnian displaced persons, many of whom had returned to their homes in areas controlled by another ethnic group; and numerous nongovernmental organizations. 
We also interviewed officials from (1) the Departments of State, Defense, and the Treasury; USAID; USIA; and the Central Intelligence Agency in Washington, D.C.; (2) the U.S. European Command and U.S. Army Europe in Germany; (3) the U.S. mission to NATO, NATO international staff, SHAPE, and the European Commission in Belgium; (4) OSCE and the U.S. mission to the OSCE in Vienna, Austria; and (5) the U.S. embassy and U.N. Liaison Office in Zagreb, Croatia. Also, to assess progress toward achieving the operation’s objectives and in implementing the train and equip program, we compared conditions in Bosnia with the goals laid out in Dayton and related agreements. We analyzed numerous situation reports and other documents from U.S. agencies, NATO, SFOR, OHR, OSCE, IPTF, UNHCR, and other organizations. We also interviewed many observers of the situation in Bosnia to expand upon or clarify information contained in the documents. Further, we relied on results of a joint GAO-Congressional Research Service (CRS) seminar for Congress on “Bosnia: U.S. Options After June 1998,” which was held on November 6, 1997. We did not (1) verify the accuracy and completeness of the cost information DOD or civilian agencies provided to us; (2) evaluate the methodology of USIA polls or other surveys or polls used in this report; or (3) assess the reliability or methodology of USAID, OHR, or World Bank audit reports. According to USIA officials, USIA analyses are based on responses from people belonging to the principal ethnic group in each of the following sampling areas: Republika Srpska; predominately Croatian regions of Bosnia; and predominately Muslim areas of Bosnia.
Nineteen times out of 20, results from samples of similar size to USIA samples will differ by no more than 4 percentage points in either direction from what would be found if it were possible to interview every Bosnian Serb in Republika Srpska, every Bosnian Muslim in Muslim-dominated areas of the country, and every Bosnian Croat in Croat-dominated areas of the country. Because of this sampling methodology, USIA cautions against using its poll results to develop data on attitudes of Bosnia’s total population. Despite these limitations to USIA samples, we believe the USIA data have sufficient geographic coverage to provide an adequate approximation of the attitudes of each of Bosnia’s three major ethnic groups countrywide. We conducted our work from June 1997 through May 1998 in accordance with generally accepted government auditing standards. Our information on foreign law was obtained from interviews and secondary sources, rather than independent review and analysis. To promote a permanent reconciliation between all parties, the Dayton Agreement sought to establish “lasting security” based on a durable cessation of hostilities, civilian police that operate in accordance with democratic policing standards, and a stable military balance in the region. Under heavy international pressure, considerable progress has been made toward achieving the goal of a secure environment, but much remains to be accomplished, particularly in the area of developing democratic civilian police forces. The overall security situation improved somewhat during 1997, but remains very volatile. SFOR has continued to ensure the cease-fire by monitoring and controlling Bosnia’s three militaries and in August 1997 started to control Bosnia’s special police units as military forces. Significant early steps were taken in 1997 in certifying, training, and ethnically integrating Bosnia’s civilian police forces in the Federation and in starting the certification process in Republika Srpska. 
However, according to U.N. officials, the police remained the primary violator of human rights in Bosnia and often failed to provide security for people of other ethnic groups. Also, by the end of 1997, the parties to the Dayton Agreement largely complied with arms control measures designed to achieve a regional military balance. The U.S.-led international program to train, equip, and integrate the Bosniak and Bosnian Croat militaries into a unified Federation military also made significant progress. According to data from the SFOR Assessment Cell, an operation analysis unit at SFOR headquarters, the overall security situation improved in Bosnia during 1997, but threats to stability increased during the first few months of 1998 (see fig. 2.1). The cell’s data—which include incidents related to freedom of movement, ethnic conflicts, and police abuse—show that threats decreased at an average monthly rate of 1.5 percent during 1997. However, the data also show substantial volatility throughout the year and during early 1998. For example, the number of incidents increased by 123 percent between April and May 1997, decreased by 45 percent between September and November 1997, and then increased again by about 140 percent from January through March 1998. According to an assessment cell report, these threat trends on a general level reflect the cycle of violence that occurred during Bosnia’s war, with declines in intensity in the spring for planting and late summer for harvest, and in early winter when movement is more difficult. Further, tensions related to returns of refugees and displaced people contributed to increases from May through December 1997, as well as in early 1998. 
While the number of incidents in January and February 1998 was much lower than during the same months a year earlier, the number of incidents during March and April 1998 was higher than the prior year, primarily due to an increase in (1) ethnic incidents, particularly in the Federation, as people crossed ethnic lines to visit or return to their prewar homes and (2) police abuse incidents associated with illegal police checkpoints. In 1997, SFOR continued to contain the three militaries in Bosnia and started the process of bringing special police units under SFOR control. SFOR officials and NATO documents state that during 1997 Bosnia’s political leaders generally complied with most military provisions of the Dayton Agreement, but their militaries continually tested SFOR’s reactions to minor violations of annex 1A of the Dayton Agreement. Under SFOR supervision, the three militaries continued to observe the October 1995 cease-fire; kept their forces separated; and demobilized additional troops, bringing their combined strength down to 55,500 soldiers by October 1997. SFOR enforced compliance with the military provisions of the Dayton Agreement by continually patrolling throughout the country, including in the zone of separation; routinely monitoring and inspecting SFOR-approved military storage sites and installations; and monitoring SFOR-approved military training and movement activities. Further, according to a DOD report, the three military forces surpassed SFOR’s requirement that they reduce their military cantonment sites by 25 percent during 1997. They reduced the number of sites by about 29 percent—from 770 sites to 545 sites—by December 1, 1997, and further lowered the number to 534 by January 1998. Minor violations and weapons inventory discrepancies by the three militaries led SFOR to confiscate and destroy about 10,000 small arms and some heavy weapons in 1997. 
Moreover, according to NATO documents, SFOR also imposed numerous training and movement bans on the three militaries throughout the year for violations such as failing to meet demining requirements, inaccurately reporting troop movement and training activities, and infringing radar and missile restrictions. Because the fighting has not resumed, the operation’s civilian organizations have been able to continue their work and the people of Bosnia have been able to proceed with the long process of political and social reconciliation. On December 10, 1997, the Peace Implementation Council stated that the presence of NATO-led forces has been the greatest single contributor to subregional security since the signing of the agreement and will continue to be so in the short to medium term. On August 7, 1997, the SFOR Commander notified the entity Presidents that special police units in Bosnia would henceforth be controlled by SFOR as military forces under annex 1A of the Dayton Agreement. The agreement had defined Ministry of Internal Affairs special police as organizations with military capability and thus subject to Dayton’s military provisions. The new SFOR policy was to apply to special police not duly certified and monitored as civilian police under the IPTF police restructuring program. The policy was designed to help accelerate and ensure police restructuring and reform, particularly in Republika Srpska. The SFOR Commander also issued supplementary instructions to the parties on August 15, 1997. These instructions laid out the procedures to be followed while the special police are subject to SFOR control before IPTF certifies them as civilian police. NATO documents show that special police in the Federation were generally in compliance with SFOR requirements as of mid-October 1997. 
However, Republika Srpska special police, specifically some units of the Police Anti-Terrorist Brigade, had failed to comply despite SFOR training and movement bans on all Republika Srpska special police units that were not in compliance with the supplementary instructions. As of November 12, 1997, the two outstanding issues were (1) the failure of five special police units to provide monthly duty rosters and of one of these units to submit its personnel list to SFOR and (2) the failure of the Bosnian Serb member of Bosnia’s collective Presidency, Momcilo Krajisnik of the SDS, to personally respond and explain to the SFOR Commander the role of special police in events that took place in Banja Luka in early September 1997. Because of these problems, special police remained subject to a training and movement ban and continued to be closely monitored by SFOR. On November 10, 1997, SFOR seized control of the special police unit in Doboj, in response to special police actions in Banja Luka in early September 1997 and the subsequent failure of Krajisnik to adequately explain them. Specifically, SFOR confiscated weapons, vehicles, communications equipment, and files from the unit and decertified the officers assigned there. On November 20, 1997, SFOR and IPTF officials reached an agreement with Republika Srpska representatives on the future role of special police as they become part of the civilian police structure. Once certified as civilian police, some units (about 850 officers) will be allowed to assume IPTF-approved tasks related to counterterrorism, border control, organized crime prevention, protection of important people, and crowd control. As of February 8, 1998, according to an IPTF memo, 1,321 special police officers in Banja Luka (106), Doboj (960), and Bijeljina (255) had started the initial steps of IPTF’s civilian police certification process. 
By that time, according to a NATO document, SFOR had all Republika Srpska special police under control and surveillance, with SFOR liaison officers attached to each unit; however, not all units were in full compliance yet with SFOR’s instructions of August 1997. During 1997, under intense international pressure, Bosniak, Bosnian Croat, and Bosnian Serb political leaders began taking important, early steps in developing police forces that meet IPTF’s standards for democratic policing. The Bosniaks and Bosnian Croats began patrolling together in every municipality of two ethnically mixed cantons in the Federation; both President Plavsic and SDS hard liners in Pale allowed their police forces to start the IPTF police restructuring process late in the year, after almost 2 years of refusing to cooperate with the IPTF; and in Brcko, the Supervisor began the process of establishing a multiethnic, democratic civilian police force for Brcko municipality. The progress was often slow and halting, however, and police continued to be the primary violator of human rights in Bosnia. The program to train and equip Bosnia’s police forces, an integral part of the IPTF police restructuring program, was strongly supported by the United States but received limited financial support from other donors. By the end of 1997, IPTF was implementing three distinct police restructuring efforts in Bosnia, specifically, (1) in the Federation for Bosniak and Bosnian Croat police forces at the canton and Federation levels; (2) in Republika Srpska, starting with the entity’s public security centers, three of which were controlled by President Plavsic in Banja Luka and six of which were controlled by SDS hard-liners in Pale; and (3) in the Brcko area of supervision, Republika Srpska, under the authority of the Brcko Supervisor. 
Each police restructuring effort consisted of certifying, training, reducing, and integrating police forces, as well as revising police standards and procedures so that they are in accordance with democratic policing standards. Tables 2.1 and 2.2 provide information on progress made in these areas in 1997 and early 1998. According to a State Department official, although Bosnian Serb political leaders in Pale consented to police restructuring in September 1997, they had not consistently followed through on their commitments; most of the police who had been provisionally certified by the IPTF were in areas controlled by President Plavsic. IPTF’s efforts to integrate Bosnia’s police forces are viewed by many observers in Bosnia as critically important for building confidence among people who have crossed or will cross ethnic lines to return home and will have to rely on their local police to provide security for them. The three police restructuring efforts in 1997 had different standards for ethnically integrating police forces and made different amounts of progress toward their goals: The integration of Bosniak and Bosnian Croat police in the Federation had made important progress by the end of 1997; the creation of a multiethnic police force in Brcko started very late in the year; and the integration process had not yet started in other areas of Republika Srpska (see table 2.2). The joint patrolling by Bosniak and Bosnian Croat police forces was viewed as a positive development by human rights and other observers. During our October 1997 visit to Bosnia, these patrols were just getting underway in many areas of the ethnically mixed cantons of Neretva and Central Bosnia. At that time, a senior OSCE human rights observer told us that joint Bosniak and Bosnian Croat police patrols had resulted in a decline in human rights abuses in areas where they were occurring. By early December 1997, according to a U.N. 
report, joint Bosniak and Bosnian Croat patrols were taking place in every municipality in Neretva and Central Bosnia cantons. However, by mid-March 1998, some municipalities in these cantons had reverted to a pattern of police patrols consisting solely of officers from the dominant ethnic group. Despite these positive developments, State Department and IPTF officials described the progress in integrating Federation police forces as frustrating, halting, and incremental, noting many problems. For example, police deployed to areas controlled by another ethnic group at times had been harassed, intimidated, and threatened, and some had requested IPTF or SFOR protection. Further, in early February 1998, according to a State Department official, IPTF and OHR canceled the inauguration of a restructured police force in a Croat-controlled canton when they discovered that only Bosnian Croat flags were to be displayed, and no Bosniak officials or police were to be present. This canton is particularly resistant to implementing reform or integrating, given its proximity and ties to the Republic of Croatia. State officials said that political leaders are the cause of problems in integrating Bosniak and Bosnian Croat police in the Federation: the political will to allow or encourage integration is not coming from Bosniak and Bosnian Croat leaders. Many police forces in the Federation face a serious shortage of police officers because they cannot fill positions allocated for Serbs or other ethnic groups, despite offers of housing assistance and other incentives to attract police from those groups. For example, Neretva Canton had filled only 3 of the 260 slots allotted to Bosnian Serb police as of mid-October 1997. According to a Police Chief in the canton, the ability of his force to protect public safety will remain seriously compromised until his station reaches full strength. 
Further, Bosnia’s three ethnically based police forces, which continue to be controlled by their respective political leaders, often did little to provide personal security and uphold human rights of citizens outside their respective ethnic groups. Instead, most human rights violations—by some estimates as high as 50 to 70 percent, according to a senior U.N. official—have been committed by police. Police forces in many instances during 1997 did not act to protect people of other ethnic groups who still lived in their jurisdictions or who wished to travel or return to their homes across ethnic lines. According to a State Department official, some police have protected the rights of all citizens regardless of ethnicity; however, the development of democratic police in Bosnia will require a change in Bosnia’s political leadership and the control it still wields over the police. Further, many observers told us that this will also require a new generation of police leaders trained in democratic policing. These observers stated that Bosnia’s current generation of police leaders—including those installed by President Plavsic—had been trained to serve an authoritarian state rather than the people of Bosnia. The Federation started the process of developing a new generation of professional officers trained in accordance with democratic standards when it opened its new police academy in December 1997. The first class of 100 officers includes 58 Bosnian Croats, 20 Bosniaks, and 22 “Serb or other” students. According to a State Department official, IPTF originally estimated that it would cost about $110 million to provide training and equipment for Bosnia’s civilian police as they participate in IPTF’s police restructuring program: $60 million for the Federation and $50 million for Republika Srpska. The United States has pledged about $30 million in fiscal years 1996 and 1997 and requested an additional $15 million in fiscal year 1998. 
The State Department spent $6.2 million to support efforts to train and equip Federation police in fiscal year 1996 and obligated or planned to obligate $17.4 million to support similar efforts for Federation and Republika Srpska police in fiscal year 1997. The vast majority of the funds were to be used for the Federation, as Bosnian Serb political leaders did not agree to restructure their police forces until late in the year. Most of the U.S. police training money in both entities was used to fund programs administered by the Department of Justice’s International Criminal Investigative Training Assistance Program, including the IPTF’s human dignity and basic skills (transition) training courses for thousands of Bosnian police officers (see table 2.1). The program also (1) helped to establish a model police station in Sarajevo—one is planned for each canton and five are now operational—to demonstrate how police stations in a democratic country should function, (2) provided training and instructor and curriculum development for the reformed Federation police academy, and (3) continued forensics and executive development training. The United States also spent about $2.3 million to provide uniforms and 12,000 pairs of handcuffs for the Federation police. Further, the State Department obligated about $1 million to the Department of Justice to support similar training programs for Republika Srpska police in Brcko. According to State Department officials, other countries until recently had not pledged or made major contributions because they disagree with the United States on how to approach police restructuring. They believe that IPTF should be handling all aspects of police restructuring—monitoring, reorganizing, and training—on its own. The U.S. 
government, however, believes that even with the new IPTF focus on recruiting trainers and playing a more active role in training, the IPTF by itself does not have the training and equipment required for effective restructuring of the Federation police. Until October 1997, other donors had pledged about $4.2 million and actually contributed $2.8 million to the U.N. Trust Fund for police reform, according to State sources. Beginning in late 1997, according to a State Department document, the European Union and other countries did pledge additional funds for police assistance, bringing the total amount promised up to $23.3 million, although the total amount actually contributed to the U.N. Trust Fund was still less than $3 million as of the end of March 1998. According to State Department officials, a shortage of funding for the program has resulted in delays in providing temporarily certified police with professional training required for full certification. For example, lack of funds delayed the opening of the Federation police academy from September 1 to mid-December 1997, thereby postponing the introduction of the IPTF’s planned 6-month recruit training course. The academy needed an estimated $3 million to $5 million in renovations. The international community recognizes that in order to ensure public security in Bosnia, police reform must be accompanied by reform of Bosnia’s judicial system, an effort that USAID officials acknowledge will be a massive undertaking for the international community. Large-scale efforts to reform the judiciary have not yet gotten underway, though some donors, including USAID, are funding limited judicial reform efforts. According to a USAID judicial reform grantee, the international community has not yet started to address problems of the court systems at many levels of government; they remain undemocratic and corrupt instruments of government control from the prewar Communist era. 
The judiciary in all entities, according to a State Department human rights report, remains subject to coercive influence and intimidation by the authorities or dominant political parties, and close ties exist between courts of law and the ruling parties in many areas. A third key element of providing a secure environment in Bosnia is to create a stable military balance in the region. The United States believes that there are two primary factors in achieving a stable military balance: (1) the arms control provisions of the Dayton Agreement and (2) the U.S.-led program to train and equip the Bosniak and Bosnian Croat militaries as they integrate into a unified Federation military. In 1997, the international community and political leaders of Bosnia’s three major ethnic groups continued to implement two of the three arms verification and control agreements called for by annex 1B, articles II, IV, and V, of the Dayton Agreement, although they did so only with strong international pressure and support. The negotiations for the article V regional arms control agreement had not yet begun as of late April 1998. The article II agreement was signed on January 26, 1996, by political leaders of Bosnia’s three major ethnic groups and called for measures to enhance mutual confidence and reduce the risk of conflict. To assist in this process, OSCE established a regional arms control monitoring mission in Sarajevo to oversee article II implementation. The political leaders of the three major ethnic groups have generally fulfilled the objectives of the article II agreement, although they required heavy OSCE pressure to do so. Specifically, they (1) declared their holdings of heavy weapons; (2) completed scheduled inspections of those holdings under OSCE auspices; and (3) exchanged information and military liaisons, established other communications links, and participated in joint visits and seminars. While U.S. 
and OSCE officials stated that they were generally satisfied with the degree of compliance demonstrated by the parties in 1997, they also said that military liaison missions were meeting twice monthly only under OSCE pressure. They also noted that the parties were not using the defense ministry “hot lines” that had been established. Because of these problems, U.S. and OSCE officials believe that the parties cannot continue the article II process in 1998 without significant international involvement. According to these officials, OSCE will review the need for the continued presence of its arms control mission in Bosnia in June 1998. The second agreement, the article IV subregional arms control agreement of June 1996, was signed by political leaders of Bosnia’s three major ethnic groups as well as Croatia and the Federal Republic of Yugoslavia, to reduce arms and military forces to balanced and stable levels. These parties made substantial progress during 1997 in implementing the outstanding provisions of the article IV agreement. Specifically, the parties (1) completed an additional round of scheduled inspections (beyond those completed in 1996) of all five parties’ declared heavy weapons holdings; (2) remained under the voluntary manpower limits that they established in 1996; (3) periodically updated their heavy weapons declarations; and (4) met the October 31, 1997, deadline for reducing their declared surpluses of heavy weapons. Altogether, the five militaries destroyed or disposed of nearly 6,600 surplus heavy weapons—about 40 percent of their combined heavy weapons holdings—by that date. Thus, at the end of 1997, the parties were below the heavy weapons ceilings established by the article IV agreement (see fig. 2.2). Bosnian Serb political leaders, who had largely failed to comply with the December 1996 interim reduction target, fully met the final target date. U.S. 
officials attributed the greater compliance of Bosnian Serbs to (1) SFOR’s restrictions on the Bosnian Serb military’s movement and training as a means of forcing compliance, (2) Bosnian Serb budget and manpower constraints that do not allow them to maintain weapons, and (3) SFOR assistance in transporting weapons to their reduction sites. According to OSCE and State officials, OSCE will remain substantially involved in the article IV inspection processes and will use them to push the parties to report more fully all heavy weapons holdings. For example, according to a State Department official, OSCE will ask the parties to classify several hundred mortars currently excluded from article IV as subject to its heavy weapons limits. As of late April 1998, negotiations had not yet begun on the third agreement, called for by annex 1B, article V, to establish a regional arms control balance in and around the former Yugoslavia. The Dayton Agreement placed no time limit on these negotiations, nor did it define the geographic area subject to this agreement. According to a State Department official, OSCE did select a Special Representative at its December 1997 meeting in Copenhagen. The Special Representative is expected to begin consultations in the spring of 1998 to set the scope and objectives for article V, under which negotiations can later begin. The U.S.-led international program to equip, train, and integrate the Bosniak and Bosnian Croat militaries into a unified Federation military remains a key element of the U.S. effort to establish a stable military balance in the region and sustain a secure environment in Bosnia. 
The program made significant progress in equipping, training, and establishing integrated structures for the Federation Army in 1997, but the Bosniak and Bosnian Croat militaries still maintain separate chains of command, the troops will require years of additional training and sustainment support, and the force is not projected to have a fully integrated defensive and deterrence capability until beyond the year 2000. As of April 1998, the total pledges and contributions to the train and equip program were about $389 million, including $109.1 million from the United States, with a total of 14 countries pledging cash, equipment, training, or other support. For example, foreign donors provided in full the $147 million in cash they pledged in 1996 plus an additional $5 million contributed in 1998; the majority of the donated or purchased military equipment has been delivered to the Federation (see fig. 2.3); and Bosniak and Bosnian Croat soldiers are or will be trained in Germany, Turkey, Egypt, Malaysia, Bangladesh, Qatar, and the United Arab Emirates, while American, Jordanian, and Indonesian trainers have instructed Bosniak and Bosnian Croat soldiers in Bosnia. In addition, the Bosniak military has used donor funds to purchase multiple-launch rocket systems and 532 trucks and trailers; moreover, it started producing artillery, helmets, and small arms ammunition in state-owned factories. See appendix III for additional details on the status of the train and equip program. The U.S. firm contracted by the Federation to train and integrate the Bosniak and Bosnian Croat militaries—MPRI—largely met the objectives of the first phase of its 2-year contract, which is valued at about $80 million. 
According to State Department and contractor officials, phase I of the contract—which ended in September 1997—achieved the following: The integrated Ministry of Defense, the Joint Military Command, and the joint logistics and training commands that the contractor helped establish and train are now at least partially staffed and beginning to function. As of October 1997, the new joint logistics command was starting to distribute the small arms and some types of equipment donated by the United States. The contractor has completed “train the trainer” courses in small unit tactics for 9 of the 15 Bosniak and Bosnian Croat brigades using U.S.-supplied light weapons. The Federation Army School, which was established by the contractor in October 1996, trained about 1,900 Bosniak and Bosnian Croat officers and noncommissioned officers in its first year. The school’s leadership and technical training ranged from basic noncommissioned officer classes up to brigade and battalion commander and staff courses. The Federation Army combat simulation center near Hadzici opened in January 1997 and has provided brigade and battalion staff training for Bosniak and Bosnian Croat commanders and staff. In keeping with the Federation Army’s defensive strategy, the training emphasizes defensive warfare. The contract was extended for an additional year on September 6, 1997, according to State Department officials. During this phase, the contractor intends to help the Ministry of Defense and Joint Military Command become fully operational, continue to provide individual and unit training, and give instruction in the use of U.S.-donated weapons. The Federation Army School plans to provide training for approximately 1,500 officers and noncommissioned officers in its second year. As of the beginning of May 1998, the contractor had completed training 1,823 Federation Army personnel in the operation and maintenance of the U.S.-provided tanks and armored personnel carriers. 
The new joint logistics command had also started to distribute the small arms and equipment donated by the United States, was planning to distribute weapons donated by four other countries, and was maintaining control over the ammunition. The Federation will need additional financial and material resources to complete and sustain its new force structure, according to State officials, because the $152 million in cash donations and $100 million in U.S. drawdown authority is fully committed to existing program requirements. The Federation will also need assistance in maintaining the heavy weapons donated by the United Arab Emirates, Egypt, Qatar, and Turkey. Further, according to contractor personnel, Federation commanders and staff will require 2 or 3 years before these staffs are fully trained in the tactical doctrine being taught at the simulation center. Maintenance personnel will need 3 or 4 years’ additional training before they will be able to instruct other personnel on the maintenance of the U.S.-provided heavy weapons. As of May 1998, Bosnian Serb leaders had not agreed to participate in the military train and equip program under conditions imposed by the United States. Specifically, according to State Department officials, the Bosnian Serb political leaders and military would have to (1) begin to work toward establishing common national defense institutions for Bosnia; (2) end their deep and extensive military relationship with Serbia; and (3) comply with all areas of the Dayton Agreement, including arresting people indicted for war crimes, guaranteeing freedom of movement, and following through on arms control agreements. 
A senior State Department official acknowledged that Bosnian Croats and Bosniaks have not fully complied with the agreement, but said that they have complied to a far greater degree than have the Bosnian Serbs on such issues as surrendering indictees to the war crimes tribunal, allowing freedom of movement, permitting the return of refugees, and accepting other key elements of Dayton. A second principal objective of the Dayton Agreement was to establish Bosnia as a unified, democratic country that would uphold the rule of law and adhere to international standards of human rights. Some progress was made in 1997 and early 1998 in establishing the institutions, laws, and practices of a unified, democratic Bosnia at all levels; the human rights situation improved considerably; ethnic intolerance eased slightly; and the international community’s efforts to promote democratic governance and practices showed early results. Despite the progress made, the country remained a long way from achieving the overall objective: Most multiethnic institutions at all levels of government were largely not functioning or were functioning only as a result of heavy international involvement, the vast majority of Bosnian Serbs and Croats and their political leaders still wanted to be separate from Bosnia, and the human rights situation remained poor and ethnic intolerance strong. Ethnic intolerance and human rights remain particularly volatile, as reflected in the increased number of incidents in these areas from January through April 1998. Under intense international pressure, some progress was made during 1997 and early 1998 in developing governmental institutions and the legal framework for politically linking Bosnia’s three ethnic groups at the national, entity, and municipal levels, as well as in the area of the Brcko supervisory regime. 
However, the intransigence of political leaders of Bosnia’s three major ethnic groups—particularly the hard-line SDS leadership in Pale—continued to block the effective functioning of Bosnia’s national institutions. This situation required the High Representative to use his authority to break political impasses in the development of national symbols and laws. Further, as of May 1998, the new, relatively moderate government in Republika Srpska was still in the process of consolidating the political, security, and financial institutions and resources that would allow it to live up to its pledges of implementing the Dayton Agreement; real power in the Federation remained in separate Bosniak- and Croat-controlled structures; 133 of 136 municipal governments elected in September 1997 had formed but only with strong international involvement; and Brcko’s multiethnic institutions were established and functioning only because of the intense international supervision and pressure. Since the September 1996 election of Bosnia’s multiethnic, collective Presidency and Parliamentary Assembly, elected Bosnian officials from all three ethnic groups have begun to build a national government. Although all key national institutions were established by the summer of 1997, they generally have not functioned as intended, in large part because hard-line SDS political leaders within these institutions impeded their effective operations. In October 1997, the High Representative noted that the internal crisis in Republika Srpska and the regular absence of SDS members of these institutions substantially hampered their work and constituted a major impediment to implementing the Dayton Agreement. 
By early December 1997, the problems of the non-functioning national institutions led the High Representative to request, and the Peace Implementation Council to approve, an interpretation of his Dayton authority that allowed him to regulate the functioning of national institutions and to impose interim measures when the parties are unable to reach agreement. Table 3.1 shows a list of national institutions and their status as of May 1998. Because these institutions have largely not functioned as intended, during most of 1997 the political leaders of the three ethnic groups reached agreement on few laws and symbols that would link them politically. In late 1997 and early 1998, the High Representative responded to the political intransigence by ordering the implementation of legislation after Bosnian authorities failed to pass the required legislation on time (see table 3.2). As of May 19, 1998, the High Representative had not exercised his authority that allows him to remove obstructionist elected officials from office at the national level. The election for the Republika Srpska National Assembly, or parliament, on November 22 and 23, 1997, resulted in the SDS losing control of the parliament and in the formation of a more moderate entity-level government. This government is headed by a Prime Minister—Milorad Dodik—who publicly declared support for full implementation of the Dayton Agreement. As of May 1998, the new government’s control of the political, security, and financial apparatus in Republika Srpska was not yet complete, and its plans and pledges to support Dayton not yet implemented. In the November 1997 elections, the SDS lost its majority in the parliament, dropping from 45 (of 83) seats to 24 seats and from 52 percent of the vote to 27 percent. Even when in coalition with another hard-line party, the Serb Radical Party, the SDS could no longer control the assembly (see fig. 3.1). 
President Plavsic’s new political party, the Serb People’s Union (SNS), was the biggest beneficiary of changes in the parliament, winning 15 seats and 16 percent of the vote. Another Serb opposition party, the Socialist Party of Republika Srpska (which has strong ties to President Milosevic of the Federal Republic of Yugoslavia), won nine seats in the parliament. The Coalition for a Unified and Democratic Bosnia—led by the ruling Bosniak Party of Democratic Action (SDA)—won the same number of seats, 16, as in 1996, although its total percentage of the vote declined from about 19 percent to about 17 percent. In figure 3.1, the party groupings are defined as follows: Serb opposition (1996) comprises a coalition of Socialists, Independent Social Democrats, and other parties; the Democratic Patriot Block; the Serb Party of Krajina; and the Serb Patriotic Party. Serb opposition (1997) comprises the Socialists, Independent Social Democrats, and SNS. Federation-based parties (1996) comprise the SDA, the Party for Bosnia and Herzegovina, and a coalition of other political parties. Federation-based parties (1997) comprise the Coalition for a Unified and Democratic Bosnia and the Social Democratic Party. Although an SDS member was reelected as the parliament’s President, members of Serb opposition and Federation-based political parties in the parliament elected the new, moderate Prime Minister by one vote and gave him the mandate to form a new government on January 18, 1998. This election took place despite hard-liners’ attempts to disrupt the proceedings by walking out of the session. On January 31, 1998, at the third parliamentary session, the new Republika Srpska government was sworn in, and the parliament voted to move the seat of government from Pale to Banja Luka. After being elected Prime Minister, Dodik pledged a clean break with the failed policies of the ultranationalists, promised to cooperate with the international community, and expressed full support for the peace plan, including the right of all refugees to return to their prewar homes. 
The international community, including SFOR, supported the first meetings of the new parliament and the transition to the new government through political and military means. For example, following the election of the new government, SFOR increased patrols and established observation posts in the vicinity of Republika Srpska government offices in and around Pale. Dodik’s election as Prime Minister is viewed by observers in Bosnia as one of the most significant political developments in Bosnia since the signing of the Dayton Agreement. According to the International Crisis Group, a nongovernmental organization operating in Bosnia, before the war Dodik supported non-nationalist policies and reforms; during the war he formed an opposition bloc of 12 members in the Bosnian Serb parliament and supported all peace initiatives; and after the 1996 national elections, he formed a “shadow government” consisting of three Serbs, three Bosniaks, and two Croats. Further, in September 1997, after Dodik’s party had won a plurality of seats in the Laktasi municipal assembly and tied for the most seats in the Srbac municipal assembly, he invited all former residents who were expelled during the war to return. In forming his new government, the International Crisis Group reported, Dodik continued to break Bosnian taboos. For example, instead of looking to the exclusive support of one ethnic group, he sought the political backing of all ethnic groups. USIA polling data show that as of mid-February 1998, Dodik had substantial support from Bosnian Serbs, with 69 percent holding a favorable opinion of him. Further, according to OHR documents, Dodik immediately moved to reestablish political and economic ties between Republika Srpska and Sarajevo, as well as between Republika Srpska and Croatia. The new government received support from the Republika Srpska Ministry of Defense and has been attempting to reunify the entity’s state media that had been split during the political crisis. 
As of May 1998, however, it was unclear whether the Prime Minister would be able to fulfill his commitments to implement Dayton due to his weak hold on Republika Srpska’s political, security, and financial institutions. For example: Some observers, including human rights groups, said that Dodik-appointed Ministers of Defense, Justice, and Interior had either expressed limited support for Dayton implementation or were closely associated with hard-line nationalists and individuals indicted by the war crimes tribunal; thus, these individuals may attempt to obstruct efforts to implement Dayton. Dodik’s government remained threatened by attempts of hard-liners to undermine the government. For example, according to an OSCE report, the President of the Republika Srpska parliament, an SDS member, called a special parliamentary session to be held in Doboj on April 16; during the session, the hard-line SDS and Serb Radical parties intended to hold a vote of no-confidence. The session was cancelled when a boycott by all other parties deprived the session of a quorum. Several reports in late April and early May 1998, including a statement of the President’s Special Representative for Dayton Implementation, stated that Milosevic, President of the Federal Republic of Yugoslavia, supported this and other hard-liner attempts to destabilize the government. It was unclear whether the new government had gained control of all Republika Srpska police. The new Minister of Interior had moved to depoliticize and reunify the police forces that were controlled by SDS leaders in Pale and by the government in Banja Luka; for example, he named new chiefs to eight of the nine public security centers in the entity. However, there was no evidence that these moves had broken the chain of command extending from the SDS in Pale to police forces in eastern Republika Srpska. Dodik was unable to take full control of Republika Srpska revenues, and revenues continued to flow to SDS leaders in Pale. 
According to an international observer in Bosnia, it was unknown how much of the entity’s total revenue was flowing to Dodik’s government. In mid-February 1998, Dodik vowed to quit his position if international assistance to his new government was not quickly delivered, as he needed funds to pay police, teachers, and civil servants. On February 24, 1998, the High Representative delivered the first tranche of international assistance to go toward budgetary support for the new government—4 million deutsche marks from the European Union. USAID pledged $5 million for budgetary support for the new Republika Srpska government, which will be distributed through a grant to OHR. These funds will pay back salaries for government employees, except those of the Ministries of Justice, Defense, and Interior. Some progress was made in 1997 toward the creation of institutions, laws, and symbols of the joint Bosniak-Croat Federation under intense pressure from the United States and others; however, at the end of the year the Federation was not yet a fully functioning governmental entity, and the Bosniaks and Bosnian Croats still maintained separate administrative structures. The Federation Parliament met more frequently during 1997. It passed laws on privatization on October 21, 1997, and after international arbitration, on the resolution of territorial issues associated with split and new municipalities on January 22, 1998. The ministries, particularly the Defense Ministry, have begun to acquire staff and facilities and have started to function; the higher courts have been established and have begun to hear cases; and police restructuring and integration have made some progress in integrating Bosniak and Bosnian Croat police forces at the cantonal and municipal levels. 
In addition, according to international advisors to the Federation, all 10 of the Federation’s cantonal governments were established by October 1997; 9 of 10 cantons had passed laws on courts by late February 1998; and most of the cantons had started to restructure their court systems. Despite this progress in developing Federation institutions, in April 1998 the High Representative reported that illegal structures of government in the Federation had not been dissolved or integrated, despite three formal announcements in 1996 that they had been abolished. According to international observers in Bosnia, real governmental power and authority in the Federation continues to reside in separate Bosniak and Bosnian Croat governmental structures. There, Bosnian Croat political leaders, and some hard-line Bosniak political leaders, carry on their obstruction of the development of Federation institutions. The Bosnian Croats still maintain the administrative structures and symbols of their separate para-state, known as Herceg-Bosna, and continue to use Croatia’s education policy and currency, the Kuna, as they did during the war. Bosniaks have also kept their separate institutions, those of the former Republic of Bosnia and Herzegovina, including the Bosniak-controlled internal security service, whose presence has impeded the development of an integrated Federation Ministry of Interior. Furthermore, cantonal governments in areas of the Federation containing a sizable number of both Bosniaks and Bosnian Croats—particularly the Neretva and Central Bosnia cantons—have constantly resisted international pressure to pass laws that would link the two groups and integrate their administrative, police, and court systems. This intransigence is due in large part to hard-line Bosnian Croat leaders. 
On September 13 and 14, 1997, municipal elections held in Bosnia resulted in the election of multiethnic municipal governments throughout Bosnia, as a number of people, primarily Bosniaks, chose to vote for municipal governments where they lived in 1991. If fully implemented, according to observers, the municipal election results would be a positive step forward in the development of democratic institutions in Bosnia and could help pave the way for creating conditions that would allow people to return home across ethnic lines. However, the election results proved very difficult to implement in many municipalities that had a different ethnic composition before the war, including in Srebrenica. Recognizing the potential problems, an interagency working group led by OSCE developed a municipal election implementation plan in May 1997 and a final operational plan in mid-October 1997. The implementation plan called for a final certification that confirms which municipal councils had been duly formed by the end of 1997. According to an OSCE official, final certification means that the “shell” of a municipal government has been formed. The implementation plan recognized that candidates who win office must be able to travel to municipal council meetings and to move about their municipality without fear of physical attack or intimidation. It called for local police to provide security for council members and for IPTF and SFOR to supervise the development of the security plan. In addition, IPTF and SFOR, together with OSCE and other organizations, were to monitor the plan’s implementation through the National Election Results Implementation Committee. In mid-October 1997, an OSCE official told us that OSCE expected that up to 12 of the 136 municipalities that held elections would have problems achieving final certification by December 31, 1997, primarily because they would involve installing multiethnic assemblies and governments. 
Two of the more difficult cases were projected to be (1) Srebrenica, a city that had a prewar Bosniak-majority population but was “ethnically cleansed” by Serbs in 1995 and whose prewar residents successfully elected a predominantly Bosniak council, and (2) Drvar, a town with a predominantly Serb majority before and during much of the war but now populated in large part by Bosnian Croats; Bosnian Serbs won the majority on the municipal council of Drvar. The OSCE projection proved overly optimistic: as of December 31, 1997, 126 of the 136 municipalities had not yet achieved final certification. An OSCE official told us that OSCE had underestimated the difficulty of establishing municipal governments in many areas. However, according to a State Department official, the unexpected parliamentary elections in Republika Srpska contributed to the early difficulties, as OSCE resources were diverted to administering and supervising the elections from September through December 1997. On December 10, 1997, in response to the slow pace of implementing the municipal election results, the Peace Implementation Council gave OSCE and OHR increased authority over the installation of municipal governments. Specifically, it gave the OSCE Head of Mission and High Representative final and binding arbitration authority over municipalities that had not fulfilled final certification requirements before February 28, 1998. According to the chairman of the National Election Results Implementation Committee, the committee was using this authority in early 1998 to convoke meetings of noncompliant municipal councils and negotiate solutions that would allow the formation of local governments. Even with this intense international involvement and effort, however, as of February 6, 1998, only 79 of the 136 municipalities that held elections had established their governments and received final certification by OSCE. 
As of that date, OSCE estimated that 31 municipalities would be subjected to final arbitration by OSCE and OHR. By March 5, 1998, the number of municipalities receiving final certification had increased to 115, leaving 21 municipalities subject to OSCE and OHR arbitration. By early May 1998, 133 municipalities had received final certification, and 3 had received arbitration awards that had not yet been implemented. According to OSCE officials, final certification alone does not ensure that municipal governments will continue to function in a democratic manner. Recognizing this, the election implementation plan called for an interagency structure that would continue to monitor and report on the functioning of municipal assemblies, thus ensuring that elected candidates are able to carry out their duties as envisioned by the Dayton Agreement. In early February 1998, OSCE officials told us that this envisioned function and structure had not yet been fully defined, nor the level of the international community’s involvement in promoting the development of municipal governments clearly articulated. These officials said that the involvement may go beyond monitoring and reporting to include proactive development of local governments. For Srebrenica, the international community established an interim executive board to replace the elected municipal council, after repeated attempts at crafting a solution mutually acceptable to Bosniak and Serb elected municipal councillors had failed and subsequent arbitration awards were not honored. On April 6, 1998, OHR and OSCE issued a supplementary arbitration award that suspended the work of Srebrenica’s elected council and established the interim executive board to be composed of two Bosniaks and two Serbs and chaired by an international official. On April 16, 1998, OHR announced that a U.S. 
citizen had been appointed as chair of the board and that each of the four local members would choose their deputies from the opposing ethnic group in the days to come. In consultation with the parties, the board will administer Srebrenica municipality under the supervision of the High Representative and the OSCE Head of Mission and will assume authority over all municipal funds, material, and assets. In early 1998, the OSCE’s election appeals subcommission issued decisions that removed from office or otherwise penalized individuals who had obstructed the functioning of municipal governments. For example, on April 17, 1998, the subcommission (1) ruled that two SDS councillors and one Serb Radical Party councillor in Srebrenica had obstructed the mediation process and the formation of the municipal government, (2) removed these councillors from office, and (3) banned them from occupying administrative posts in the municipality. The subcommission placed a similar ban on a Coalition party member, who did not hold office, because he also had obstructed mediation sessions. On the same day, the subcommission decided to remove from the Teslic assembly an SDS member who served as the assembly’s Vice-President, because this official had used inflammatory language in an attempt to disrupt the implementation process and intimidate Bosniak councillors. The results of the municipal elections led to the establishment of a multiethnic administration, judiciary, and police force in the strategically important area of Brcko, largely due to the efforts of the interim Brcko supervisory regime. After the municipal elections, the Brcko Supervisor issued three orders (plus amendments) that specified requirements for the establishment of these multiethnic institutions. 
Based on the voters’ registration list and election results, the amendments to the Supervisory orders specified the ethnic composition of the multiethnic administration, police, and judiciary as 52.2 percent Serb, 39.1 percent Bosniak, and 8.7 percent Croat. In October 1997, the Brcko Supervisor told us that he foresaw nothing but troubles, turbulence, and obstruction from hard-line SDS leaders in Pale in trying to implement Brcko’s municipal election results. In early February 1998, OHR reported that obstruction by the Serb parties had slowed the process to a pace that allowed only minimum compliance with orders and regulations, saying that the development of Brcko’s municipal government had been slow and had required a considerable amount of mediation by OHR. Table 3.3 describes the progress made in establishing Brcko’s institutions as of late April 1998. On March 15, 1998, the Brcko arbitration tribunal issued a second decision on the status of the Brcko area. This decision deferred until early 1999 a final decision on whether the Brcko area should be transferred to the Federation, remain within the territory of Republika Srpska, or be declared a “special” or “neutral district.” In the decision, the tribunal recognized the systematic, blatant, and at times violent attempts of the SDS leaders in Pale to thwart the Dayton objective of returning Bosnia, particularly Brcko, to its prewar multiethnicity, as well as the possibility that Dodik’s commitment to a multiethnic Bosnia might improve the level of Bosnian Serb compliance in Brcko over the coming year. 
The tribunal’s decision called for the continuation of the Brcko supervisory regime under the auspices of OHR because (1) Brcko’s new multiethnic institutions were “shallowly rooted”; (2) the SDS and its leaders continued to have influence in the area, keeping tensions and instability high by resisting the Supervisor’s efforts to promote Dayton compliance; and (3) Bosnia’s national and entity-level institutions had not yet become fully effective. The decision also gave the Brcko Supervisor new authority equivalent to the High Representative’s powers. These included the power to remove from office any official considered by the Supervisor to be inadequately cooperative with his efforts to achieve compliance with the Dayton Agreement, to strengthen democratic institutions in the area, and to revitalize the local economy. The problems in establishing multiethnic institutions can largely be attributed to the political leaders of Bosnia’s three major ethnic groups retaining their wartime goals, which are still largely shared by the ethnic groups they represent. In February 1998, international observers in Bosnia told us that most Bosnian Serb and Croat political leaders still want to establish ethnically pure states separate from Bosnia. According to an international official in Bosnia, the new Prime Minister of Republika Srpska—while more moderate and more willing to work with the international community than nationalist Bosnian Serb leaders—sees himself as the Prime Minister of an autonomous entity and will be constrained in truly unifying the country. On the other hand, Bosniak political leaders continue to profess support for a unified, multiethnic Bosnia, although, according to some observers, with Bosniaks in control. According to polls conducted by USIA in January 1998, most Bosnian Serbs and Croats still agree with their political leaders that a unified Bosnia should not exist (see fig. 3.2). 
However, Bosnian Serb support for this goal has increased from 4 percent in late 1995 to 18 percent in early 1998. Furthermore, 92 percent of Bosnian Serbs and 74 percent of Bosnian Croats said that it would be best for their respective areas to become independent or become part of Serbia or Croatia, respectively. In contrast, almost all Bosniaks have continued to support a unified Bosnia, with 56 percent of them believing a unified Bosnia is worth dying for. In general, though significant problems remain, human rights and other observer reports indicate improvements during the latter part of 1997 in (1) freedom of association and political pluralism; (2) freedom of movement across ethnic boundaries; and (3) freedom of the media. Further, police-related human rights abuses declined and ethnic intolerance eased somewhat during 1997; however, according to SFOR Assessment Cell data, incidents of police abuse and ethnic conflict increased significantly in March 1998 and remained at high levels during April. SFOR data also showed that incidents of a political nature had increased in late 1997 but had declined sharply by March 1998. According to observer reports, the political environment leading up to elections held in September and November 1997 was much more open than the campaign period for the national elections held 1 year earlier. Nevertheless, the elections were still a long way from meeting international standards as fully free and fair. Much less fraud occurred during the municipal elections than during the September 1996 elections, as OSCE reregistered Bosnia’s voting population under international supervision using strict rules for where people could register to vote. 
OSCE’s election appeals subcommission often took action against the three ruling political parties after they violated electoral rules and regulations during the registration process and campaign period, particularly against the SDS and the Bosnian Croat ruling party, the Croatian Democratic Union (HDZ). OSCE also deployed an international supervisor to every polling station during the election, a crucial advance over the 1996 elections. One observer report characterized OSCE’s administration of the municipal elections as a considerable achievement, despite their technical shortcomings, given the fact that the elections were organized within the context of a conflict resolution process. Furthermore, the municipal and Republika Srpska national assembly elections contributed to the development of a more pluralistic political culture, particularly in Republika Srpska, where opposition political parties significantly increased their representation and broke the hold of the SDS. Opposition political parties also improved their showing in Bosniak-controlled areas in the municipal elections, but the SDA and the HDZ remained dominant in Bosniak and Bosnian Croat-controlled areas, respectively. The elections also had a negative aspect: people voted largely along ethnic lines, a pattern that observers expected given the recent war and people’s lingering fear and uncertainty that the war is not yet over. And although media access and freedom of association were better for political parties in 1997 as compared with 1996, political parties generally did not campaign in areas of the country controlled by another ethnic group. Further, many opposition parties did not have full access to the media. 
Also, according to an observer report, the broadcast of extreme propaganda and hate campaigns by the SDS and HDZ during the lead-up to the municipal elections had adverse consequences for the campaign environment and did not in any way serve the electorate or enable it to make informed choices. According to human rights and other observers in Bosnia, freedom of movement across ethnic boundaries slowly and incrementally improved throughout Bosnia in the second half of 1997, although major impediments discouraged people from traveling freely across ethnic lines at the end of the year. Signs of improvement included (1) increased circulation of private vehicles across the interentity boundary line, with the notable exception of the Prijedor (Republika Srpska)-Sanski Most (Federation) corridor; (2) the institution of several public bus lines by both UNHCR and private companies; (3) approval and heightened interentity cooperation by authorities for assessment and graveyard visits; (4) increased foot and vehicle traffic across ethnic boundaries in the Brcko and Mostar areas; and (5) new roadside markets located along the former front lines. One such market, serving both Bosniaks and Bosnian Croats, started operating in Mostar in July 1997, and a second, serving both Bosniaks and Bosnian Serbs, started operating near Zvornik in Republika Srpska. Human rights and other observers attributed the increased freedom of movement to the success of IPTF’s police checkpoint policy, which is described later in this chapter. The establishment of joint police forces in some areas of the Federation was cited as a factor in increasing freedom of movement, including in the Mostar area between the predominantly Bosniak east and predominantly Croat west sides of the city. Further, according to an SFOR document, some of the improvement late in the year was due to an improving political situation in Republika Srpska. 
Despite these positive developments, people were still afraid to drive, visit, or return to their homes across ethnic lines, since those who attempted such crossings often suffered harassment, intimidation, and violence. For example, people who attempted to drive into an area controlled by another ethnic group were easily identified by their license plates as likely belonging to a specific ethnic group and were subject to police harassment through the collection of illegal visa fees and taxes, particularly by Republika Srpska police, as well as to roadside assaults, robberies, and vehicle hijackings, primarily at night in Republika Srpska. Furthermore, at the end of 1997, local authorities in both entities continued to refer to “lists of war crimes suspects” in an attempt to discourage return of refugees and displaced persons, harass citizens, and deter elected municipal councils of other ethnic groups from taking office. And people attempting to cross ethnic lines to visit or return to their homes suffered numerous acts of intimidation and violence, in some cases including murder. These incidents are discussed in more detail in chapter 5 of this report. To promote increased freedom of movement across ethnic boundaries, the Peace Implementation Council pressured Bosnia’s political leaders to develop a uniform license plate for all areas of the country by the end of 1997. Bosnia’s Council of Ministers signed a memorandum of understanding on the development of this license plate on January 28, 1998, and promotion of the plate occurred on February 2, 1998, in Sarajevo and Banja Luka. According to an OHR report, reaching an agreed design for the uniform license plate proved less contentious than resolving other national symbols, as the majority of the people in both entities strongly favored a license plate that would not reveal the driver’s ethnic group. 
OHR and human rights observers believe that with the issuance of the new plates, freedom of movement across ethnic lines in Bosnia will increase dramatically as the plate’s design guarantees anonymity. On April 20, 1998, OHR extended the original deadlines for implementing the new license plates due to technical reasons related to registration documents. By June 1, 1998, the new plate will be compulsory for travel outside Bosnia; by August 31, 1998, it will be illegal for residents to use any other plate for travel within Bosnia. In 1997, according to the State Department human rights report, the right to freedom of speech and the press was partially respected in the Federation and in western Republika Srpska, but less so in eastern Republika Srpska. Some progress was made in shutting down offensive media outlets and in establishing more open and independent media, particularly in Banja Luka and in Bosniak-controlled areas of the Federation. Party-controlled media—particularly Croatian State Radio and Television—are the only electronic media available to the vast majority of citizens in Bosnian Croat-controlled areas of the Federation. Party-controlled television is the only television available to roughly half of Bosnia’s population until the Open Broadcast Network, an independent television network supported by the international community, is fully functioning. Radio is a freer medium, with independent radio available to about 70 percent of the population. Using the expanded interpretation of his authority granted by the Peace Implementation Council Steering Board in May 1997, the High Representative took a series of escalatory actions starting in August 1997 to counter SRT-Pale violations (app. I provides information on these actions). Most importantly, SFOR seized control of five transmission towers of SRT-Pale in October 1997 in order to remove its inflammatory messages (see fig. 3.3). 
In early October, the High Representative dismissed SRT-Pale’s managing board, stating that it could not broadcast using these towers until it agreed to be restructured in accordance with western democratic standards. In the interim, only the SRT station in Banja Luka was authorized to continue originating the SRT broadcasts. After parts of the microwave link were stolen from the Veliki Zep hub transmitter in October 1997, OHR and SFOR reconnected the system by leasing a satellite system. The new Republika Srpska government recovered and replaced the microwave links in early 1998. SRT broadcasts now originate from Banja Luka under international supervision. Following President Plavsic’s break with the SDS leaders in Pale, SRT-Banja Luka began to broadcast its own programming, giving a favorable slant to Plavsic’s activities. In comparison with SRT-Pale’s earlier broadcasts, however, SRT-Banja Luka broadcasts (1) were generally more open to opposing views, (2) presented the Dayton Agreement and the international community in a much more favorable light, and (3) began to open a discussion of surrendering indictees to the war crimes tribunal and promoting reconciliation among the ethnic groups. On February 13, 1998, the new Republika Srpska government signed an agreement with the High Representative in which it agreed to (1) the restructuring of SRT into a public service television station that operates in accordance with western democratic standards of public service broadcasting and (2) the appointment of an international administrator and provision of international technical and financial assistance for the network. On April 13, 1998, the SFOR Commander and Prime Minister Dodik signed a memorandum of understanding that could lead to SFOR transferring the control and security of the five SRT towers to the government. As of mid-May 1998, SFOR still controlled the towers. 
At the end of 1997, according to the State Department human rights report, independent or opposition radio stations broadcast in Republika Srpska, particularly in Banja Luka, but they tended to skirt most significant political issues for fear of retaliation by the SDS. And the SDS still controlled television and radio in some areas of Republika Srpska, including Brcko, using them to broadcast vitriolic, anti-Dayton messages. While generally considered, along with SRT-Banja Luka, among the most open outlets, the SDA-run Federation state television station faithfully served the interests of the ruling Bosniak party, the SDA, giving preferential coverage to SDA leaders and greatly limiting reports on the political opposition. While its broadcasts were often biased, they were rarely inflammatory. Radio broadcasting in Bosniak-controlled areas of the Federation was diverse, and opposition viewpoints were reflected in the news programs of independent broadcasters. Media access in Bosnian Croat areas remained largely under the control of the HDZ ruling party, and most Bosnian Croats relied on the state-controlled media of Croatia for their information. News programs and editorials on the Croatian state television station frequently criticized the Dayton Agreement, their weather maps showed the Federation as part of Croatia, and coverage of Bosnian events often left the impression that the scene pictured was actually in Croatia. Further, local radio stations in Croat-controlled areas were usually highly nationalistic and did not tolerate opposition viewpoints. In January 1998, the State Department and the SFOR Assessment Cell reported that police-related human rights violations had declined during 1997, although police continued to commit abuses throughout the country. 
The most important advance in 1997 was the success of the IPTF checkpoint policy in reducing the number of illegal police checkpoints that had hampered freedom of movement, particularly along the interentity boundary line. Initiated with SFOR support on May 26, 1997, this policy addressed the “inordinate” number of checkpoints by defining as illegal any fixed or mobile checkpoint that (1) was manned by two or more police officers and (2) operated for more than 30 minutes without a valid IPTF checkpoint permit. SFOR supported IPTF in enforcing this policy by confiscating the weapons and identity cards of noncompliant police, by patrolling certain sensitive areas, such as Brcko, jointly with IPTF, and by cooperating in the removal of 38 of 151 identified illegal checkpoints (as of March 12, 1998). SFOR Assessment Cell data show that the number of incidents of police abuse increased by 86 percent from January through March 1998 and declined slightly in April 1998. According to an assessment cell report, this increase was mainly due to an IPTF “crackdown” on illegal police checkpoints in the zone of separation around Sarajevo. This crackdown resulted in a large number of IPTF noncompliance reports against police, primarily in the Federation. Despite this advance, according to observers in Bosnia, Bosnia’s political leaders continued to use police as tools for furthering their political aims. For example, according to the 1997 State Department human rights report, Bosnian Serb police often employed excessive force to prevent Bosniak former residents from returning to, or staying in, Republika Srpska; Bosnian Serb police also apparently took no action against the perpetrators of severe incidents involving harassment. Similar problems of abuse occurred in Croat-majority areas. 
According to the report, IPTF investigated a number of cases of police abuse in Brcko and Banja Luka, as well as in the Croat-controlled town of Drvar; the officers found responsible were either dismissed from the force or fined. SDA-controlled local police in Velika Kladusa and Cazin continued a pattern of severe police abuses, according to the State Department human rights report, although the frequency of such acts had greatly diminished since 1996 as a result of intense monitoring and intervention by international human rights organizations. Most of the people abused by local police in these areas were associated with Fikret Abdic, a businessman who led a breakaway Bosniak region during the war. Moreover, according to State’s human rights report, Bosnia’s police and mobs that appeared to be organized by local authorities committed a few extrajudicial killings; members of security forces abused and mistreated citizens; and police continued to use arbitrary arrest and detention, although to a lesser extent than in 1996. In both entities, police still exercised great latitude based on Communist-era criminal procedure laws that permit the police to detain persons for up to 6 months without bringing formal charges against them. In the Federation, the laws were being revised with the aim of eliminating this practice. SFOR Assessment Cell data show that ethnic tensions—measured in terms of ethnically related incidents of hostile activity—had decreased during the last third of 1997; these incidents, however, increased by 200 percent from January through March 1998—with a further 16 percent increase by the end of April—as people began to cross ethnic lines to visit or return home. While these occurrences were fewer in January and February 1998 than the year before, they appeared to be more organized than in the past; one example was the burning of potential returnees’ houses in Drvar. 
For March and April 1998, the number of ethnic incidents was higher than the prior year by 41 percent and 130 percent, respectively. USIA polls indicated that animosity among Bosnia’s three ethnic groups remained strong in 1997 but lessened slightly during the year. For example, Bosniaks and Bosnian Croats held slightly more favorable opinions of Bosnian Serbs at the end of 1997 than at the beginning. Further, after a period of dramatically worsening relations during 1996, the percentage of Bosniaks holding favorable opinions of Bosnian Croats rose from 42 percent to 59 percent. However, a large majority of Bosnian Serbs and Croats still viewed other ethnic groups unfavorably, and the majority of Bosniaks still held negative opinions of Bosnian Serbs. Appendix IV provides USIA polling data on these issues from December 1995 through January 1998. In February 1997, the Archbishop of Sarajevo said that Bosnians held negative views of each other because their political leaders controlled and used the media to encourage animosity and discourage reconciliation among the ethnic groups. During the year, the international community took concrete steps to shut down some media outlets that inflamed ethnic animosity and took other steps (described later in this chapter) to develop a more open, tolerant media in Bosnia. Further, according to international observers, the bitter memories from the recent war contributed to the strong ethnic animosities—people remember who killed their family members or forced them from their homes. USIA polls show that despite the slight lessening of ethnic animosity during the year, Bosnian Serbs and Croats agreed that the war had severely harmed ethnic relations in Bosnia. In January 1998, a large majority of Bosnian Serbs (74 percent) and Bosnian Croats (73 percent) believed that the war had done too much damage for people of the three ethnic groups to live together peacefully again. 
In contrast, only 5 percent of Bosniaks believed that the war had irreparably damaged ethnic relations—91 percent of them believed that Bosniaks, Serbs, and Croats could again live together peacefully, an increase from 65 percent who believed this at the end of the war. While the SFOR Assessment Cell noted a decrease in ethnic incidents during late 1997, it also noted an increase in “terrorist” incidents in the Federation and Republika Srpska in December 1997 and January 1998. The cell defines “terrorist incidents” as being distinct from ethnic events in that the motive is political rather than rooted in ethnic hatred. Examples of these terrorist incidents include conflicts associated with Bosniak/SDA resistance to the return of 600 Bosniaks—supporters of Fikret Abdic and his opposition party—to Velika Kladusa; violent incidents involving the interparty, intra-ethnic struggles between SDS and SNS members in Bijeljina; a series of explosions in Mostar; and incidents revolving around the implementation and results of municipal elections. The number of politically motivated terrorist incidents declined significantly over February and March 1998, as the number of ethnic incidents and police abuses increased sharply. The democratization projects started during 1996 by many international aid donors—including USIA, USAID, and OSCE—began to show early results in 1997. These projects were designed to (1) develop alternative and independent media outlets; (2) foster ethnic tolerance and reconciliation within and across the two entities, primarily through support for local political, social, cultural, religious, and business organizations that would link Bosnia’s ethnic groups; and (3) develop the institutions and practices of a democratic culture at all levels. 
According to a State Department document, the international community intended that these efforts would constitute part of a long-term democratization effort to counter the continued presence of separatists and unreconstructed, authoritarian centralists in Bosnia. According to OHR and State Department officials, efforts to enlarge and improve access to independent media are at the heart of the international democratization program. As OHR and SFOR sought to break SDS control of SRT broadcasts from Pale, international donors were attempting to develop a more open, objective SRT-Banja Luka and alternative and independent media outlets throughout Bosnia. Since late August 1997—when SRT-Banja Luka staff broke from Pale and started alternative broadcasts—the United States has provided equipment to SRT-Banja Luka to help it improve the quality and quantity of its programming. According to SRT-Banja Luka officials, the station’s signal could reach about 70 percent of Republika Srpska territory in late October 1997. The Open Broadcast Network, created in 1996 by the international community, expanded its broadcast range and programming in 1997 with international assistance, though it still did not have Bosnian Croat participation at the end of the year. This four-station network now broadcasts 6 hours daily and, if it were fully funded, would have the ability to expand its coverage from about 50 percent of Bosnia’s territory to 80 percent using state-of-the-art broadcast technology supplied by international donors. According to the State Department and other reports, the network has been plagued by poor management at its Sarajevo hub and by problems with affiliate relations and funding. The network has not increased its geographic coverage and remains short of operating funds because many donors have failed to provide money they had pledged as of mid-April 1998. 
Thus, according to USIA polls, as of January 1998, only 50 percent of Bosniaks, 26 percent of Bosnian Serbs, and 21 percent of Bosnian Croats were able to receive the network’s broadcasts, although the vast majority of those who had access regularly watched the network’s programs. Further, lack of a government licensing agreement hindered the network’s ability to attract advertising and its plans to become a self-sustaining enterprise. During 1997, USAID funded Internews, a California-based nongovernmental organization, to provide on-site technical assistance and training to independent radio and television operations in the Federation and Republika Srpska. USAID’s Office of Transition Initiatives helped create and develop 34 Bosnia-based stations (7 television and 27 radio), which, according to USAID, reach about 70 percent of Bosnia’s population. USAID’s Office of Transition Initiatives is also the leading funder of 19 alternative newspapers and journals in the Federation and Republika Srpska, according to a USAID document. One of these USAID grantees, the publisher of an independent newspaper in Banja Luka, told us that the USAID-provided computers, broadcast equipment, and funds have allowed the organization to open four correspondent bureaus in eastern Republika Srpska. This grantee told us that USAID supported him during 1996 when he and other publishers of independent newspapers were considered “traitors”; now, after the political changes in Banja Luka, they are considered heroes. According to USAID, total circulation for independent publications increased from virtually none in 1995 to over 100,000 copies of independent dailies, weeklies, and monthlies near the end of 1997. 
The State Department human rights report noted that some independent media in the two entities assist in the distribution of each other’s publications in their respective entities; however, independent publications still face difficulty gaining access to distribution systems in many parts of Bosnia, and their journalists generally cannot freely move across ethnic lines. In mid-1997, some efforts of the international community to counter ethnic intolerance began to reestablish links between Bosnia’s ethnic groups that had been broken during the war. According to a September 1997 OSCE report, many OSCE and other internationally sponsored democratization activities in the fall of 1997 resulted in cooperation between the ethnic groups in a way that would have been unthinkable just a few months earlier. At that time, almost all efforts to link the ethnic groups across the interentity boundary line were blocked by hard-line SDS leaders. Among the 1997 efforts were the following: OSCE helped organize an interentity editorial meeting for a youth magazine in Sarajevo that included the participation of young people from Foca/Srbinje, a hard-liner-controlled town in eastern Republika Srpska. The OSCE democratization unit also facilitated the participation of three members of the University of Banja Luka’s philosophy faculty in a 1-day conference in Sarajevo, the first time since the war that the academics had attended a conference in Sarajevo. As of April 1998, USAID’s Office of Transition Initiatives had provided over 300 grants of direct assistance to more than 100 Bosnia-based civil organizations working to build a viable multiethnic civil society in Bosnia, including women’s, children’s, and refugees’ advocacy associations; youth and student groups; private business associations; and legal aid societies. 
Many of these organizations are linking their activities across the interentity boundary line and across ethnic lines in the Federation, some as a step toward developing countrywide organizations. One such civic organization in Mostar now provides economic support and jobs for its 2,000 displaced Bosniak and Bosnian Croat women members and their families and provides 35,000 more displaced women in the community access to legal, psychological, and economic counseling. OSCE sponsored meetings of the national Interfaith Council on several occasions in 1997. The Council has called for the establishment of a multiethnic Truth and Reconciliation Committee to develop an historical accounting of abuses suffered during the war. This effort is also being supported by the U.S. Institute of Peace. According to an institute official, this committee will probably not be established until after Bosnia’s September 1998 elections. USAID, USIA, and OSCE also supported Bosnian efforts in 1997 to develop governmental institutions that function in a democratic, open manner and to promote democratic practices among Bosnians. For example, USAID funded the International City Managers Association to assist in the development of cantonal government structures and transparent budget practices; the American Bar Association’s Central and East European Law Initiative to assist in establishing various working groups to address judicial reform issues and provide technical assistance to strengthen judicial independence in the Federation; and the National Democratic Institute, which supported political party building in Bosnia with party-building seminars, consultations, and poll watcher training. As part of this effort, the National Democratic Institute worked extensively with opposition parties in Republika Srpska before the 1997 municipal and parliamentary elections. 
Using USAID funds, the International Foundation for Election Systems and the National Democratic Institute also conducted civic education programs throughout Bosnia to educate Bosnians about their rights and responsibilities in a democratic society. For example, the foundation used the municipal election campaign and implementation period to provide information sessions on issues such as the administration and outcomes of municipal elections, the functioning of municipal assemblies and governments, citizens’ responsibility to hold elected officials accountable for their actions, and human rights (see fig. 3.4). Further, USIA has funded civic education activities in Bosnia since 1996, with funds going toward the training of 1,500 teachers, the distribution of 28,000 textbooks, and the participation of an estimated 37,500 students in civic education instruction by the end of 1997. Moreover, USIA’s international visitors’ programs in the United States have promoted interentity cooperation among Bosnian professionals, educators, and politicians. The newly elected Prime Minister of Republika Srpska, Dodik, was among those who participated in an April 1997 visitors’ program in the United States on creating effective political opposition organizations in a multiparty, multiethnic democracy. The Dayton Agreement calls for all parties to arrest people indicted for war crimes and surrender them to the war crimes tribunal. According to many international officials and observers in Bosnia, bringing to justice indictees—particularly Radovan Karadzic, a major alleged war criminal—is critically important to furthering the implementation of the Dayton Agreement and bringing peace and stability to Bosnia. Considerable progress was made toward achieving this goal in 1997 and early 1998, but a large number of indictees remained at large due to the noncompliance of Bosnian Serb and Serbian political leaders. 
The number of at-large indictees dropped significantly from late April 1997 through early May 1998, largely because international peacekeepers, particularly SFOR, detained more indictees; the Croatian government, under pressure from the United States, became more active in facilitating the surrender of indictees to the tribunal; and Bosnian Croats and Bosnian Serbs became more willing to surrender voluntarily. Also, in an attempt to reallocate its resources, the war crimes tribunal withdrew charges against a large number of Bosnian Serb indictees who had not been arrested or surrendered, thereby further reducing the number of at-large indictees. In mid-1997 the international community started taking steps that substantially weakened the hold of Radovan Karadzic and his supporters on the levers of power in Republika Srpska; nonetheless, he remained at large and capable of obstructing Dayton implementation. While the North Atlantic Council, NATO’s political leadership, had not mandated that SFOR arrest indictees whom the parties refuse to surrender to the tribunal, SFOR troops will detain indictees when they come upon them in the normal course of their duties, if the tactical situation allows, and surrender them to the tribunal. According to State Department officials and documents, until indicted war criminals are arrested and turned over to the tribunal, it will be impossible to establish a stable peace in the region. Human rights reports support this conclusion; according to some reports, indicted war criminals control the economy and governmental institutions in many places in Bosnia. Further, according to an expert on Bosnian culture, reconciliation among Bosnians cannot take place until war criminals are brought to justice and held accountable for their actions. 
During our June 1997 field work in Bosnia, many officials with whom we spoke were unequivocal in their opinion that Radovan Karadzic must be arrested or otherwise removed from the scene in Bosnia as soon as possible. They told us that Karadzic, a leader who is not accountable to the electorate, is blocking international efforts to work with the more “moderate” Bosnian Serb political leaders in implementing the Dayton Agreement. For example, he had not allowed other political leaders, including elected ones, to abide by agreements they had made with the international community on small-scale attempts to link the ethnic groups politically or economically. Observers also told us then that Karadzic still controlled Republika Srpska financial institutions and police and dominated Bosnian Serb political leaders through a “reign of terror.” In early December 1997, the High Representative said that there can be no lasting peace in Bosnia while so many war crimes indictees remain at liberty. He noted in particular the presence of Radovan Karadzic, whose “malign influence contaminates the entire social, political, and economic atmosphere in Bosnia.” From April 25, 1997, through May 27, 1998, the number of at-large indictees dropped from 66 (of 74 named indictees) to 32 (of 62 named indictees) because (1) progress was made in surrendering indictees to the war crimes tribunal; and (2) the tribunal decided to withdraw indictments of 14 at-large Bosnian Serb suspects for reasons related to the tribunal’s resources, workload, and prosecutorial and investigative strategies. Of the named indictees who remained at large, 30 were ethnic Serbs, almost all of whom were Bosnian Serbs, and two were Bosnian Croats. Bosniak authorities had already surrendered the three indictees in their area of control in 1996. Since April 25, 1997, the number of war crimes indictees brought to the tribunal increased from 8 (of 74 indictees) to 30 (of 62 indictees). 
This progress resulted from the arrest of 1 Croatian Serb suspect that was facilitated by international peacekeeping forces in Croatia; detentions of 4 Bosnian Serb and 2 Bosnian Croat indictees by SFOR; and the negotiated, voluntary surrender of 10 Bosnian Croats and 5 Bosnian Serbs (see table 4.1). As of May 27, 1998, only Bosnian Serb and Serbian political leaders had not surrendered any people indicted for war crimes in their areas of control; instead, some Bosnian Serb indictees voluntarily surrendered shortly after SFOR troops detained others (U.S. SFOR on January 22, 1998, and British SFOR on April 8, 1998) and after the newly elected, moderate Prime Minister of Republika Srpska, Milorad Dodik, had assumed office (on January 31, 1998). In February 1998, Dodik offered to allow the war crimes tribunal to open an office in Banja Luka and publicly encouraged indictees to voluntarily surrender themselves to the tribunal. He said that his government would not arrest indictees, although he could not and would not attempt to stop SFOR from detaining them and surrendering them to the tribunal. On May 5 and 8, 1998, the tribunal decided to withdraw indictments against 14 at-large Bosnian Serbs. These indictees had been charged with atrocities against Bosnian Muslim and Croat civilian prisoners held at the Omarska and/or the Keraterm camps outside of Prijedor. The tribunal had previously withdrawn charges against three Bosnian Croats, who had surrendered voluntarily, for lack of sufficient evidence. However, in the case of the 14 indictees, the tribunal’s announcement said that the decision to withdraw the charges was not based on any lack of evidence. 
According to the tribunal’s announcement, this decision was made so that it could reallocate its available resources in a manner that would allow it to (1) fairly and expeditiously respond to a much larger than anticipated number of trials and (2) maintain its investigative focus on persons who hold higher levels of responsibility or who have been personally responsible for exceptionally brutal or otherwise extremely serious offenses. Given these two aims, the Prosecutor did not consider it feasible to hold multiple separate trials for related offenses committed by people who could appropriately be tried in another judicial forum, such as a national court. In withdrawing the indictments, the Prosecutor reserved the right to pursue the same or other charges against the 14 accused if the circumstances change, and offered assistance to domestic jurisdictions that in good faith pursued charges of violations of international humanitarian law against any of them. According to a State Department official, an increase in the tribunal’s resources would not necessarily result in the Prosecutor deciding to pursue charges against any of the 14 former indictees. Other factors would likely be more important in the Prosecutor’s decision to do so. For example, the Prosecutor may decide to pursue charges if the testimony of one of the former indictees is needed to build a case against a high ranking indictee. In 1996 and early 1997, the international community failed in its attempts to politically isolate and remove Karadzic from power. For example, in July 1996 he stepped down as head of the SDS under international pressure; however, instead of losing power, according to international observers, Karadzic effectively retained his control over Republika Srpska and grew in popularity among people there. Observers said that Karadzic and his supporters retained control of key levers of power: the police, media, and financial and economic institutions of Republika Srpska. 
Further, as of early June 1997, Karadzic and the SDS dominated politics and governmental institutions at the national, entity, and municipal levels in Republika Srpska. In mid-1997, around the time the division in the Bosnian Serb political leadership became public, the international community began to take steps to weaken the hold of Karadzic and his supporters on key levers of power (see table 4.2). By weakening the hold of Karadzic and the SDS over the media and police, particularly the special police, the international community has reduced his ability to instigate violence against the international community and to block the implementation of the Dayton Agreement. However, his continued control of economic and financial institutions in Republika Srpska, as well as his smuggling activities, diverts revenue from all levels of government and inhibits the entity’s economic recovery. Since June 1997, Karadzic and the SDS have lost substantial power over Bosnian Serb politics and Republika Srpska governmental institutions at the municipal, entity, and national levels, a trend supported by actions of the international community. For example: At the municipal level, the SDS lost control of many municipal governments in western Republika Srpska after the September 1997 elections but retained control either alone or in coalition with the hard-line Serb Radical Party in the eastern part of the entity (see fig. 4.1). Further, a number of newly elected SDS candidates resigned from the SDS and joined President Plavsic’s new party, the SNS, as did many SDS members throughout Republika Srpska. The OSCE ruled after the municipal election that moves between parties by elected councillors were legal. At the entity level, the SDS lost control of the Republika Srpska parliament as a result of elections held on November 22 and 23, 1997. 
Further, according to observers in Bosnia, the election of Dodik as Prime Minister of Republika Srpska led to shock among SDS leaders, who in early February 1998 appeared to be in disarray. At the national level, the international community undercut the ability of Karadzic and other hard-liners—particularly Momcilo Krajisnik, the Bosnian Serb member of Bosnia’s collective Presidency—to impede the functioning of Bosnia’s national institutions by supporting the expanded interpretation of the High Representative’s mandate in early December 1997. The new interpretation of the mandate allows the High Representative to impose interim measures when Bosnia’s political leaders cannot reach agreement and to remove from office any elected representative who consistently does not show up for meetings or otherwise prevents the institutions from effectively conducting their business. As the hold of Karadzic and the SDS over the police, media, and political situation in Republika Srpska has weakened, his popularity among Bosnian Serbs has also declined, according to USIA polls, although he still remains very popular (see fig. 4.2). In 1997, President Plavsic sought to undercut Karadzic’s popularity by conducting an anticorruption media campaign against him and his supporters. While Karadzic has lost a substantial amount of power in Republika Srpska, many international and U.S. officials still believe that he must be arrested and brought to the war crimes tribunal to ensure that the peace process can continue. According to a senior international official, even with the presence of 35,000 SFOR soldiers in Bosnia, the international community appears to be weak and unable to implement Dayton as long as Karadzic remains at large. Annex 7 of the Dayton Agreement gave Bosnia’s 1.3 million refugees and 1 million internally displaced persons the right to freely return to their prewar homes and to have property they lost during the war restored to them. 
Despite these guarantees and intensive efforts of the international community, political leaders of Bosnia’s three ethnic groups, particularly Bosnian Serb and Croat leaders, continued to prevent large numbers of people from returning to their prewar homes across ethnic lines. As a result of their leaders’ intransigence, most of the 200,800 refugees who returned to Bosnia and the 223,000 displaced persons who returned home since the signing of the Dayton Agreement have gone to areas where their ethnic group represents a majority of the population. The annual number of returns across ethnic lines increased from about 9,500 in 1996 to about 39,000 in 1997, for a total of about 48,500 minority returns, as the international community provided a number of political, economic, and security measures to support returns across ethnic lines. As of early 1998, however, major political barriers to minority return had not been addressed, and there were no indications that large-scale, orderly returns would occur during the year without an SFOR security presence. In 1997, UNHCR developed a plan for returning refugees and displaced persons to their prewar homes. The plan recognized the difficulty of returning people to their homes across ethnic lines and therefore established a low estimate for minority returns, which was exceeded during the year. While the overall number of minority returns was low, surveys and reports indicate that a significant majority of affected people do wish to return home across ethnic lines and that the majority of people in Bosnia would support such returns. In the 1997 repatriation and return plan for Bosnia, UNHCR recognized that minority returns would be difficult to accomplish during the year in light of the continuing intransigence of Bosnia’s political leaders and the resulting hostile, insecure environment for returnees. 
The plan, therefore, estimated that although 200,000 refugees would return to areas where they would be in the majority ethnic group, these people would not necessarily return to their prewar homes. UNHCR also hoped for the minority return of 30,000 displaced persons to areas controlled by another ethnic group. UNHCR and the international community planned to use assistance and other means to further these small-scale minority returns and encourage Bosnia’s political climate to change from one of separation to one of reconciliation, thereby allowing larger numbers of minority returns. According to UNHCR data, in 1997 approximately 39,000 people returned to their homes in areas where they were in the minority ethnic group (see table 5.1), compared to 9,500 in 1996, bringing the total number of minority returns to about 48,500 in 1996 and 1997. Most of these returns occurred in the Federation. UNHCR believes that these figures very likely understate actual numbers of minority returns in many areas of Bosnia because many people returned spontaneously to their prewar homes, that is, they were not part of a return program organized by UNHCR or they did not register with the local authorities once they had returned. Appendix VI provides more information on total returns of Bosnia’s refugees and displaced persons in 1996 and 1997. Approximately 79 percent of these returns were to Bosniak-controlled areas, 17 percent to Bosnian Croat-controlled areas, and 3 percent to Bosnian Serb-controlled areas. Most of the people who have returned to minority areas are elderly. According to UNHCR, younger people with families are not returning to areas controlled by another ethnic group because they fear for their personal security and because employment opportunities are lacking. 
In many cases, minority returns took place under very difficult conditions and with strong international support in strategically important or otherwise contentious areas such as Brcko, Stolac, Jajce, and Doboj, areas with limited minority returns as recently as June 1997. As minority returns increased, however, a large number of returning refugees added to the number of displaced persons in Bosnia who could not return to their prewar homes across ethnic lines. While about 120,000 refugees returned to Bosnia in 1997, December 1997 UNHCR and OHR reports indicate that about 50 percent of them, particularly those returning to Bosniak areas, did not go back to their original homes. Instead, they had to relocate to other areas inside Bosnia because their prewar homes were in areas controlled by another ethnic group. According to a December 1997 report by UNHCR and the Commission on Real Property Claims of Displaced Persons and Refugees (hereafter referred to as the real property commission), the “relocation” taking place inside Bosnia by refugees and displaced persons includes (1) passive relocation—the normal case for displaced persons—where the displacement of individuals or groups becomes a de facto permanent condition, although the decision to relocate is not freely made and does not respect property rights of original owners; (2) hostile relocation, which involves the deliberate placement of groups of people in housing belonging to other ethnic groups to secure control over territory and prevent minority return; and (3) voluntary relocation through the sale or exchange of property, which occurs with the consent of both parties (that is, the original owner and the displaced person). A certain degree of voluntary relocation was expected due to the rural-urban labor migration that accompanies the transition from a planned to a market economy. 
However, the passive or hostile relocation of large numbers of refugees and displaced persons, according to UNHCR, is a danger to the peace process because it consolidates ethnic separation. Comprehensive data are not available on how many of Bosnia’s refugees and displaced persons would choose to return home across ethnic lines. However, substantial evidence—including limited polling, observer reports, and the results of the municipal elections—indicates that segments of all three major ethnic groups, particularly Bosniaks, want to return home. According to the December 1997 report by UNHCR and the real property commission, a real property commission survey suggests that while there is an important group considering voluntary relocation, it remains a minority, and that within the Federation, the dominant pressure is for return (see table 5.2). The report further states that the majority of people remain strongly attached to their home of origin, including younger people with families, and are likely to constitute a significant political force for return into the indefinite future. Under current conditions, according to UNHCR and other reports, many people cannot freely choose whether to return home, primarily because they fear for their physical security if they attempt to visit or return to their prewar homes. Both Bosnian Croat and Bosnian Serb authorities have threatened to cut off humanitarian assistance to or otherwise harass people of their own ethnic group if they attempt to return to their prewar homes in areas controlled by other ethnic groups. These authorities want to keep people from their own ethnic group in their area of control to ensure that the original inhabitants cannot return to their prewar homes and to show that their people support ethnically pure states. 
According to public opinion surveys conducted by USIA in January 1998, Bosniaks and Bosnian Croats largely support the right of people, including those from other ethnic groups, to return home; the majority of Bosnian Serbs do not support this right, although support for minority returns among Bosnian Serbs has increased significantly since the beginning of 1997 (see fig. 5.1). Specifically, over 90 percent of Bosniaks have indicated that they support returns of people from other ethnic groups, and about 70 percent of Bosnian Croats do so as well. Bosnian Serb support for returns of people from other ethnic groups rose from 9 percent in January 1997 to 38 percent in January 1998; at the same time, strong opposition among Bosnian Serbs to minority returns decreased from 65 percent to 35 percent. Despite the poll results, a senior international official told us that individual Bosnian Serbs would accept the return of their former neighbors. Officials in Brcko stated that incidents of violence directed toward returnees of other ethnic groups are generally caused by Serbs who are displaced from other areas and refugees who are manipulated by local authorities or are resentful due to the treatment that their families received during and after the war. According to the December 1997 report by UNHCR and the real property commission, all ethnic groups believed that their inability to return home across ethnic lines was caused by “politicians” rather than “ordinary people.” The international community initiated a number of projects in 1997 that condition economic assistance on municipalities’ willingness to accept and create an environment conducive to minority returns, including UNHCR’s Open Cities Initiative and the State Department’s minority return initiative. Unlike prior minority return efforts, these initiatives provide economic assistance to the entire community, rather than only to recent returnees, as a means of facilitating minority returns.
Figure 5.2 shows the locations of cities participating in UNHCR’s Open Cities and the State Department’s minority return initiatives as of April 20, 1998. Table 5.3 shows that during 1997, about 9,560 people had crossed ethnic lines to return home to cities designated as open by UNHCR and/or provided with minority return-related assistance by State. The numbers are low in many cities because the cities only began taking serious steps toward accepting minority returns at the time they were selected to participate in the initiatives. The State Department provided $9 million to support returns to these cities during 1997. As of April 1998, UNHCR had provided approximately $12.6 million to support returns to areas participating in its Open Cities Initiative. UNHCR’s Open Cities Initiative was announced in March 1997 as a means of encouraging minority returns to cities or municipalities where reconciliation between ethnic communities is believed possible. The initiative was also intended to provide an incentive to communities to receive minorities and to reward those communities that were receptive. Under this initiative, UNHCR designates cities or municipalities as “open” based on a common set of criteria that include a genuine and consistent political will on the part of local authorities to allow minority returns, confirmation that minority returns are occurring or will occur without any abuse of returnees, and demonstrated impartiality of the police. UNHCR and international agencies monitor the progress of returns in open cities and provide assistance incrementally as returns proceed. UNHCR’s recognition of Mrkonjic Grad as the first “open city” in Republika Srpska on December 17, 1997, was a major step in light of past resistance from hard-line SDS members, including Karadzic.
At the time of our mid-October 1997 field work in Bosnia, two of the three UNHCR open cities that we visited—Konjic, a Bosniak-majority area, and Busovaca, a Croat-majority area—were actively promoting minority returns. Vogosca, a predominantly Bosniak area in Sarajevo Canton, was not. In Konjic, according to UNHCR and IPTF officials, the Mayor (a Bosniak), the Mayor’s deputy (a Bosnian Croat), and the Chief of Police were all genuinely committed to allowing people from other ethnic groups to return home and to providing security for those who did return. In Busovaca, returnees and people working on their homes in preparation for return told us that they were not afraid to return, nor did they fear that their newly repaired homes would be destroyed. In both locations, significant problems remained in returning people to their homes, such as finding other accommodation for people living in the homes of potential returnees, clearing landmines from farmland, and improving the economy. In Vogosca, according to UNHCR officials, the return initiative had essentially stopped after an incident in early August 1997, during which Bosniak displaced persons disrupted an assessment visit of Bosnian Serbs to their prewar homes in Vogosca. Although the Mayor and cantonal police responded appropriately to the violence by protecting the Bosnian Serbs, local extremist political factions had organized a group of Bosniak women displaced from Srebrenica to disrupt the visit. According to UNHCR officials, this incident effectively halted any efforts at non-Bosniak returns to the area. Through its minority return initiative, implemented by nongovernmental organizations, the State Department committed $9 million in assistance to 13 municipalities during 1997-98—10 in the Federation and 3 in Republika Srpska. As of December 1997, State’s initiative had directly and indirectly facilitated the return of an estimated 1,100 people (225 families).
According to State, in addition to demonstrating progress on minority return, Vares and Bugojno—two Bosniak-majority municipalities controlled by antireturn elements of the SDA—were included in the initiative to underscore the U.S. government’s conviction that minority returns had to and could occur everywhere. According to the State Department, it at times threatened to cut off assistance to Vares when local officials showed signs of not complying with their agreement to allow people of all ethnic groups to return to their homes. The assistance was never stopped because the officials eventually complied with the terms of the agreement. Many minority returns took place in some of the more contentious locations in Bosnia that had seen few returns in 1996 and early 1997. These returns required strong international pressure, as well as SFOR support, to overcome local and higher-level political resistance. Throughout the year, people who attempted to return home across ethnic lines, particularly to strategically important areas, faced extremely difficult, hostile conditions upon their return due to this political resistance. For example, returnees and potential returnees often faced destruction of property (see fig. 5.3); intimidation, beatings, violent evictions, and in some cases murder; the laying of landmines near their homes; local authorities who refused to provide basic services such as water, electricity, or phone service; and local police who did not intervene to protect them or who refused to guarantee their safety. As in 1996, NATO-led forces in Bosnia had to respond to many violent incidents directed against minority returnees. Table 5.4 provides a more detailed description of the difficult circumstances under which people returned to their homes across ethnic lines in the contentious areas of Brcko, Drvar, Jajce, Stolac, and the zone of separation, particularly Doboj.
To facilitate the phased, orderly return of refugees and displaced people to particularly contentious areas, the international community in mid-1997 became more active in supporting security measures for returnees. Most importantly, SFOR provided a security presence in many contentious returnee areas, patrolling in a manner that made its presence visible and generally discouraged incidents of violence against returnees. Figure 5.4 shows patrols by U.S. SFOR in Brcko; Spanish SFOR in Stolac; and British SFOR in Jajce. According to a senior NATO officer, NATO plans to add a specialized unit to its military force in Bosnia after June 1998. NATO expects that this unit would allow SFOR to enhance its security presence in minority return areas. By the fall of 1997, IPTF’s efforts to integrate Federation police forces were showing some early, encouraging results. In October 1997, joint Bosniak-Croat police patrols were cited by returnees in Jajce and Busovaca as an important factor in increasing their sense of security. Returnees told us that they believed the police would help them if they requested assistance. In Stolac, Bosniak police had just arrived and were not yet jointly patrolling with Bosnian Croat police; still, returnees viewed their presence as a positive sign. A senior human rights observer in Bosnia told us that where joint police patrols have been instituted—thus far only in the Federation—security conditions and human rights in general have improved. Returnees and observers also stated, however, that SFOR needed to continue its presence in contentious areas to ensure that security problems did not occur. During 1997, the international community also created a number of commissions that oversee the returns process and attempt to ensure that minority returns do not spark violence.
For example, after numerous incidents in the zone of separation, the European Commission, IFOR, IPTF, OHR, and UNHCR in 1996 established a commission to develop procedures for, and monitor progress in, returning people to their homes. A similar international commission was established for monitoring returns to Brcko under the auspices of the Brcko Supervisor. The Supervisor is strictly managing the returns process there in close consultation with SFOR, IPTF, and UNHCR to reduce the likelihood of violent incidents. Although the authorities of many municipalities are not supporting minority returns, donors still provide economic reconstruction funds to them as a means of assisting in the revitalization of the economy and encouraging compliance with the provisions of the Dayton Agreement. For example, under the Municipal Infrastructure and Services Project, USAID has funded small-scale economic assistance projects in many municipalities that have not been declared “open” by UNHCR or provided with minority return assistance by State. Between July and December 1997, USAID signed memorandums of understanding with 26 such municipalities (excluding those in Sarajevo Canton)—21 in the Federation and 5 in Pale/SDS-controlled areas of Republika Srpska—and provided $72.7 million in economic assistance to them ($56.9 million to the Federation and $15.8 million to Republika Srpska). USAID’s memorandums of understanding with these municipalities state, among other things, that municipal officials agree to support returns of people, including those from other ethnic groups. A senior USAID official told us that the USAID mission does not have the resources to monitor whether municipalities are complying with these conditions.
In April 1998, in commenting on a draft of this report, USAID stated that it does require municipalities to demonstrate that they are fulfilling the commitments made in the memorandums of understanding; those that “blatantly disregard” the memorandum lose the assistance. USAID said that through nongovernmental organizations, other donors, USAID contractors, and other groups and individuals working in Bosnia, it is able to monitor the commitment of a municipality to live up to its agreements. USAID also commented that it only invests in municipalities that are already, by and large, in compliance with the conditions contained in its memorandums. However, our examination and the observations of other international observers show that some of the municipalities that have signed memorandums and received assistance, such as Doboj, have exhibited poor performance on minority returns and continue to obstruct the returns process. The single largest area of minority returns in 1997 was Sarajevo, to which 13,300 Bosnian Croats returned. The return of minorities to Sarajevo is crucial to support the city’s status both as the capital of the Federation and of Bosnia and as a model of co-existence and tolerance for the rest of the country. Further, returns of displaced Bosnian Serbs to Sarajevo would help open up housing for non-Serb returnees to Brcko. During 1998, the international community will push for increased returns of non-Bosniaks to Sarajevo. To move this effort forward, in February 1998 international and Bosnian officials adopted the Sarajevo Declaration, which is designed to guide and accelerate the return of minorities to Sarajevo. The declaration contains the general principles that must be followed and the legislative, housing, education, employment, public order, and security issues that must be addressed to enable Bosnian Serbs and Croats to return.
In addition, it assigns specific tasks and related deadlines to various organizations such as OHR’s Reconstruction and Return Task Force; local police; and the Federation Ministry of Social Affairs, Displaced Persons and Refugees. The declaration also calls for the establishment of a Sarajevo Return Commission, composed of relevant international and Bosnian officials. The commission’s role is to oversee the implementation of the provisions of the declaration. Officials from State, UNHCR, and Bosnia’s municipalities have identified several unresolved issues that, even with the security presence provided by SFOR, are hindering minority returns in Bosnia. These issues include (1) breaking the logjam of people living in the homes of potential returnees, (2) revising existing property legislation so that minority returnees can reclaim their homes, and (3) reducing the level of unemployment. Potential minority returnees often cannot return home because their homes are occupied by people of the majority ethnic group. During our fieldwork, international and local observers described three categories of people who are living in the homes of potential returnees: (1) displaced persons of the majority ethnic group who cannot safely return home across ethnic lines or who are afraid to cross ethnic lines to return home; (2) Croatian Serb refugees in Republika Srpska who cannot return home to Croatia because the Croatian government has not created conditions for their return; and (3) people of the majority ethnic group who moved to the city from nearby villages during the war. People in the last category choose to stay in their city homes even though their prewar homes are located in areas controlled by their own ethnic group.
These people sometimes remain in their city homes while their family members move back to their prewar homes in nearby villages, a situation referred to by UNHCR and State as “double occupancy.” During 1997, according to OHR and State, property laws in both entities did not comply with the provisions of the Dayton Agreement and continued to be the largest source of complaints brought to human rights monitors and institutions. For example, the Federation law on abandoned apartments required persons who left socially-owned apartments during the war to reclaim their property within 15 days of the cessation of hostilities. Since most people could not return within the established time frame, the law ensured that the original occupants could not return to the apartments they occupied before the war. Consequently, this law and others placed insurmountable legal barriers in the path of returnees, effectively blocking hundreds of thousands of people from returning to their homes. In March 1998, according to OHR and USAID, the Federation, under intense international pressure, passed property legislation that complied with the Dayton Agreement. However, since the laws had only recently been passed, the policies and procedures necessary to implement them had not been completed. Republika Srpska had yet to pass any property legislation that complied with the Dayton Agreement. Despite the appearance of growth in major cities like Sarajevo, some municipalities are experiencing grave economic conditions. Unemployment is high, and people continue to depend on humanitarian assistance, remittances from relatives living abroad, and black market activity. Unemployment is considerably higher in small villages. Potential returnees view the lack of employment as another reason not to return, and those who have already returned view new returnees as threats to their future employment. The employment issue must be solved in order for large-scale minority returns to occur.
UNHCR’s 1998 repatriation and return plan for Bosnia calls for the international community to focus its efforts on minority returns of refugees and displaced persons. In October 1997, international observers noted some positive signs and improved prospects for creating conditions that would favor minority returns. These include the political crisis and potential change in government in Republika Srpska, the softening of attitudes of some Bosnian Serb political leaders, the results of the September 1997 municipal elections, and the progress in developing and implementing a cantonwide return plan in the Federation’s Central Bosnia Canton. However, as of early 1998, major political barriers to minority returns had not been addressed, and there were no indications that large-scale, orderly returns would occur during the year without an SFOR security presence. UNHCR’s main priority in 1998 will be the repatriation of refugees and the return of displaced people to minority areas in Bosnia. In its plan, UNHCR estimates that as many as 220,000 refugees could return to Bosnia in 1998. The actual level of return is contingent upon the occurrence of several actions, including the (1) return of 50,000 minority displaced people to their prewar homes by June 1998 (which would open up housing belonging to refugees and allow them to return home); (2) progress in the normalization of relations among states in the region; and (3) implementation of policy decisions by west European states hosting refugees that would force nonvoluntary returns and would encourage voluntary returns. Progress in normalizing relations among Bosnia, Croatia, and the Federal Republic of Yugoslavia must occur for these states to develop and implement a coordinated effort to accept potential returnees currently residing in each of these states. 
In December 1997, the Peace Implementation Council directed UNHCR, in cooperation with authorities of each country in the region and with relevant international organizations, including the OHR, to develop a regional return strategy. As of April 1998, the strategy had not been completed. Policy decisions made by west European states hosting refugees could force or encourage large numbers of people to return. If there are no changes in the policies of the countries hosting refugees, the refugees may decide to remain where they are. UNHCR realizes that if the actions do not occur, the level of refugee returns in 1998 could be much lower than in 1997. Even if the actions do take place, UNHCR believes that Bosnia may be unable to absorb 220,000 refugees due to continued housing and employment problems. UNHCR hopes that the Open Cities Initiative and other efforts to encourage minority returns will help overcome housing shortages, unemployment, and other obstacles and lead to a significant increase in minority returns. UNHCR expects to see a considerable number of open cities recognized in 1998. Potential open cities include Donji Vakuf, Tuzla, and Bosanski Petrovac in the Federation and Ribnik and Banja Luka in Republika Srpska. International officials acknowledge that to accomplish this, a strong NATO-led military presence will be required throughout at least 1998, but that in the long term, security will have to be provided by Bosnians, rather than the international community. Although there were no indications as of April 1998 that large-scale, orderly returns would occur during the year without an SFOR security presence, a number of statements by President Plavsic and the results of Republika Srpska Assembly and Bosnian municipal elections are seen as positive steps toward creating an environment more conducive to the return of minorities. 
International observers in Bosnia view President Plavsic and other moderate Bosnian Serbs as more open than SDS political leaders to returns of other ethnic groups to Republika Srpska, particularly returns to areas where these ethnic groups would not constitute a majority. In late 1997, Plavsic told UNHCR that all of Banja Luka’s original inhabitants would be welcome to return, while noting that solutions would need to be found for refugees and displaced people currently living in the city. The election of a more moderate Republika Srpska parliament in November 1997 and a new Prime Minister in January 1998 is also viewed as a positive step toward solving the problem of minority returns. In February 1998, the new Prime Minister stated that his goal was to have 70,000 non-Serbs return to Republika Srpska during the year. He also recognized, however, that there are “realistic problems” that may prevent them from returning, including the 35,000 Serbs from other parts of Bosnia and from Croatia who cannot return home and are living in houses belonging to non-Serbs. The municipal elections held in 1997 are viewed by the international community as a positive step toward creating favorable conditions for minority returns. The elections could provide potential returnees with a sense of security because they believe the newly elected leaders will support them when they return. As of early May 1998, 133 of the 136 municipal governments had been certified by OSCE as formed. However, much work remains to be done to make them functioning governments. In anticipation of larger numbers of minority returns in 1998, SFOR and OHR’s Reconstruction and Return Task Force developed plans to facilitate the phased and orderly return of refugees and displaced people.
Likewise, the implementation of the Central Bosnia Canton Return Plan demonstrates to both the international community and potential returnees that the authorities in this area are willing to take steps to create an environment that encourages people to return to their prewar homes. It is estimated that, if completed, the plan could benefit over 100,000 people. According to a senior executive branch official, the Federation and Republika Srpska must develop integrated return policies and procedures that are self-managed and effective. Until this is done, the international community, with the support of SFOR, will have to remain in Bosnia to ensure the right of people to return to their prewar homes. The Dayton Agreement’s goals for the economy of Bosnia and Herzegovina include economic reconstruction, building national government and Federation economic institutions, and promoting the transition from a command economy to a market economy. To support these goals, the government of Bosnia, with the assistance of the international community, designed a 3- to 4-year, $5.1-billion assistance program known as the Priority Reconstruction Program. This program gave the international community a framework for the economic reconstruction and integration of Bosnia. In the program’s first year, 59 donors—48 countries and 11 organizations—pledged $1.9 billion for Bosnia’s reconstruction program at two donors’ conferences held in December 1995 and April 1996. During 1997, the pace of donor contributions slowed somewhat, as 31 of the program’s original donors pledged an additional $1.2 billion for Bosnia’s economic reconstruction, for a total pledge of $3.1 billion.

Economic conditions continued to improve throughout Bosnia in 1997, although progress in Republika Srpska still lagged because donors were withholding assistance due to ongoing noncompliance by hard-line Bosnian Serb political leaders.
Signs of progress in the economic reconstruction program were evident throughout 1997. However, the continued obstruction and improper economic and fiscal practices of Bosnia’s political leaders threatened Bosnia’s economic recovery. The international community and Bosnia’s governments recommended actions in 1997 to address shortcomings in Bosnia’s public finance system that could generate opportunities for fraud and corruption and lead to improper use of economic assistance going to Bosnia. By the end of the year, donors’ practice of attaching political conditions to economic assistance had contributed to some important political changes in Bosnia, but it had not increased the level of cooperation of hard-line Bosnian Serb or Croat political leaders. International donor support to Bosnia’s reconstruction program continued in 1997, but the pace of donor contributions slowed from its 1996 pace. At a meeting in Brussels in January 1997, international donors estimated that the program needed $2.5 billion for 1997-98, of which the 1997 requirement was $1.4 billion. The $1.2 billion pledged at the third donors’ conference in July 1997 fell short of this goal, and the total number of donors declined from 59 in 1996 to 31 in 1997. The World Bank and European Commission cited delays in holding the third donors’ conference and the political turmoil in Republika Srpska as having contributed to the slowdown in new donor contributions. According to an OHR report, the third donors’ conference was scheduled to take place at the beginning of 1997. However, it was postponed several times due to the failure of Bosnia’s political leaders to meet the necessary conditions, including the adoption of economic laws—known as the “Quick Start Package”—related to the Central Bank, national budget, external debt management, and customs policies.
The approval of these laws by Bosnia’s parliament on June 20, as well as the agreement reached between the IMF and Bosnia’s authorities on almost all of the elements of a draft agreement on a letter of intent requesting an IMF standby arrangement, cleared the way for the third donors’ conference to be held on July 23 and 24, 1997. The U.S. government, primarily through USAID, committed $294.4 million during 1996 and $234.4 million during 1997 for economic reconstruction. These funds have been primarily used to repair municipal infrastructure and provide municipal services, small business loans, and technical assistance for the development of national and Federation economic institutions. In October 1997, international officials in Bosnia told us that USAID’s reconstruction and technical assistance projects were the first to be implemented and the first to show results in many areas of the country. During 1996 and 1997, donors committed about $3.3 billion to the Priority Reconstruction Program. With $528.79 million in commitments, the United States was the second leading individual donor after the European Commission ($698.64 million). As a group, European donors contributed 48.8 percent of the committed funds, and the United States contributed 16.2 percent (see fig. 6.1). Of the $3.3 billion committed during the program’s first 2 years, an estimated $1.7 billion—52 percent of the committed funds—had been expended, that is, spent on the ground. The United States expended more funds than any other donor, about $347.5 million, or 66 percent of U.S. commitments. Appendix VI provides more information on the Priority Reconstruction Program. The economy continued to grow significantly but unevenly in 1997. In a number of areas where donor support has been particularly strong—including housing, fiscal and social support, industry/finance, employment generation, and education—implementation has proceeded at a steady pace.
Further, the pace of clearing landmines accelerated, and there were positive signs of reestablishing economic links between the ethnic groups during the year. In some areas where there have been political disagreements, such as telecommunications and railways, the progress has been slow. The creation and strengthening of common government institutions continues to be a major challenge. Economic growth in Bosnia, estimated to have been 50 percent in 1996 according to the World Bank, was expected to slow somewhat in 1997 to a growth rate of 35 percent. According to PlanEcon, in mid-1997 the economy was at roughly one-fifth its prewar level, up somewhat from the World Bank’s estimate of 10-15 percent for 1996. Unemployment, albeit down from its postwar high of 90 percent, is still very high—around an estimated 30 to 40 percent of the labor force at the end of 1997—with wide regional variations throughout the country. These overall unemployment rates are comparable to those in the immediate prewar period (27 percent in 1991). Economic recovery in the Federation has been far more robust than in Republika Srpska, which in 1996 had received only 3.2 percent of the international aid being implemented due to the noncompliance of its political leaders with the Dayton Agreement. According to OHR data, gross domestic product in Republika Srpska is estimated at less than a quarter of that of the Federation. In mid-1997, wages in the Federation varied by sector and by canton between $140-$200 per month; in Republika Srpska, wages were estimated to be $48 a month, with severe delays in wage payments. After 2 years of reconstruction, progress continued to be made in key sectors of the economic reconstruction program.
For example, some 60,000 private houses or public apartment units, benefiting some 250,000 people, have been repaired or have received repair assistance; at least $62 million financed social programs for the most vulnerable in the population—the children, the elderly, and the disabled; about $120 million in small- and medium-sized business loans have helped revive commerce and have generated some 18,000 permanent new jobs; about 200 public works projects were completed in 98 municipalities (70 in the Federation and 28 in Republika Srpska), resulting in the creation of 25,000 person-months of employment in addition to the 10,000 person-months in 1996, with priority given to areas with high unemployment, heavy war damage, and high levels of displaced persons and refugee returns; donor assistance has been critical in the rehabilitation of some 490 primary schools and 90 kindergartens; and the Sarajevo airport continues to be open for commercial service, about 900 kilometers of the main road network have been completed, and 14 major bridges have been reconstructed. As of April 1998, one of USAID’s major economic assistance projects, the Municipal Infrastructure and Services Project, had helped generate an estimated 5,000 short-term jobs and provided an estimated 17,000 people with permanent employment. These funds have gone toward such things as repairs or construction of water supply systems, bridges, railroads, schools, and hospitals (see fig. 6.2). In addition, 8,700 demobilized soldiers were temporarily employed through about 300 Community Infrastructure Rehabilitation Projects that were funded by USAID and administered by SFOR soldiers in the U.S. military sector.
Further, as of October 1997, USAID’s Bosnian Reconstruction Finance Facility program had disbursed $49 million in loans, averaging $485,000, for businesses such as clothing and shoe manufacturing; baked goods, fruit juice, and dairy production; furniture manufacturing; construction; sawmills; and agriculture. Appendix I provides more information on USAID’s economic reconstruction and stabilization programs. Progress was made in 1997 in clearing landmines and in developing Bosnia’s capacity to manage a mine clearance program. However, the country’s estimated 1 million landmines remained a significant threat—particularly along the former front lines and in strategically important areas where the parties remained reluctant to remove them—and continued to inhibit economic reconstruction and returns of people to their prewar homes. Donors funded over 1,000 deminers in Bosnia, who removed 28,425 landmines and 19,572 pieces of unexploded ordnance during the year. These efforts opened up roads and railways and allowed access to homes and farmland that had been unusable because of landmines or because people feared that landmines were present. Further, in January 1997, a National Commission for Demining was organized to take over demining responsibility for the country. On December 24, 1997, after a hard-line SDS member of the Council of Ministers refused to sign the documents that would make the commission a legal entity, the High Representative ordered the commission formally established. Appendix VII provides more information on Bosnia’s demining program. Moreover, often with intense international involvement and pressure, Bosnia’s political leaders and people took first steps during 1997 and early 1998 toward linking the ethnic groups economically, a major change from 1996, when they generally refused to cooperate across ethnic lines.
The new, relatively moderate Republika Srpska government was credited with facilitating the delivery of mail from Sarajevo to Banja Luka and the signing of a memorandum of understanding on the resumption of rail service between the two entities. Table 6.1 provides a description of important links that were established during the year. During the year, business people showed signs of reestablishing cross-ethnic economic ties that had been broken by the war. For example, with USAID support, small business associations were established throughout each entity as a step toward developing a countrywide small business association. Further, the first postwar Sarajevo business fair was held in Banja Luka on November 26, 1997, and a Banja Luka trade fair was held in Sarajevo on February 25, 1998. Despite these initiatives, there is no consensus among ethnic groups on economic cooperation. USIA polling data from February 1998 show that, given the choice between economic independence and cooperation between the two entities, only Bosniaks (83 percent) clearly favor working together. A majority of Bosnian Serbs (61 percent) say they prefer economic independence, and Bosnian Croats are more evenly divided (50 percent favor economic independence, and 41 percent favor working together). Previous USIA surveys have shown that the majority of people from each of the three ethnic groups support trade with the other groups, suggesting that opposition to economic cooperation in principle may be outweighed by practical economic opportunities. Despite favorable steps in Bosnia’s economic reconstruction, in early December 1997 the Peace Implementation Council expressed concern that Bosnia’s political leaders were placing reconstruction and sustained economic growth at risk by, among other things, allowing the common institutions’ shortcomings to impede sound economic management and their political differences to slow down the pace of economic transition.
Most importantly, Bosnia’s political leaders had only partially implemented the key economic legislation passed on June 20, 1997. They had not adopted national-level legislation called for by the council in May 1997. According to a council document, as of early December, the lack of an economic policy framework was preventing an IMF standby arrangement and World Bank adjustment lending, thus rendering the country vulnerable to financial crisis. To address these problems, the Peace Implementation Council called on Bosnia’s national authorities to agree on a common approach to the standby arrangements and open negotiations with the IMF without delay. The council also established a number of short-term deadlines for actions related to steps that the parties had thus far refused to take. Table 6.2 shows the status of actions called for by the council, with deadlines up to March 1, 1998. In December 1997, the Peace Implementation Council said that Bosnia’s economic recovery was being threatened by, among other things, the parties’ insufficient action against fraud and the lack of transparency in the use of public funds. In late 1997, OHR, the World Bank, and major donors concluded that donor assistance had not been used inappropriately by the Bosnian or entity governments; however, they acknowledged that legislative and administrative shortcomings in public finance generated opportunities for fraud that have been exploited in the areas of (1) public revenue collection, specifically the evasion of customs duties and sales taxes; (2) the misappropriation of public funds; and (3) activities of extrabudgetary institutions. To address the problem of government corruption and prevent the misuse of donor assistance, OHR, USAID, the European Commission’s Customs and Fiscal Assistance Office, the World Bank, and the Federation government have instituted a number of measures to investigate and combat the inappropriate use of donor funds and corruption.
In late 1997, the High Representative and other representatives of the international community stated that there was no evidence of corruption related to donor funds. In a proposed anticorruption strategy presented to the Peace Implementation Council in December, the High Representative said that major donor funds for World Bank reconstruction projects were fully accounted for and adequately monitored and audited. The High Representative, however, also said that the lack of coordination with smaller donor organizations, such as private voluntary organizations, could lead to multiple funding of the same project activities. He also noted that weak project management by these organizations could lead to overcharging for goods and services by contractors and suppliers. Although the donor community identified no diversion of donor assistance funds, it pointed out the need for more transparency and continued vigilance in the accounting for and use of international assistance funds. To ensure that USAID’s program funding is accounted for and used appropriately, USAID’s Office of Inspector General has completed a series of audits of the agency’s two major assistance efforts in Bosnia, the Municipal Infrastructure and Services project and the Bosnian Reconstruction Finance Facility program. These audits, which have been conducted on a periodic basis throughout the life of the programs, have not identified any major systemic internal control weaknesses or misuse of program funds. According to the State Department, other donors have similar systems for auditing and accounting to safeguard against fraud. Investigations conducted by the European Commission’s Customs and Fiscal Assistance Office (hereafter referred to as the customs assistance office) have identified incidents of corruption involving government customs and purchasing organizations.
The corrupt practices include (1) diversion of customs duties to parallel government structures, (2) false transit destination documentation, (3) undervaluation of imported goods, (4) false certificates of origin on imported goods, (5) abuse of duty-free shop concessions, (6) abuse of duty-free warehouse concessions, and (7) commercial smuggling at guarded customs posts. The customs assistance office estimates that customs fraud in the Federation alone cost the entity government approximately $56 million over a 1-year period. The customs assistance office was established in January 1996 to help Bosnia form a coherent customs system at the national and entity levels. In addition, the office facilitates coordination and cooperation between entity customs administrations by verifying customs documentation on a random basis and provides advice to the customs administrations. While executing these tasks, officials from the office uncovered systematic transit fraud involving more than 300 high-duty consignments declared as in transit across the Federation to Republika Srpska. The goods never reached their declared destination, and the customs duty deposits, paid at the border, were reclaimed by the criminals through the use of false receipts issued by Republika Srpska customs officials. These illegal practices resulted in the loss of customs duties and tax revenues of about $11 million over a 6-month period. The customs assistance office recommended that, among other things, both entity governments take immediate action, including legal proceedings, to stop the smuggling of goods and associated loss of revenue. In another investigation, the customs assistance office found that the Bosniak-controlled and Bosnian Croat-controlled State Directorates for Strategic Reserves, which were supposed to cease to exist after the signing of the Federation constitution in 1994, were importing large quantities of fuel and goods duty free. 
The resulting loss of revenue incurred by the Federation government was estimated at about $11 million over a 1-year period. The results of the investigations were presented in two reports that were given to the Federation Minister of Finance. In response to the reports and the resulting media publicity, according to a customs assistance office official, the Federation Minister of Finance replaced the Director and Deputy Director of the Federation Customs Administration and four other Customs Administration officials. The Republika Srpska Customs Director fired all eight of the entity’s customs-house managers. In addition, the Federation Ministry of Finance conducted investigations of the operations of the Bosniak and Bosnian Croat State Directorates for Strategic Reserves. As of January 1998, the investigation of the Bosnian Croat Directorate was complete, and the Directorate had ceased operations. An agreement was reached at the December 1997 Peace Implementation Conference to close the Bosniak Directorate. An OHR official stated that the Bosniak Directorate will be closed as soon as the contracts it has entered into can be completed; as of April 1998, it was still operating. In December 1997, the World Bank reported on problems in the budgeting and financial management of entity-level governments that could result in international assistance replacing diverted government funds. The bank reported that many opportunities exist for the misappropriation of government funds, a problem shared by other successor states of the former Yugoslavia. Although the World Bank identified the problem, it was unable to determine the extent to which opportunities for misappropriation are being exploited. The national and entity governments and the international community have established a number of organizations and provided assistance designed to address the issue of corruption in donor assistance and in government operations and revenues (see table 6.3).
According to an IPTF official, IPTF intends to work with ministries in both entities in 1998 to improve their capacity to identify and deal with financial crime that corrupts public institutions. As part of this effort, IPTF plans to extend its monitoring and advisory work to this area of law enforcement, to train entity police forces in the detection of financial crime, organized crime, smuggling, and corruption, and to assist in the development of special anticorruption units. To implement these plans, a number of experts in financial crime will need to be hired to form a specialized training team. As of March 1998, budget and staffing estimates had been developed for the team, but no specific date for its implementation had been established. The customs assistance office is continuing to assist Bosnia’s national and entity-level governments in updating their systems of customs laws and tariffs and in modernizing customs operations through the computerization of procedures and the training of customs personnel in customs operations and investigation. In December 1997, the Peace Implementation Council urged Bosnia’s entity authorities to extend the office’s mandate to cover all indirect taxes levied by national or entity governments. The council also required the national and entity governments to give the customs assistance office access to all relevant customs and fiscal records. In January 1998, the office began conducting an investigation into the valuation of imported goods and an examination of the organization and administration of the Federation tax administration. The investigation pertaining to the valuation of goods was still ongoing as of April 1998; however, it had found that undervaluation of goods is endemic and is responsible for multimillion-dollar losses of revenue to the Federation budget.
The tax administration examination was completed in March 1998 and did not find any hard evidence of corruption; however, it did find evidence of major tax evasion. In February 1998, the new Republika Srpska government drafted a decision to allow the office to examine its customs and tax administrations. A customs assistance office official stated that Republika Srpska officials were doing their best to provide all of the information requested for the office’s examination. USAID has implemented a number of projects to address public accountability and transparency and combat corruption in a systemic manner. USAID’s ongoing and planned programs include activities that (1) support the federal, cantonal, and municipal governments in developing budgets and financial management systems that are transparent and meet international standards; (2) provide training to customs officers to increase their professionalism and establish a code of ethics; (3) increase the Federation and Republika Srpska banking agencies’ capacity to combat white collar crime; (4) assist the Federation government in the revision of the criminal code; and (5) support the drafting of key commercial laws that are essential to any anticorruption effort. USAID also conducted a study of corruption in Bosnia and drafted a strategy to address corruption in a more comprehensive manner. The study stated that for the economic and democratic development of Bosnia to succeed, the large-scale fraud and corruption in the government must be reduced substantially. Bank fraud, customs fraud, tax fraud, procurement fraud, bribery, extortion, and an active organized crime network severely undermine economic and democratic reforms. The losses resulting from fraud and corruption appear massive yet cannot be quantified accurately due to the lack of transparency in government and business operations.
The strategy developed by USAID consists of introducing a legislative agenda; federalizing law enforcement; improving governmental budgeting, accounting, and auditing; and implementing a massive public and legal education and training campaign. The Peace Implementation Council and international donors have stated repeatedly since December 1996 that economic assistance provided to Bosnia is conditioned—both negatively and positively—on the compliance of Bosnia’s political leaders with political provisions of the Dayton Agreement. By placing political conditions on economic assistance, the international community has attempted to give additional impetus to the peace process by rewarding authorities at all levels who cooperate with the international community in the implementation of Dayton, withholding assistance from authorities who obstruct the peace process, and encouraging change by linking assistance to improvements in complying with specific aspects of the agreement. At the July 1997 donors’ conference, the task of coordinating donors’ efforts to implement political conditionality was assigned to OHR’s economic task force, which established guidelines for donors to follow for certain projects. By late 1997, donors’ attachment of political conditions to economic assistance had resulted in some important political changes in Bosnia, but it had not increased the level of cooperation of hard-line Bosnian Serb or Croat political leaders. In 1997, OHR’s economic task force determined that applying strict rules to determine when and how to condition assistance would not achieve the international community’s intended objectives because the various donors operate differently, the situation in Bosnia is in a constant state of change, and available information on recipients is imperfect. Consequently, the task force uses a set of general guidelines, which are applied on a case-by-case basis to assess the applicability of political conditionality to assistance projects.
The task force’s guidelines call for assistance to be withheld from (1) municipalities where authorities actively obstruct the peace process, (2) institutions and companies controlled by indicted war criminals, and (3) persons actively involved in obstructing the peace process. The guidelines also state that donors should focus housing projects on municipalities that allow significant minority returns and should consult the economic task force on all projects over $10 million before either approving them or suspending them on noneconomic grounds. USAID has attached political conditions to its two major economic reconstruction projects—the Bosnian Reconstruction Finance Facility program and the Municipal Infrastructure and Services project—since the programs started in 1996. For example, USAID requires municipal authorities that want assistance under the Municipal Infrastructure and Services project to sign memorandums of understanding stating that, among other things, (1) the people living in the municipality agree to abide by the principles of the Dayton Agreement and will support the return of displaced people who want to move back to their homes regardless of their religion or ethnic origin; (2) the municipality agrees to allow freedom of movement for all persons, at all times, and the police will enforce and honor this right under the law; and (3) the municipality certifies that no indicted war criminal is a member of the municipal government or is involved in the operation and maintenance of any project funded by the program. However, in October 1997 and February 1998, USAID officials stated that the USAID mission did not have the resources to effectively monitor the assistance to ensure that municipalities or companies comply with the provisions in the memorandums. According to the Mission Director in Sarajevo, he was unable to gain approval to hire an additional staff person to monitor compliance with the memorandums.
Instead, USAID had informal monitoring procedures, relying on information from its contractors, State’s refugee office, IPTF, OHR, and other international monitors. Although this information was often “episodic” and varied greatly depending on the source, this official believed that by and large USAID had a fairly good, impressionistic view of how municipalities were doing in terms of complying with conditions placed on assistance. This official also said that USAID never expected that the memorandums would bring about a major change in municipalities; instead, they were intended to show at the grass-roots level that the international community would support those who support Dayton. In some Republika Srpska municipalities, such as Doboj and Bijeljina, USAID now expects a good deal of forward movement in implementing Dayton due to the changing political conditions there. A USAID official said that monitoring efforts are made more difficult by the lack of a master list of which municipalities are complying with the Dayton Agreement. OHR’s economic task force had planned to produce a list of the municipalities that were not complying with the Dayton Agreement in 1997. However, as of December 1997, according to State officials, OHR had not done so. OHR and other officials told us that the international donor community would request a list from OHR during 1998. After the election of the new, moderate Republika Srpska government, the U.S. government pledged to provide increased assistance to Republika Srpska. However, human rights organizations have expressed concerns that this assistance would be going to municipalities that do not meet the conditions of USAID memorandums, particularly the condition related to people indicted for war crimes. In early February 1998, a USAID official said that due to a lack of USAID resources, it would be difficult for the mission to monitor the new tranche of assistance that the U.S.
government plans to provide to the new Republika Srpska government. According to USAID, U.S. assistance to Republika Srpska in 1998 is estimated to be $60 million, including $21 million for reconstruction activities implemented as part of the Municipal Infrastructure and Services project. In the past, USAID has stated that it would provide up to one-third of its total assistance for Bosnia to Republika Srpska if the government complied with the provisions of the Dayton Agreement. An agreement with OHR for an additional $5 million grant in budgetary support for the Republika Srpska government has been signed to pay back salaries for government employees; employees of the Ministries of Justice, Defense, and Interior will not be paid with U.S. funds. Other donors have assisted in this effort as well. According to a USAID mission official, USAID’s Inspector General’s office and the mission’s controller in Sarajevo are working with OHR to monitor this support. In commenting on a draft of this report, in April 1998 USAID officials stated that USAID does adequately monitor existing assistance and will monitor the new tranche of assistance to municipalities through on-site visits and information provided by its contractors, the State Department’s refugee office, IPTF, OHR, nongovernmental organizations, and other international monitors. The mission plans to hire a staff person dedicated to monitoring and recognizes that further monitoring of projects would necessitate additional staffing. According to U.S. and other international officials, the use of conditionality in providing economic assistance has contributed to the political split in Republika Srpska and supported the relatively moderate forces there as they worked to install a new, relatively moderate entity-level government. It has also encouraged some minority returns in some municipalities, as discussed in chapter 5.
The use of conditionality, however, has not yet affected the attitudes or actions of hard-line Bosnian Serb and Croat political leaders in complying with Dayton. In March 1997, State and USAID officials told us that some Bosnian Serb political leaders, including President Plavsic, had shown a willingness to accept economic assistance that includes conditions such as employing multiethnic work forces; however, there were no tangible results in this area as of late June 1997 because hard-line Bosnian Serb political leaders, particularly Karadzic, were blocking every attempt of moderate Bosnian Serb political leaders to work with the international community. These leaders, according to State, were willing to accept conditional assistance because they saw the growing gap in economic recovery between the Federation and Republika Srpska. Starting in July 1997, events in the Republika Srpska political crisis indicated that the conditioning of economic assistance contributed to the political split in Republika Srpska. Specifically, the conditioning of assistance helped President Plavsic and the more moderate Bosnian Serb political leaders demonstrate how the unwillingness of hard-line leaders to comply with the Dayton Agreement was preventing Republika Srpska from receiving assistance, thereby slowing the entity’s economic recovery and causing people to suffer. In July, State officials told us that there was increasing evidence that elected officials of Republika Srpska were under mounting political pressure to make the necessary concessions to qualify for reconstruction assistance. Specifically, President Plavsic had just started to move away from the more extreme SDS leadership in Pale. During this time, Plavsic openly argued that these SDS leaders, led by Karadzic, were enriching themselves through corruption and not complying with Dayton; as a result, Plavsic argued, the Serb people were being denied reconstruction assistance.
After being elected on January 18, 1998, the new Prime Minister publicly stated that he would help promote returns of other ethnic groups to Republika Srpska and would encourage indictees to surrender to the war crimes tribunal if the international community would provide economic assistance to the new government. Despite these promising developments and indications that conditioning assistance was proving effective in encouraging some municipalities to accept returns, U.S. and other international officials told us that applying conditions to economic assistance had not changed the attitudes of hard-line Bosnian Serb and Croat political leaders and separatists. Further, it had not resulted in Bosnian Serb authorities surrendering indictees to the war crimes tribunal. According to these officials, conditioning economic assistance has had no impact on hard-line SDS authorities who are loyal to Karadzic because they have other sources of funding, for example, smuggling and other illegal activities. Nor has it had an impact on hard-line Bosnian Croat authorities, because (1) they obtain assistance from Croatia and from illegal activities and (2) the areas they control have received relatively little international economic assistance, as those areas were relatively undamaged by the war. DOD, USAID, and the State Department provided written comments on a draft of this report. DOD generally concurred with the report, and USAID commented further on the progress that has been made in Bosnia over the past year. State commented that the report acknowledges and catalogs many of the significant successes recorded over the last year in the implementation of the Dayton Agreement but does not sufficiently convey the momentum, hope, and prospects that the developments of the last half of 1997 and the first few months of 1998 have brought to the overall circumstances in Bosnia.
In particular, State identified a number of changes that have occurred since late spring of 1997 that give cause for optimism. These include the ability of Bosnians to move more freely around the country, further democratization and pluralism in the political arena, and advances in arms control. Although State agreed that caution is in order, it noted its inclination to be somewhat more optimistic than the report. While we agree with State that there is some cause for optimism in Bosnia, the facts, events, and progress suggest that one may want to view Bosnia’s future with greater caution than State does. We believe that the report strikes an appropriate balance in describing the progress in achieving the goals of the Dayton Agreement and the challenges that remain. The report discusses in some detail the events referred to by State and specifically states that the pace of implementing the Dayton Agreement has accelerated. However, as noted in the Executive Summary and throughout the report, this progress was achieved largely because of intense international pressure and involvement, the momentum for continued progress is not self-sustaining, and conditions will have to improve significantly before international military forces could substantially draw down. It is widely accepted in the international community that, even with the accelerated pace of implementing the agreement, it will likely be some time before these conditions are realized. Further, while events in the last half of 1997 and early 1998 give cause for optimism, more recent events in March and April 1998—specifically, an increase in incidents of ethnic conflict associated with people crossing ethnic lines to visit or return to their prewar homes—illustrate the difficulties that Bosnians and the international community still face in implementing key aspects of the agreement. 
DOD, USAID, and State also provided technical comments, updated information, and other suggestions that have been incorporated where appropriate. DOD and USAID comments are provided in appendixes VIII and IX, respectively. State comments and our evaluation of them are included in appendix X.

Pursuant to a congressional request, GAO updated its review of the Bosnia peace operation, focusing on the progress made since mid-1997 in achieving the operation’s objectives. GAO noted that: (1) the actions taken by the international community starting in mid-1997 accelerated the pace of progress toward reaching the Dayton Agreement’s objectives; (2) during this period, with the military situation remaining stable, some advancements were made in providing security for the people of Bosnia, creating a democratic environment, establishing multiethnic institutions at all levels of government, arresting those indicted for war crimes, returning people to their prewar homes across ethnic lines, and rebuilding the infrastructure and revitalizing the economy; (3) moreover, there has been a weakening of hard-line Bosnian Serb control over police and the media and the election of a new, moderate Prime Minister in Republika Srpska; (4) however, the goal of a self-sustaining peace process in Bosnia remains elusive, primarily due to the continued intransigence of Bosnia’s political leaders; (5) almost all of the results were achieved only with intense international involvement and pressure, both political and military; (6) for example, the High Representative imposed numerous temporary solutions when Bosnia’s political leaders could not reach agreement; (7) further, a substantial NATO-led force is still needed to provide security for the civil aspects of the operation; (8) conditions will have to improve significantly before international military forces could substantially draw down, and even with the accelerated pace of implementing the agreement, it will likely be some time before these conditions are realized; and (9) Bosnia for all intents and purposes lacks functioning, multiethnic governments at all levels; the majority of those indicted for war crimes remain at large; about 1.4 million people have not yet been resettled as Bosnia’s political leaders continue to prevent people from returning to their homes across ethnic lines; and few economic links have been reestablished among Bosnia’s ethnic groups or between its two entities.
The National Industrial Security Program (NISP) was established by executive order in 1993 to replace industrial security programs operated by various federal agencies. The goal of the national program is to ensure that contractors’ security programs detect and deter espionage and counter the threat posed by adversaries seeking classified information. Contractor facilities must be cleared prior to accessing or storing classified information and must implement certain safeguards to maintain their clearance. The National Industrial Security Program Operating Manual (NISPOM) prescribes the requirements, restrictions, and safeguards that contractors are to follow to prevent the unauthorized disclosure—or compromise—of classified information. The Defense Security Service (DSS) is responsible for providing oversight, advice, and assistance to U.S. contractor facilities that are cleared for access to classified information. Contractor facilities can range in size, be located anywhere in the United States, and include manufacturing plants, laboratories, and universities. Industrial security representatives work out of DSS field offices across the United States and serve as the primary points of contact for these facilities. Representatives’ oversight involves educating facility personnel on security requirements, accrediting information systems that process classified information, approving classified storage containers, and assisting contractors with security violation investigations. DSS representatives also conduct periodic security reviews to assess whether contractor facilities are adhering to NISPOM requirements and to identify actual and potential security vulnerabilities. Contractors are required to self-report foreign business transactions on a Certificate Pertaining to Foreign Interests form. Examples of such transactions include foreign ownership of a contractor’s stock, a contractor’s agreements or contracts with foreign persons, and whether non-U.S. citizens sit on a contractor’s board of directors.
Contractors are required to report changes in foreign business transactions and to update this certificate every 5 years. Because a U.S. company can own a number of contractor facilities, the corporate headquarters or another legal entity within that company is required to complete the certificate. When contractors declare foreign transactions on their certificates and notify DSS, industrial security representatives are responsible for ensuring that contractors properly identify all relevant foreign business transactions. They are also required to collect, analyze, and verify pertinent information about these transactions. For example, by examining various corporate documents, the industrial security representatives are to determine corporate structures and ownership and identify key management officials. The representatives may consult with DSS counterintelligence officials, who can provide information about threats to U.S. classified information. If contractors’ answers on the certificates indicate that foreign transactions meet certain DSS criteria or exceed thresholds, such as the percentage of company stock owned by foreign persons, the representatives forward these cases to DSS headquarters. DSS headquarters works with contractors to determine what, if any, protective measures are needed to reduce the risk of foreign interests gaining unauthorized access to U.S. classified information. Field staff are then responsible for monitoring contractor compliance with these measures. In overseeing contractor facilities and contractors under foreign ownership, control, or influence (FOCI), DSS did not systematically collect and analyze information to assess the effectiveness of its operations. Without this analysis, DSS was limited in its ability to detect trends in the protection of classified information across facilities, to determine sources of security vulnerabilities, and to identify those facilities with the greatest risk of compromise.
In addition, DSS was unable to determine whether contractors were reporting foreign business transactions as they occurred or how much time a contractor facility with unmitigated FOCI had access to classified information. In overseeing contractor facilities, we found that DSS evaluated its performance in terms of process factors, such as the percentage of security reviews completed, the percentage of security reviews that covered all pertinent areas of contractors’ security programs, the length of time needed to clear contractor facilities for access to classified information, and the length of time needed to clear contractor personnel for access to classified information. While such indicators are important, they alone cannot show where the greatest risks lie, what types of violations are occurring, or who is committing them. Performance indicators such as the ratings and number of findings that resulted from security reviews would have provided an indication as to whether DSS was achieving its mission. However, there were no such indicators to determine overall facility ratings, the sources of the violations, and their frequency. Without such information, DSS cannot ensure facilities are protecting the classified information entrusted to them. Similarly, DSS did not know how many contractors under FOCI were operating under all types of protective measures and, therefore, was unaware of the magnitude of potential FOCI-related security risks. Although DSS tracked information on contractors operating under some types of protective measures, it did not centrally compile data on contractors operating under all types of protective measures. Specifically, DSS headquarters maintained a central repository of data on contractors under voting trust agreements, proxy agreements, and special security agreements—protective measures intended to mitigate majority foreign ownership. 
However, information on contractors under three other protective measures—security control agreements, limited facility clearances, and board resolutions—was maintained in paper files in the field offices. DSS did not aggregate data on contractors for all six types of protective measures and did not track and analyze overall numbers. Such analysis would allow DSS to target areas for improved oversight. The NISPOM requires contractors with security clearances to report any material changes to business transactions previously reported to DSS. DSS is dependent on contractors to self-report transactions by filling out the Certificate Pertaining to Foreign Interests form. However, this form did not ask contractors to provide specific dates for when foreign transactions took place. Consequently, DSS did not know if contractors were reporting foreign business transactions as they occurred and lacked knowledge about how much time a contractor facility with unmitigated FOCI had access to classified information. In addition, DSS did not compile or analyze how much time passed before it became aware of foreign business transactions. DSS field staff told us that some contractors reported foreign business transactions as they occurred, while others reported transactions months later, if at all. During our review, we found a few instances in which contractors were not reporting foreign business transactions when they occurred. One contractor did not report FOCI until 21 months after awarding a subcontract to a foreign entity. Another contractor hired a foreign national as its corporate president but did not report the change to DSS, and DSS did not know about the change until 9 months later, when the industrial security representative came across the information on the contractor’s Web site. In another example, DSS was not aware that a foreign national sat on a contractor’s board of directors for 15 months until we discovered it while conducting our audit work. 
DSS also did not determine the time that elapsed between contractors’ reporting of foreign business transactions and the implementation of protective measures or the suspension of facility clearances. Without protective measures in place, unmitigated FOCI at a cleared contractor increases the risk that foreign interests can gain unauthorized access to U.S. classified information. We found two cases in which contractors appeared to have operated with unmitigated FOCI before protective measures were implemented. For example, officials at one contractor stated they reported to DSS that their company had been acquired by a foreign entity. However, the contractor continued operating with unmitigated FOCI for at least 6 months. According to the NISPOM, DSS shall suspend the facility clearance of a contractor with unmitigated FOCI, and DSS relies on field office staff to make this determination. Contractor officials in both cases told us that their facility clearances were not suspended. Because information on suspended contractors with unmitigated FOCI is maintained in the field, DSS headquarters did not determine at an aggregate level the extent to which and under what conditions it suspends contractors’ facility clearances due to unmitigated FOCI. Industrial security representatives often failed to determine whether security violations by facilities resulted in the loss, compromise, or suspected compromise of classified information or made determinations that were not in accordance with approved criteria. Determinations of loss, compromise, or suspected compromise are important because the affected government customer must be notified so it can evaluate the extent of damage to national security and take steps to mitigate that damage. Even when representatives made an appropriate determination, they often took several weeks and even months to notify the government customer because of difficulties in identifying the customer. 
As a result, the customer’s opportunity to evaluate the extent of damage and take necessary corrective action was delayed. The NISPOM requires a facility to investigate all security violations. If classified information is suspected of being compromised or lost, the facility must provide its DSS industrial security representative with information on the circumstances of the incident and the corrective actions that have been taken to prevent future occurrences. The industrial security representative is to then review this information and, using the criteria specified in DSS’s Industrial Security Operating Manual, make one of four final determinations: no compromise, suspected compromise, compromise, or loss. If a determination other than no compromise is made, the Industrial Security Operating Manual directs the representative to inform the government customer about the violation so a damage assessment can be conducted. However, for 39 of the 93 security violations that we reviewed, industrial security representatives made no determination regarding the compromise or loss of classified information. For example, in two cases involving one facility, the representative made no determination of compromise even though the facility reported the improper transmission of classified information via e-mail. In another eight cases at another facility, the representative made no determination despite employees’ repeated failure to secure a safe room to ensure the protection of classified information. In the absence of a determination, the government customers were not notified of these violations and therefore were unable to take steps to assess and mitigate any damage that may have occurred. For the remaining 54 violations that we reviewed, representatives made determinations regarding the compromise or loss of information, but many were not consistent with the criteria contained in DSS’s Industrial Security Operating Manual. 
Representatives made 30 inappropriate determinations, such as “compromise cannot be precluded” or “compromise cannot be determined.” For example, in nine cases, the same facility reported that classified material was left unsecured and did not rule out compromise; in each of these cases, the industrial security representative likewise did not rule out compromise but used an alternative determination rather than one of the four established ones. Senior DSS officials informed us that industrial security representatives should not make determinations other than the four established in the Industrial Security Operating Manual because the four have specific meanings based on accepted criteria. By not following the manual, representatives introduced variability into their determinations and, therefore, into their decisions of whether to notify the government customer of a violation. The failure of representatives to consistently make determinations in accordance with the Industrial Security Operating Manual was at least partially attributable to inadequate oversight. The Standards and Quality Branch is the unit within DSS responsible for ensuring that industrial security representatives properly administer the NISP. Branch officials regularly test and review field office chiefs and representatives on NISP requirements, particularly those related to granting clearances and conducting security reviews. However, the Standards and Quality Branch did not test or review how representatives responded to reported violations and made determinations regarding compromise. As a result, DSS did not know the extent to which representatives understood and were consistently applying Industrial Security Operating Manual requirements related to violations and, therefore, could not take appropriate action. 
While the Industrial Security Operating Manual did not specify a time requirement for notifying government customers when classified information had been lost or compromised, DSS was often unable to notify customers quickly because of difficulties in identifying the affected customers. DSS notified government customers regarding 16 of the 54 reported violations for which representatives made determinations. For 11 of these 16 violations, DSS did not notify the customer for more than 30 days after the contractor reported that information was lost, compromised, or suspected of being compromised. In one case, 5 months passed before an industrial security representative was able to notify a government customer that its information was suspected of being compromised. This delay was a result of the facility’s inability to readily determine which government customer was affected by the compromise. DSS relied on the facility to provide this information. However, facilities that were operating as subcontractors often did not have that information readily available. DSS industrial security representatives faced several challenges in carrying out their FOCI responsibilities, largely due to complexities in verifying FOCI cases, limited tools to research FOCI transactions, insufficient FOCI training, staff turnover, and inconsistencies in implementing guidance on FOCI cases. For industrial security representatives, verifying if a contractor is under FOCI is complex. Representatives are required to understand the corporate structure of the legal entity completing the Certificate Pertaining to Foreign Interests form and to evaluate the types of foreign control or influence that exist for each entity within a corporate family. For example, representatives are required to verify information on stock ownership by determining the distribution of the stock among the stockholders and the influence or control the stockholders may have within the corporation. 
This entails identifying the type of stock and the number of shares owned by the foreign person(s) to determine authority and management prerogatives. Some industrial security representatives told us they did not always have the tools needed to verify whether contractors were under FOCI. To evaluate FOCI relationships, they conducted independent research using the Internet or returned to the contractor for more information and held discussions with management officials, such as the chief financial officer, treasurer, and legal counsel. DSS headquarters officials told us that additional information sources, such as the Dun and Bradstreet database of millions of private and public companies, were not available in the field. In addition, industrial security representatives stated they lacked the training and knowledge needed to better verify and oversee contractors under FOCI. For example, DSS did not require its representatives to have financial or legal training. While some FOCI training was provided, representatives largely depended on DSS guidance and on-the-job training to oversee a FOCI contractor. In so doing, representatives worked with more experienced staff or sought guidance, when needed, from DSS headquarters. Despite DSS efforts to provide training on FOCI, we found that training needs on complex FOCI issues were still a concern to representatives. In fact, many said they needed more training to help with their responsibility of verifying FOCI information, including how to review corporate documents, strategic company relationships, and financial reports. In addition, officials from one-third of the field offices we reviewed noted staff retention problems. DSS officials at two of these field offices said that they have particular difficulty retaining more experienced industrial security representatives. 
Compounding these challenges are inconsistencies among field offices in how industrial security representatives said they understood and implemented DSS guidance for reviewing contractors under FOCI. For example, per DSS guidance, security reviews and FOCI meetings should be performed every 12 months for contractors operating under special security agreements, security control agreements, voting trust agreements, and proxy agreements. However, we found that some industrial security representatives did not follow the guidance. One representative said a contractor under a special security agreement was subject to a security review every 18 months because the contractor did not store classified information on-site. In addition, two industrial security representatives told us they did not conduct annual FOCI meetings for contractors that were operating under a proxy agreement and security control agreement, respectively. We also found that industrial security representatives varied in their understanding or application of DSS guidance for when they should suspend a contractor’s facility clearance when FOCI was unmitigated. The guidance indicates that when a contractor with a facility clearance is determined to be under FOCI that requires mitigation by DSS headquarters, the facility security clearance shall be suspended until a protective measure is implemented. However, officials in some field offices told us that they rarely suspend clearances when a contractor has unmitigated FOCI as long as the contractor is making a good faith effort to provide DSS with documentation identifying the extent of the FOCI and to submit a FOCI mitigation plan. Officials in other field offices said they would suspend a contractor’s facility clearance once they learned the contractor had unmitigated FOCI. In conclusion, we believe that the weaknesses identified in the NISP and other programs designed to protect technologies critical to U.S. 
national security present significant challenges and need to be addressed. Although in its initial response to our reports DOD did not agree with many of our recommendations or the need for corrective actions, we understand that DSS has subsequently begun to address some of the issues we raised. While we have not reviewed any of these actions and therefore cannot address their potential effectiveness, we welcome DSS’s recognition that action is needed. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other members of the committee may have. For information about this testimony, please contact Ann Calvaresi Barr, Director, Acquisition and Sourcing Management, at (202) 512-4841 or [email protected]. Other individuals making key contributions to this product include Thomas J. Denomme, Brandon Booth, John Krump, Karen Sloan, Lillian Slodkowski, and Suzanne Sterling. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The National Industrial Security Program (NISP) aims to ensure contractors appropriately safeguard the government’s classified information. NISP, along with other laws, regulations, policies, and processes, is intended to protect technologies critical to maintaining military technological superiority and other U.S. national security interests. The Defense Security Service (DSS) within the Department of Defense (DOD) administers NISP on behalf of DOD and other federal agencies. DSS grants clearances to contractor facilities so they can access and, in some cases, store classified information. 
In 2005, DSS monitored over 11,000 facilities’ security programs to ensure that they met NISP requirements for protecting classified information. In 2004 and 2005, GAO issued reports that examined DSS responsibilities related to facilities accessing or storing classified information. The first report assessed DSS oversight of facilities and DSS actions after possible compromises of classified information. The second focused specifically on DSS oversight of contractors under foreign ownership, control, or influence (FOCI). This testimony summarizes the findings of these reports and their relevance to the effective protection of technologies critical to U.S. national security interests—an area GAO designated as a governmentwide high-risk area in 2007. DSS did not systematically collect and analyze the information needed to assess its oversight of both contractor facilities and contractors under FOCI. While DSS maintained files on contractor facilities’ security programs and their security violations, it did not use this information to determine, for example, whether certain types of violations were increasing or decreasing and why. As a result, DSS was unable to identify patterns of security violations across all facilities based on factors such as the type of work conducted, the facilities’ government customer, or the facilities’ corporate affiliation. Identifying such patterns would enable DSS to target needed actions to reduce the risk of classified information being compromised. With regard to contractors under FOCI, DSS did not collect and track the extent to which classified information was left in the hands of such contractors before measures were taken to reduce the risk of unauthorized foreign access. GAO found instances in which contractors did not report foreign business transactions to DSS for several months. DSS’s process for notifying government agencies of possible compromises to their classified information has also been insufficient. 
When a contractor facility reports a violation and the possible compromise of classified information, DSS is required to determine whether a compromise occurred and to notify the affected government agency so it can assess any damage and take actions to mitigate the effects of the suspected compromise or loss. However, for nearly 75 percent of the 93 violations GAO reviewed, DSS either made no determination regarding compromise or made a determination that was inconsistent with established criteria. In addition, in many cases in which DSS was required to notify the affected agencies of possible information compromises, the notification took more than 30 days; in one case, notification was delayed 5 months. Despite the complexities involved in overseeing contractor facilities and contractors under FOCI, DSS field staff lacked the guidance, tools, and training necessary to effectively carry out their responsibilities. According to DSS field staff, they lacked research tools and training to fully understand, for example, the significance of corporate structures, legal ownership, and complex financial relationships when foreign entities are involved—knowledge that is needed to effectively oversee contractors under FOCI. Staff turnover and failure to implement guidance consistently also detracted from field staff’s ability to effectively carry out responsibilities. GAO has made numerous recommendations aimed at improving NISP and DSS’s oversight of classified information that has been entrusted to contractors. Continued weaknesses in this and other areas that require rigorous oversight—such as export control, foreign acquisitions of U.S. companies, and foreign military sales—prompted GAO to designate the protection of critical technologies as high risk.
The military services and defense agencies face three long-standing challenges with processing, exploiting, and disseminating ISR data. First, since 2002, DOD has rapidly increased its ability to collect ISR data in Iraq and Afghanistan; however, its capacity for processing, exploitation, and dissemination is limited and has not kept pace with the increase in collection platforms and combat air patrols. For example, the Air Force has substantially increased the number of combat air patrols that ISR collection platforms are performing in the U.S. Central Command theater of operations. Specifically, the number of combat air patrols flown by the Air Force’s Predator and Reaper unmanned aircraft systems has increased from 13 to 36 since 2007. Moreover, in the 2010 Quadrennial Defense Review Report, DOD stated that it will continue to expand the Predator and Reaper combat air patrols to 65 by fiscal year 2015. This increase in data collection will also increase the burden on the Air Force’s ground processing system, which processes, exploits, and disseminates the ISR information collected by these platforms. Second, transmitting data from ISR collection platforms to ground stations where analysts process, exploit, and then disseminate intelligence to users requires high-capacity communications bandwidth. However, bandwidth can be limited in a theater of operations by the satellite and ground-based communication capacity. An insufficient amount of bandwidth affects the ability to send, receive, and download intelligence products that contain large amounts of data. For example, intelligence products derived from ISR geospatial data have high bandwidth requirements—the higher the resolution of the product, the longer the transmission time via a given bandwidth. DOD officials have acknowledged that limited bandwidth is a continual challenge in Iraq because of the warfighter’s reliance on geospatial data. 
GAO and others have reported that DOD continues to face a growing need for communications bandwidth in combat operations. Third, the military services and defense agencies are challenged by shortages in the numbers of analytical staff available to exploit all of the electronic signals and geospatial ISR information being collected, raising the risk that important information may not be analyzed and made available to commanders in a timely manner. For example, according to U.S. Central Command officials, the command exploits less than one-half of the electronic signals intercepts collected from the Predator. According to DOD officials, finding native speakers who can successfully translate and exploit data collected in foreign languages is difficult, and training language analysts takes time and is difficult to manage with the deployment schedule. In addition, language analysts who translate and exploit electronic signals intelligence data must qualify for security clearances that require rigorous background examinations. The National Security Agency has experienced difficulties in hiring language analysts who can obtain clearances and have the appropriate skill levels in both English and the language for translation. DOD has recognized the need to enhance its processing, exploitation, and dissemination capabilities and is developing and implementing initiatives to do so, but its initiatives are in the early stages of implementation and it is too soon to tell how effective they will be in addressing current challenges. For example, in the short term, DOD has placed its priority for processing, exploiting, and disseminating electronic signals intelligence on the information collected in Afghanistan because the Commander of U.S. Central Command has designated those missions as a high priority. In the long term, DOD has taken several actions intended to sustain, expand, and improve processing, exploitation, and dissemination capabilities. 
For example, DOD has studies under way, such as an ISR force-sizing study, that include examining how to improve the management of its processing, exploitation, and dissemination capabilities. However, DOD has not set dates for when all of these studies will be complete, and it is too soon to know whether they will lead to the desired effect of increased support to the warfighter for current operations. The Air Force and the National Security Agency also have plans to increase analyst personnel in response to the increase in ISR collection. The Air Force, reacting to scheduled increases in Predator and Reaper combat air patrols, is planning to add personnel who process, exploit, and disseminate ISR data. The National Security Agency also has taken steps to address shortages in language analyst personnel. For example, to better target its hiring effort for language analysts, the agency is using U.S. Census Bureau data to locate population centers that contain the language skills needed to translate and exploit the foreign languages that are collected. According to National Security Agency officials, these efforts have helped increase the number of language analysts available to process and exploit collected signals intelligence data. DOD is also working on developing technical solutions to improve the processing, movement, and storage of data. For example, files from wide-area sensors have to be saved to a computer disk and flown back to the United States for exploitation and dissemination because current networks in the theater of operations cannot handle the large amounts of data these sensors collect. U.S. Joint Forces Command is currently designing and testing technology already in use by the commercial entertainment industry to improve storage, movement, and access to full motion video data from wide-area sensors. 
Although DOD has recognized the need for maximizing the efficiency and effectiveness of the information it collects and has been taking steps to increase information sharing across the defense intelligence community, progress has been uneven among the military services. DOD began planning its Distributed Common Ground/Surface System (DCGS), an interoperable family of systems that will enable users to access shared ISR information, in 1998. DOD subsequently directed the military services to transition their service-unique intelligence data processing systems into DCGS, and each of the military services is at a different stage. As shown in table 1, the Air Force and the Navy each plan to have a fully functional version of DCGS by the end of fiscal years 2010 and 2013, respectively, and the Army does not expect to have a fully functional system until 2016. The Marine Corps has not yet established a completion date for the full operational capability of its DCGS. DOD has developed a system of standards and protocols, called the DCGS Integration Backbone (DIB), which serves as the foundation for interoperability between each of the four military services’ DCGS programs. However, the services have not completed the process of prioritizing and tagging the data they want to share in accordance with these standards and protocols or developed timelines to do so. As a result, the services are not sharing all of their collected ISR data. Although the Air Force has the capability to share some Air Force-generated ISR information with other DOD users through the DIB standards and protocols, it has not developed timelines or taken steps to prioritize the types of additional data that should be shared with the defense intelligence community. The Army also has the capability to share some of its intelligence data with other users, but has experienced difficulties tagging all of its data because of its large inventory of legacy ISR systems. 
Moreover, the Army has not established timelines for sharing data. The Navy and Marine Corps are not currently tagging all of the ISR data they intend to share and have neither developed timelines nor taken steps to prioritize the types of data that should be shared with the defense intelligence community. The Under Secretary of Defense for Intelligence has responsibility for ensuring implementation of DOD intelligence policy, including monitoring the services’ progress toward interoperability. Although the services are responsible for managing their DCGS programs and conforming to information-sharing standards, according to Office of the Under Secretary of Defense for Intelligence and military service officials, DOD has not developed overarching guidance, such as a concept of operations that provides needed direction and priorities for sharing intelligence information within the defense intelligence community. Without this overarching guidance, the services lack direction to set their own goals and objectives for prioritizing and sharing ISR information and therefore have not developed service-specific implementation plans that describe the prioritization and types of ISR data they intend to share with the defense intelligence community. For example, a concept of operations could provide direction to the military services and defense agencies to select data to prioritize for meta-data tagging and sharing, such as electronic signals intelligence data. As a result, it is not clear how much of the collected data are not being shared. Until DOD identifies what types of ISR information should be shared and assigns priorities for sharing data, it is unclear whether mission-critical information will be available to the warfighter. In addition, the inability of users to fully access existing information in a timely manner is a contributing factor to the increasing demand for additional ISR collection assets. 
Therefore, in our January 2010 report, we recommended that the Secretary of Defense take the following two actions:

- Direct the Under Secretary of Defense for Intelligence, in coordination with the Chairman of the Joint Chiefs of Staff and the Secretaries of the Army, Navy, and Air Force, to develop guidance, such as a concept of operations, that provides overarching direction and priorities for sharing intelligence information across the defense intelligence community.

- Direct the Secretaries of the Army, Navy, and Air Force to develop service-specific implementation plans, consistent with the concept of operations, that set timelines and outline the prioritization and types of ISR data they will share with the defense intelligence community through the DIB.

In written comments on our report, DOD agreed with our recommendations overall and stated that there is guidance either issued or in development to address our recommendations. However, this guidance does not fully address the intent of our recommendations, and we believe additional guidance is necessary. DOD officials cite ISR as vital to mission success in Iraq and Afghanistan, and Congress has responded by funding additional ISR assets. However, until all participants in the defense enterprise successfully share ISR information, inefficiencies will hamper the effectiveness of efforts to support the warfighter, and ISR data collection efforts may be unnecessarily duplicative. While the focus of my testimony has been on processing, exploiting, and disseminating ISR data, our prior work has also shown that collection taskings are fragmented in theater and visibility into how ISR assets are being used is lacking. These challenges increase the risk that operational commanders may not be receiving mission-critical ISR information, which can create the perception that additional collection assets are needed to fill gaps. Mr. Chairmen and members of the subcommittees, this concludes my prepared statement. 
I would be happy to answer any questions that you may have at this time. For further information regarding this testimony, please contact Davi M. D’Agostino at (202) 512-5431 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Margaret G. Morgan and Marc J. Schwartz, Assistant Directors; Grace A. Coleman; Gregory A. Marchand; Erika A. Prochaska; Kimberly C. Seay; and Walter K. Vance. In addition, Amy E. Brown; Amy D. Higgins; Timothy M. Persons; and Robert Robinson made significant contributions to the January 2010 report that supported this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Defense (DOD) has numerous intelligence, surveillance, and reconnaissance (ISR) systems--including manned and unmanned airborne, space-borne, maritime, and terrestrial systems--that play critical roles in support of current military operations. The demand for these capabilities has increased dramatically. Today's testimony addresses (1) the challenges the military services and defense agencies face processing, exploiting, and disseminating the information collected by ISR systems and (2) the extent to which the military services and defense agencies have developed the capabilities required to share ISR information. This testimony is based on GAO's January 2010 report on DOD's ISR data processing capabilities.
GAO reviewed and analyzed documentation, guidance, and strategies of the military services and defense agencies with regard to processing, exploiting, and disseminating ISR data as well as information-sharing capabilities. GAO also visited numerous commands, military units, and locations in Iraq and the United States. The military services and defense agencies face long-standing challenges with processing, exploiting, and disseminating ISR data, and DOD has recently begun some initiatives to address these challenges. First, since 2002, DOD has rapidly increased its ability to collect ISR data in Iraq and Afghanistan, although its capacity for processing, exploitation, and dissemination is limited. Second, transmitting data from ISR collection platforms to ground stations where analysts process, exploit, and then disseminate intelligence to users requires high-capacity communications bandwidth. However, bandwidth can be limited in a theater of operations by the satellite and ground-based communication capacity, and this in turn affects the ability to send, receive, and download intelligence products that contain large amounts of data. Third, shortages of analytical staff with the required skill sets hamper the services' and defense agencies' abilities to exploit all ISR information being collected, thus raising the risk that important information may not be available to commanders in a timely manner. DOD is developing and implementing initiatives to enhance its processing, exploitation, and dissemination capabilities, such as increasing personnel, but its initiatives are in the early stages of implementation and it is too soon to tell how effective they will be in addressing current challenges. DOD is taking steps to improve the sharing of intelligence information across the department, but progress is uneven among the military services.
In 1998, DOD began planning its Distributed Common Ground/Surface System (DCGS), an interoperable family of systems that will enable users to access shared ISR information. DOD subsequently directed the military services to transition their service-unique intelligence data processing systems into DCGS, and each of the military services is at a different stage. While the Air Force and the Navy each plan to have a fully functional version of DCGS by the end of fiscal years 2010 and 2013, respectively, the Army does not expect to have a fully functional system until 2016. The Marine Corps has not yet established a completion date for the full operational capability of its DCGS. To facilitate the sharing of ISR data on this system, DOD developed the DCGS Integration Backbone, which provides common information standards and protocols. Although the services are responsible for managing their DCGS programs and conforming to information-sharing standards, according to the Office of the Under Secretary of Defense for Intelligence and military service officials, DOD has not developed overarching guidance, such as a concept of operations that provides direction and priorities for sharing intelligence information within the defense intelligence community. Without this overarching guidance, the services lack direction to set their own goals and objectives for prioritizing and sharing ISR information and therefore have not developed service-specific implementation plans that describe the prioritization and types of ISR data they intend to share. Moreover, the inability of users to fully access existing information contributes to the increasing demand for additional ISR collection assets.
DOD’s primary military medical mission is to maintain the health of 1.6 million active duty service personnel and be prepared to deliver health care during wartime. Also, as an employer, DOD offers health care services to 6.6 million non-active duty beneficiaries, including active duty members’ dependents and military retirees and their dependents. Most care is provided in 115 hospitals and 471 clinics—called military treatment facilities (MTF)—operated by the Army, Navy, and Air Force worldwide. This direct delivery system is supplemented by DOD-funded care provided in civilian facilities. In fiscal year 1997, DOD spent about $12 billion for direct care and about $3.5 billion for civilian care. In response to such challenges as increasing health care costs and uneven beneficiary access to care, in the late 1980s DOD initiated a series of congressionally directed demonstration programs to evaluate alternatives to its existing health care delivery approaches. Drawing from its experience with the demonstration projects, DOD then designed TRICARE as its managed care health program. The Office of the Assistant Secretary of Defense for Health Affairs sets TRICARE policy and has overall responsibility for the program. The Army, Navy, and Air Force Surgeons General have authority over the MTFs in their respective services. TRICARE is designed to give beneficiaries a choice of three benefit options. These are TRICARE Prime, the health maintenance organization (HMO) option; TRICARE Standard, a fee-for-service benefit replacing the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) program; and TRICARE Extra, a preferred provider option. Active duty members and their families who enroll in Prime do not pay an enrollment fee; retirees under age 65 and their dependents and survivors pay an annual enrollment fee of $230 for an individual and $460 for a family. Copayments under Prime are lower than under the other options.
TRICARE Standard provides beneficiaries with the greatest freedom in selecting civilian physicians but requires the highest beneficiary cost share. Under TRICARE Extra, beneficiaries do not enroll or pay annual premiums but, by using physicians in the TRICARE network, are charged copayments that are 5 percent less than under TRICARE Standard. In restructuring its health care program, DOD designed a program that has proven difficult to implement. More than 4 years after initiating TRICARE, DOD is now 1 year behind its schedule for fully implementing the nationwide program, and that schedule may slip further. As DOD implements TRICARE, it is also continuing to make significant changes to the program’s design. While these changes are aimed at improving TRICARE and addressing problems we and others have identified, they also create new implementation challenges. Moreover, DOD’s progress in implementing TRICARE has been hampered by enrollment shortfalls and administrative problems. As part of its implementation of TRICARE, DOD has awarded large, complex, competitively bid contracts to supplement and support the health care provided in MTFs. These 5-year contracts are estimated to cost a total of about $15 billion. DOD had planned to award a total of seven contracts for the 11 TRICARE regions nationwide by September 30, 1996, and health care delivery under TRICARE was expected to have begun in all regions by May 1997. (The appendix contains a map of the 11 TRICARE regions.) Recent sustained protests, if the requests for reconsideration are denied, could further delay implementation of TRICARE in three regions. In 1995, we reported that such problems as DOD’s failure to evaluate offerors’ bids according to solicitation criteria led to the sustained protest of a pre-TRICARE contract award covering California and Hawaii. In response, DOD put in place such improvements as a revised methodology for evaluating bids, which it believed would reduce the chance of protests being sustained.
The recent sustained protests indicate, however, that problems with bid evaluations continue. In response to suggestions from industry experts and us, DOD is developing a simpler procurement approach, which it will begin to use this summer as the first of the existing TRICARE contracts is recompeted. This new approach is designed to incorporate performance-based requirements and best commercial practices. DOD expected that, to take full advantage of cost-effective managed care principles and practices, significant numbers of beneficiaries would enroll in TRICARE Prime—especially those who rely on the military system for their health care. However, as of October 1997, only about half of the eligible beneficiaries using the military health care system had enrolled in TRICARE Prime. DOD set targets to help ensure high enrollment in Prime. It expected, for example, that 100 percent of active duty members would enroll in Prime by the end of 1996. However, as of October 1997, only about 70 percent of active duty members had enrolled. Moreover, DOD expected that at least 90 percent of non-active duty beneficiaries targeted for enrollment would enroll in Prime within 1 year of TRICARE’s implementation in each region. However, as of October 1997, in those regions where TRICARE had been implemented for at least a year, only about 57 percent of those targeted, or about 1.1 million beneficiaries, had enrolled. This less-than-optimal enrollment has several important implications. For example, DOD is less able to manage the utilization of health care for beneficiaries not enrolled in Prime. Under managed care, costs are contained in part through the use of primary care managers who ensure that beneficiaries receive necessary and appropriate care in the most cost-effective manner. Moreover, beneficiaries may incur higher out-of-pocket health care costs if they choose not to enroll.
Also, DOD is beginning to implement a new funding system—enrollment-based capitation—that is designed to motivate and reward MTF commanders for maximizing their enrolled population. Under this approach, DOD will fund MTFs on the basis of the number of beneficiaries enrolled in Prime at the MTF. Previously, DOD had set per capita rates according to past levels of military spending. This new capitation method is designed to better mirror private sector managed care funding methods. Under enrollment-based capitation, MTFs will continue to receive funding for the care they provide to nonenrollees, but at a lower rate than for those enrolled. We have identified a number of reasons why beneficiaries may not be enrolling in Prime. Beneficiaries who are accustomed to receiving care in MTFs may not see the need to enroll. Retirees under 65 years of age and their dependents, who must pay an annual enrollment fee, may opt not to enroll for that reason. In addition, Prime is not available in all areas of the country—for example, in areas where there is no MTF and no civilian provider network. Also, some beneficiaries may choose to continue receiving care under TRICARE’s traditional fee-for-service option. DOD asserts that it can provide care more cost-effectively in its MTFs than through civilian providers, and for that reason, TRICARE was designed to maximize the use of the MTFs before relying on civilian care. However, although enrollment capacity still exists in MTFs, beneficiaries are being allowed to enroll in civilian facilities that are near MTFs. As of late last year, about 74 percent of MTFs’ primary care capacity had been assigned to Prime enrollees. Thus, it appears that DOD could more fully and cost-effectively use its facilities before enrolling beneficiaries in civilian-provided care. Physicians’ concerns could also affect beneficiaries’ access to quality care.
Physicians raised concerns about untimely claims reimbursement, a slow preauthorization process to approve medical treatment, and unreliable customer telephone service, among other things. Some physicians also complained about the lower, “discounted” rates paid to TRICARE network physicians under its Prime and Extra options. Because of these administrative and cost issues, some physicians are becoming disillusioned with TRICARE. As we have noted, DOD’s goals in establishing TRICARE were to improve access while maintaining quality and controlling costs. DOD efforts to set goals and to measure access and quality are incomplete, however, and do not enable DOD or others to fully assess whether TRICARE has improved beneficiaries’ access to and quality of health care. Moreover, DOD’s failure to achieve expected cost savings under TRICARE raises questions about DOD’s cost-savings claims. DOD has not set programwide goals and performance measures to track its progress in meeting TRICARE access and quality program goals for care provided in MTFs and by contractors. DOD has developed a military health system performance report card that includes goals and measures for some aspects of access and quality, such as 95-percent beneficiary satisfaction with access to appointments and system resources. However, this report card applies only to MTFs and does not include care provided through civilian contractors—an estimated one-third of DOD’s peacetime health care delivery efforts. Under its managed care support contracts, DOD does set performance-related requirements, and contractors report to DOD their performance in meeting these requirements. However, this information is not yet compiled or consolidated with military facility data to provide a programwide picture. DOD’s beneficiary surveys indicate that satisfaction levels, on average, exceed those in civilian HMOs.
However, DOD survey officials told us it is too soon to use the surveys’ results to assess TRICARE because the program is new and not yet implemented nationwide. Also, they said the results from surveys conducted to date constitute an insufficient basis from which to identify trends. Although important, beneficiaries’ perceptions do not totally measure DOD’s actual performance. To supplement beneficiary satisfaction information on access to care, we recommended in 1996 that DOD collect data on the timeliness of appointments. While DOD agreed with our recommendation, it has yet to fully implement this data collection effort. Moreover, the beneficiary satisfaction information DOD uses in its report card to measure access is based on monthly surveys of patients receiving outpatient care. Relying on the outpatient survey provides limited information on access and may mask the extent of difficulty beneficiaries face since it only collects information from those patients who were able to obtain care at a military facility. As required by the Congress, DOD has contracted for independent evaluations of TRICARE’s progress in improving access, maintaining quality, and controlling costs. These studies are currently under way but are not expected to be completed until June 1999. Given the importance of TRICARE, and concerns about access and quality raised by beneficiary groups and recent media reports, we are also planning to examine DOD health care access and quality issues. When TRICARE was designed, the Congress required that the program be cost neutral—that is, that TRICARE costs not exceed the health care costs DOD would have incurred without the program. To control TRICARE costs, DOD planned to achieve cost savings from managed care efforts and initiatives. However, there are reasons now to question how current and analytically complete DOD’s savings claims are. 
An important cost-saving feature of DOD’s partnership between military and civilian health care entities under TRICARE is resource sharing. To share resources, the contractor supplements the capacity of a military hospital or clinic by providing civilian personnel, equipment, or supplies. DOD had estimated that resource sharing could save about $700 million over 5 years. We reported last summer, however, that DOD and the contractors had made agreements likely to save about 5 percent of DOD’s overall resource sharing goal. At that rate, after 9 to 24 months of operation, DOD could have expected to realize only about $36 million. DOD plans to undertake a more current and complete cost analysis of MTF direct and contractor-provided care to determine TRICARE’s cost-effectiveness. Until this analysis is completed, questions will remain regarding the extent to which the legislative objective for TRICARE’s cost-effectiveness is being achieved. DOD’s efforts to fully implement TRICARE are occurring at a time when not only are changes being made in the organization to manage the program but other, perhaps more significant, changes are being contemplated for the military health care system itself. Planning for these changes and incorporating them into TRICARE is making an already complex task even more difficult. On February 10, 1998, as part of a DOD-wide reform initiative to consolidate headquarters functions, DOD established within the Office of the Assistant Secretary of Defense for Health Affairs what it called the TRICARE Management Activity. This activity unifies several Health Affairs operational elements with two field activities, including the TRICARE Support Office, which is responsible for TRICARE procurement activities. The activity is expected to strengthen program oversight and performance by developing and using specific performance measures for the program’s costs, quality, and health care access. We have found such measures to be needed.
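As a rough, back-of-the-envelope check on the resource-sharing figures cited above (an illustrative sketch only, not GAO's actual savings methodology), agreements covering about 5 percent of the $700 million five-year goal imply savings on the order of the roughly $36 million figure GAO reported:

```python
# Figures taken from the testimony; applying the 5 percent rate directly to
# the 5-year goal is an illustrative simplification, not GAO's methodology.
five_year_goal = 700_000_000   # DOD's estimated 5-year resource-sharing savings ($)
agreed_rate = 0.05             # share of the goal the signed agreements would likely save

implied_savings = five_year_goal * agreed_rate
print(f"Implied savings: ${implied_savings / 1e6:.0f} million")
# prints: Implied savings: $35 million
# This is in line with the roughly $36 million GAO reported after 9 to 24 months.
```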
A second significant organizational change that may affect the future of TRICARE relates to the imminent retirement of the now Acting Assistant Secretary of Defense, who has served in Health Affairs for the past 9 years and has been a key force in the design and development of TRICARE. Strong leadership will be needed in the future as implementation of TRICARE proceeds, and filling this void will be a major challenge. A third change involves a new demonstration program under which Medicare will reimburse DOD for care provided to enrollees above the amount DOD currently spends for them. Under this concept—known as Medicare subvention—DOD believes it can provide care to older retirees in MTFs at a lower cost than Medicare HMOs can. Medicare subvention will improve enrollees’ access to care in MTFs and will allow Medicare HMOs to contract with DOD to provide specialty and inpatient care. While this program adds to the health care options available to certain military beneficiaries, it also introduces additional administrative complexities to the already complex TRICARE program, such as the need for new contracts with Medicare HMOs. Many legislative proposals have been introduced in the 105th Congress that would authorize, either for all Medicare-eligible military beneficiaries or for Medicare eligibles and certain other non-active duty beneficiaries, enrollment in one of the many Federal Employees Health Benefits Program (FEHBP) plans. Enactment of an FEHBP option for these beneficiaries could dramatically alter TRICARE by reducing beneficiaries’ demand for military health care. The most significant change in the system may occur if and when DOD completes its now overdue update of what is known as its “733 study,” which was completed in April 1994. In this study, conducted pursuant to section 733 of the National Defense Authorization Act for fiscal years 1992 and 1993, DOD’s Office of Program Analysis and Evaluation (PA&E) challenged the Cold War assumption that all military medical personnel employed during peacetime are needed for wartime.
The study concluded that DOD’s wartime medical requirements are far lower—by as much as half—than the medical system then programmed for fiscal year 1999. Although no action was taken by DOD as a result of that study, the Deputy Secretary of Defense, in August 1995, directed that the study be updated and improved. We understand that PA&E has nearly completed the study and that DOD top management will likely review it before its release. If the updated review results in conclusions similar to those in the 733 study, and if DOD acts on those conclusions, the potential reductions in military medical personnel and facilities could be significant. TRICARE’s primary cost-saving advantages are rooted in the delivery of managed care at military facilities, and any significant reduction in such capacity would necessitate that beneficiaries be provided care in the contractors’ networks. This would alter the potential cost-effectiveness of the program. Mr. Chairman, this concludes my prepared statement. I will be glad to respond to any questions you or other Subcommittee members may have. We look forward to continuing to work with the Subcommittee as it exercises its oversight of this important program.

Defense Health Care: Reimbursement Rates Appropriately Set; Other Problems Concern Physicians (GAO/HEHS-98-80, Feb. 26, 1998).
Defense Health Care: DOD Could Improve Its Beneficiary Feedback Approaches (GAO/HEHS-98-51, Feb. 6, 1998).
Defense Health Care: TRICARE Resource Sharing Program Failing to Achieve Expected Savings (GAO/HEHS-97-130, Aug. 22, 1997).
Defense Health Care: Actions Under Way to Address Many TRICARE Contract Change Order Problems (GAO/HEHS-97-141, July 14, 1997).
Military Retirees’ Health Care: Costs and Other Implications of Options to Enhance Older Retirees’ Benefits (GAO/HEHS-97-134, June 20, 1997).
Defense Health Care: Dental Contractor Overcame Obstacles, but More Proactive Oversight Needed (GAO/HEHS-97-58, Feb. 28, 1997).
Defense Health Care: Limits to Older Retirees’ Access to Care and Proposals for Change (GAO/T-HEHS-97-84, Feb. 27, 1997).
Defense Health Care: New Managed Care Plan Progressing, but Cost and Performance Issues Remain (GAO/HEHS-96-128, June 14, 1996).
Defense Health Care: Medicare Costs and Other Issues May Affect Uniformed Services Treatment Facilities’ Future (GAO/HEHS-96-124, May 17, 1996).
Defense Health Care: Effects of Mandated Cost Sharing on Uniformed Services Treatment Facilities Likely to Be Minor (GAO/HEHS-96-141, May 13, 1996).
Defense Health Care: TRICARE Progressing, but Some Cost and Performance Issues Remain (GAO/T-HEHS-96-100, Mar. 7, 1996).
Defense Health Care: Despite TRICARE Procurement Improvements, Problems Remain (GAO/HEHS-95-142, Aug. 3, 1995).
Defense Health Care: Issues and Challenges Confronting Military Medicine (GAO/HEHS-95-104, Mar. 22, 1995).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537.

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed the status of the Department of Defense's (DOD) implementation of its managed health care program, TRICARE, focusing on: (1) DOD's progress in implementing TRICARE; (2) whether DOD is adequately assessing TRICARE'S effects on military health care access, quality, and cost; and (3) the implications of ongoing and proposed changes in the military health care system itself for TRICARE's future. GAO noted that: (1) TRICARE was established in an era of military downsizing and rapidly escalating DOD health costs; (2) it was envisioned as a way to maintain beneficiary access to high-quality care while containing costs; (3) designing and implementing TRICARE to achieve these objectives, however, has proven to be a complex and difficult undertaking involving many stakeholders, including Congress, the individual services and their many facilities and contractors, and the more than 8 million beneficiaries of the military health care system; (4) DOD has taken steps to improve the program as it has evolved, but much remains to be done before TRICARE becomes the smooth-running and beneficiary-friendly endeavor envisioned by its developers; (5) moreover, many questions concerning its cost-effectiveness and ability to meet beneficiary access and quality of care concerns are still to be answered; (6) in addition to operational difficulties, TRICARE is likely to continue to be implemented amid many changes that could profoundly affect not only the program but the entire military health care system; and (7) the result of the continuing evolution of TRICARE and the collective effects of these individual changes on it remain to be seen.
During the three decades in which uranium was used in the government’s nuclear weapons and energy programs, for every ounce of uranium that was extracted from ore, 99 ounces of waste were produced in the form of mill tailings—a finely ground, sand-like material. By the time the government’s need for uranium peaked in the late 1960s, tons of mill tailings had been produced at the processing sites. After fulfilling their government contracts, many companies closed down their uranium mills and left large piles of tailings at the mill sites. Because the tailings were not disposed of properly, they were spread by wind, water, and human intervention, thus contaminating properties beyond the mill sites. In some communities, the tailings were used as building materials for homes, schools, office buildings, and roads because at the time the health risks were not commonly known. The tailings and waste liquids from processing uranium ore also contaminated the groundwater. Tailings from the ore processing resulted in radioactive contamination at about 50 sites (located mostly in the southwestern United States) and at 5,276 nearby properties. The most hazardous constituent of uranium mill tailings is radium. Radium produces radon, a radioactive gas whose decay products can cause lung cancer. The amount of radon released from a pile of tailings remains constant for about 80,000 years. Tailings also emit gamma radiation, which can increase the incidence of cancer and genetic risks. Other potentially hazardous substances in the tailings include arsenic, molybdenum, and selenium. DOE’s cleanup authority was established by the Uranium Mill Tailings Radiation Control Act of 1978. Title I of the act governs the cleanup of uranium ore processing sites that were already inactive at the time the legislation was passed. These 24 sites are referred to as Title I sites. Under the act, DOE is to clean up the Title I sites, as well as the nearby properties that were contaminated. 
In doing so, DOE works closely with the affected states and Indian tribes. DOE pays for most of this cleanup, but the affected states contribute 10 percent of the costs for remedial actions. Title II of the act covers the cleanup of sites that were still active when the act was passed. These 26 sites are referred to as Title II sites. Title II sites are cleaned up mostly at the expense of the private companies that own and operate them. They are then turned over to the federal government for long-term custody. Before a Title II site is turned over to the government, the Nuclear Regulatory Commission (NRC) works with the sites’ owners/operators to make sure that sufficient funds will be available to cover the costs of long-term monitoring and maintenance. The cleanup of surface contamination consists of four key steps: (1) identifying the type and extent of the contamination; (2) obtaining a disposal site; (3) developing an action plan, which describes the cleanup method and specifies the design requirements; and (4) carrying out the cleanup using the selected method. Generally, the primary cleanup method consists of enclosing the tailings in a disposal cell—a containment area that is covered with compacted clay to prevent the release of radon and then topped with rocks or vegetation. Similarly, the cleanup of groundwater contamination consists of identifying the type and extent of the contamination, developing an action plan, and carrying out the cleanup using the selected method. According to DOE, depending on the type and extent of the contamination, and the possible health risks, the appropriate method may be (1) leaving the groundwater as it is, (2) allowing it to cleanse itself over time (called natural flushing), or (3) using an active cleanup technique such as pumping the water out of the ground and treating it.

Mr. Chairman, we now return to the topics discussed in our report: the status and cost of DOE’s surface and groundwater cleanup and the factors that could affect the federal government’s costs in the future. Since our report was issued on December 15, 1995, DOE has made additional progress in cleaning up and licensing Title I sites. As of April 1996, DOE’s surface cleanup was complete at 16 of the 24 Title I sites, under way at 6 additional sites, and on hold at the remaining 2 sites. Of the 16 sites where DOE has completed the cleanup, 4 have been licensed by NRC as meeting the standards of the Environmental Protection Agency (EPA). At 10 of the other 12 sites, DOE is working on obtaining such a license, and the remaining 2 sites do not require licensing because the tailings were relocated to other sites. Additionally, DOE has completed the surface cleanup at about 97 percent of the 5,276 nearby properties that were also contaminated. Although DOE expects to complete the surface cleanup of the Title I sites by the beginning of 1997, it does not expect all of NRC’s activities to be completed until the end of 1998. As for the cleanup of groundwater at the Title I sites, DOE began this task in 1991 and currently expects to complete it in about 2014. Since its inception in 1979, DOE’s project for cleaning up the Title I sites has grown in size and in cost. In 1982, DOE estimated that the cleanups would be completed in 7 years and that only one pile of tailings would need to be relocated. By 1992, however, the Department was estimating that the surface cleanup would be completed in 1998 and that 13 piles of tailings would need to be relocated.
The project’s expansion was caused by several factors, including the development of EPA’s new groundwater protection standards; the establishment or revision of other federal standards addressing such things as the transport of the tailings and the safety of workers; and the unexpected discovery of additional tailings, both at the processing sites and at newly identified, affected properties nearby. In addition, DOE made changes in its cleanup strategies to respond to state and local concerns. For example, at the Grand Junction, Colorado, site, the county’s concern about safety led to the construction of railroad transfer facilities and the use of both rail cars and trucks to transport contaminated materials. The cheaper method of simply trucking the materials would have routed extensive truck traffic through heavily populated areas. Along with the project’s expansion came cost increases. In the early 1980s, DOE estimated that the total cleanup cost—for both the surface and groundwater—would be about $1.7 billion. By November 1995, this estimate had grown to $2.4 billion. DOE spent $2 billion on surface cleanup activities through fiscal year 1994 and expects to spend about $300 million more through 1998. As for groundwater, DOE has not started any cleanup. By June 1995, the Department had spent about $16.7 million on site characterization and various planning activities. To make the cleanup as cost-effective as possible, DOE is proposing to leave the groundwater as it is at 13 sites, allow the groundwater to cleanse itself over time at another 9 sites, and use an active cleanup method at 2 locations, in Monument Valley and Tuba City, Arizona. The final selection of cleanup strategies depends largely on DOE’s reaching agreement with the affected states and tribes. At this point, however, DOE has yet to finalize agreements on any of the groundwater cleanup strategies it is proposing.
At the time we issued our report, the cleanups were projected to cost at least another $130 million using the proposed strategies, and perhaps as much as another $202 million. More recently, DOE has indicated that the Department could reduce these costs by shifting some of the larger costs to earlier years, reducing the amounts built into the strategies for contingencies, and using newer, performance-based contracting methods. Once all of the sites have been cleaned up, the federal government’s responsibilities, and the costs associated with them, will continue far into the future. What these future costs will amount to is currently unknown and will depend largely on how three issues are resolved. First, because the effort to clean up the groundwater is in its infancy, its final scope and cost will depend largely on the remediation methods chosen and the financial participation of the affected states. Since the time we issued our report, DOE has reported some progress in developing its groundwater cleanup plans. However, it is still too early to know whether the affected states or tribes will ultimately persuade DOE to implement more costly remedies than those the Department has proposed or whether any of the technical assumptions underlying DOE’s proposed strategies will prove to be invalid. If either of these outcomes occurs, DOE may implement more costly cleanup strategies, and thereby increase the final cost of the groundwater cleanup. In its fiscal year 1997 congressional budget request, DOE identified five sites where it believes it may have to implement more expensive alternatives than the ones it initially proposed. In addition, the final cost of the groundwater cleanup depends on the ability and willingness of the affected states to pay their share of the cleanup costs. According to DOE, several states may not have funding for the groundwater cleanup program.
DOE believes that it is prohibited from cleaning up the contamination if the states do not pay their share. Accordingly, as we noted in our report, we believe that the Congress may want to consider whether and under what circumstances DOE can complete the cleanup of the sites if the states do not provide financial support. Second, DOE may incur further costs to dispose of uranium mill tailings that are unearthed in the future in the Grand Junction, Colorado, area. DOE has already cleaned up the Grand Junction processing site and over 4,000 nearby properties, at a cost of about $700 million. Nevertheless, in the past, about a million cubic yards of tailings were used in burying utility lines and constructing roads in the area and remain today under the utility corridors and road surfaces. In future years, utility and road repairs will likely unearth these tailings, resulting in a potential public health hazard if the tailings are mishandled. In response to this problem, DOE has worked with NRC and Colorado officials to develop a plan for temporarily storing the tailings as they are unearthed and periodically transporting them to a nearby disposal cell—referred to as the Cheney cell, located near the city of Grand Junction—for permanent disposal. Under this plan, the city or county would be responsible for hauling the tailings to the disposal cell, and DOE would be responsible for the cost of placing the tailings in the cell. The plan envisions that a portion of the Cheney disposal cell would remain open, at an annual cost of roughly $200,000. When the cell is full, or after a period of 20 to 25 years, it would be closed. However, DOE does not currently have the authority to implement this plan because the law requires that all disposal cells be closed upon the completion of the surface cleanup. 
Accordingly, we suggested in our report that the Congress might want to consider whether DOE should be authorized to keep a portion of the Cheney disposal cell open to dispose of tailings that are unearthed in the future in this area. Finally, DOE’s costs for long-term care are still somewhat uncertain. DOE will ultimately be responsible for the long-term custody, that is, the surveillance and maintenance, of both Title I and Title II sites, but the Department bears the financial responsibility for these activities only at Title I sites. For Title II sites, the owners/operators are responsible for funding the long-term surveillance and maintenance. NRC’s minimum one-time charge to site owners/operators is supposed to be sufficient to cover the cost of long-term custody so that they, not the federal government, bear these costs in full. At the time we issued our December 1995 report, however, NRC had not reviewed its estimate of basic surveillance costs since 1980, and DOE was estimating that basic monitoring would cost about three times more than NRC had estimated. Since then, NRC and DOE have worked together to determine what level of basic monitoring should occur and how comprehensive the inspection reports should be. However, DOE still maintains that ongoing routine maintenance will be needed at all sites, while NRC’s charge does not provide any amount for ongoing maintenance. In light of the consequent potential shortfall in maintenance funds, our report recommended that NRC and DOE work together to update the charge for basic surveillance and determine whether routine maintenance will be required at each site. On the basis of our recommendations, NRC officials agreed to reexamine the charge and determine the need for routine maintenance at each site. They also said that they are working with DOE to clarify the Department’s role in determining the funding requirements for long-term custody. Mr. Chairman, this concludes our prepared statement.
We will be pleased to answer any questions that you or Members of the Subcommittee may have.

GAO discussed the status and cost of the Department of Energy’s (DOE) uranium mill tailings cleanup program and the factors that could affect future costs.
GAO noted that: (1) surface contamination cleanup has been completed at two-thirds of the identified sites and is underway at most of the others; (2) if DOE completes its surface cleanup program in 1998, it will have cost $2.3 billion, taken 8 years longer than expected, and be $621 million over budget; (3) DOE cleanup costs increased because there were more contaminated sites than originally anticipated, some sites were more contaminated than others, and changes were needed to respond to state and local concerns; (4) the future cost of the uranium mill tailings cleanup will largely depend on the future DOE role in the program, remediation methods used, and the willingness of states to share final cleanup costs; and (5) the Nuclear Regulatory Commission needs to ensure that enough funds are collected from the responsible parties to protect U.S. taxpayers from future cleanup costs.
Long-term care includes many types of services needed by individuals who have a physical disability, a mental disability, or both. Long-term care services can be provided in a variety of settings, including an individual’s home or an institution, such as a nursing home. To be eligible for Medicaid coverage for long-term care, individuals must be within certain eligibility categories—such as the aged or disabled—and meet functional and financial criteria. Within broad federal standards, states determine whether an individual meets the functional criteria for long-term care coverage by assessing an individual’s ability to carry out activities of daily living (ADL), such as eating and getting around the house; and instrumental activities of daily living (IADL), such as preparing meals and shopping for groceries. The financial eligibility criteria are based on individuals’ assets—income and resources together. States are responsible for determining whether applicants meet the financial and other eligibility criteria for Medicaid coverage for long-term care. To qualify for Medicaid coverage for long-term care, individuals must have assets that fall below established standards, which vary by state, but are within standards set by the federal government. The Medicaid program bases its characterization of assets—income and resources—on that used in the Supplemental Security Income program. Income is something received during a calendar month, paid either in cash or in-kind, that is used or could be used to meet food or shelter needs; resources are cash, or real or personal property that an individual owns, that can be converted to cash and used for food or shelter. (See table 1 for examples of different types of income and resources.) In establishing policy for determining financial eligibility for Medicaid coverage for long-term care, states can decide, within federal standards, which assets are countable.
For example, states may disregard certain types or amounts of income, and may elect not to count certain resources. In most states, to be financially eligible for Medicaid coverage for long-term care, individuals must have $2,000 or less in countable resources ($3,000 for a married couple). Federal law limits Medicaid payment for long-term care services for persons who divest themselves of—or “transfer”—their assets for less than FMV within a specified time period. As a result, when an individual applies for Medicaid coverage for long-term care, states conduct a review, or “look back,” to determine whether the applicant (or his or her spouse, if married) transferred assets to another person or party. If the state determines an applicant transferred an asset for less than FMV during the look-back period, the individual may be ineligible for Medicaid coverage for long-term care for a period of time, called the penalty period. The DRA extended the look-back period for transfers made on or after February 8, 2006, to 60 months; prior to that, it was generally 36 months. The DRA also specified circumstances under which the purchase of certain assets—such as an annuity, promissory note or loan, or life estate—is considered a transfer for less than FMV, and when entrance fees for CCRCs are countable for purposes of determining Medicaid eligibility. Additionally, while an individual’s primary residence is generally not a countable resource for determining Medicaid eligibility, the DRA specified when an individual with substantial equity interest in his or her home is to be excluded from eligibility for Medicaid payment for long-term care; the amount of allowable equity interest is established by each state within federal guidelines. See table 2 for a summary of these DRA provisions. Most, but not all, of these DRA provisions became applicable on the date the law was enacted, February 8, 2006. 
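The look-back and penalty-period mechanics just described can be sketched in a few lines of code. This is an illustrative sketch only: the 60-month window for post-DRA transfers matches this report, and the penalty formula (uncompensated transfer value divided by the state’s average monthly private-pay nursing facility cost) reflects the general federal rule, but the dollar amounts and the `avg_monthly_care_cost` figure are hypothetical, and actual state rules on partial months, penalty start dates, and exempt transfers vary.

```python
from datetime import date

# DRA look-back for transfers made on or after February 8, 2006;
# transfers before that date were generally subject to a 36-month window.
LOOK_BACK_MONTHS = 60

def months_between(earlier: date, later: date) -> int:
    """Whole calendar months from `earlier` to `later`."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def in_look_back(transfer_date: date, application_date: date) -> bool:
    """Did the transfer occur within the look-back window before application?"""
    return 0 <= months_between(transfer_date, application_date) <= LOOK_BACK_MONTHS

def penalty_months(uncompensated_value: float, avg_monthly_care_cost: float) -> int:
    """Months of ineligibility: uncompensated transfer value divided by the
    state's average monthly nursing facility cost (partial months ignored here)."""
    return int(uncompensated_value // avg_monthly_care_cost)

# Example: $90,000 transferred for less than FMV 24 months before applying,
# in a state whose (hypothetical) average care cost is $6,000 per month.
transfer, application = date(2010, 3, 1), date(2012, 3, 1)
assert in_look_back(transfer, application)   # within the 60-month window
print(penalty_months(90_000, 6_000))         # -> 15 months of ineligibility
```

Under this simplification, the $90,000 transfer triggers a 15-month penalty period; a real determination would also consider hardship waivers and exempt transfers, such as those to a spouse.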
To assess applicants’ financial eligibility for Medicaid coverage for long-term care, and to determine whether they transferred assets for less than FMV, states generally require applicants to submit applications and to provide documentation of certain assets reported on the applications. State Medicaid programs may also obtain information from third parties, such as financial institutions or other government agencies, including the Social Security Administration. Such information helps states verify the accuracy of applicants’ reported assets, as well as determine whether applicants have assets they failed to report or transferred for less than FMV during the look-back period. The processing of Medicaid applications—including the collection of documentation and information from applicants and third parties—is generally performed by local or county-based eligibility workers. In 2008, Congress passed legislation that required states to implement electronic asset verification systems (AVS) to verify the assets of aged, blind, or disabled applicants for Medicaid, including those seeking Medicaid coverage for long-term care, with financial institutions. An AVS would provide states with an electronic mechanism to contact multiple financial institutions, including those not reported by an applicant, to determine if an applicant has, or had, an account and the value of any existing accounts. The law provides for states’ implementation of these systems to occur on a rolling basis; the first systems were to be implemented by the end of fiscal year 2009, with all states implementing systems by the end of fiscal year 2013. (See table 3.) States reported requiring applicants to provide documentation for most of the 13 types of assets contained in our survey; however, the types and number of months of documentation that the states considered to be acceptable proof for determining an applicant’s financial eligibility for Medicaid coverage for long-term care varied.
Specifically, 44 states required documentation for at least 12 of the 13 types of assets. All states reported requiring documentation of annuities, burial contracts and prepaid funeral arrangements, financial and investment resources, life estates, and trusts, while fewer states reported requiring documentation of other types of assets. States were least likely to require documentation of vehicles (38 states) and primary residence (37 states), resources that states may choose not to count for purposes of determining financial eligibility for Medicaid coverage for long-term care. (See fig. 1.) Officials from several states reported not requiring documentation for a particular type of asset because the state was able to obtain the necessary information from a third party. For example, while 50 states reported requiring applicants to submit documentation of earned income, one state did not have such a requirement because the amount of earned income was verified through an interface with the state’s Department of Labor. It was less clear how states assessed other assets—such as vehicles or CCRC entrance fees—absent documentation. Our survey also showed that states varied in how they treated specific types of documentation; that is, whether the documentation was required, acceptable as proof by itself, acceptable as proof with other documentation, or not acceptable as proof of an applicant’s assets. For example, while states generally found a written statement of earnings from an employer to be acceptable documentation of earned income, there was more variation in how they treated other types of documentation. (See table 4.) States also varied in the number of months of documentation required from applicants, especially as it related to financial and investment resources. Although all 51 states reported requiring documentation of financial and investment resources, 27 required only current documentation, while the remaining 24 required both current and past documentation.
Of the 24 states that required documentation of both current and past financial and investment resources, most required 60 months of documentation, while the remaining states required fewer months of documentation. (See fig. 2.) There was some variation, but to a lesser extent, in the amount of documentation of earned and unearned income that states required from applicants. Of the 50 states that required applicants to provide earned income documentation, 47 required only current documentation. Of the 3 remaining states, 2 required 3 months of documentation and 1 required 2 months of documentation. Of the 49 states that required unearned income documentation, 47 only required documentation of current unearned income. The other 2 states required 3 months of documentation. In addition to the documentation required to assess whether applicants’ assets were within state financial eligibility levels, 38 states reported requiring additional documentation from at least some applicants to identify assets transferred for less than FMV. Of these 38 states, 16 indicated they required additional documentation only if an applicant reported making a transfer, 5 reported doing so only if an applicant’s information was questionable, and 8 reported both of these reasons. Of the remaining 9 states, 7 required additional documentation from all applicants, and 2 did not specify the circumstances that would result in a request for additional documentation. Appendix II provides additional information on states’ asset documentation requirements for individuals applying for Medicaid coverage for long-term care. All 51 states reported that they obtained some amount of asset information from third parties, although the extent of the screenings conducted varied by state. 
No state had implemented an electronic AVS, which would allow states to contact multiple financial institutions—including those not reported by an applicant—to determine the existence and value of any accounts belonging to an applicant. States reported challenges to implementing an AVS, including not having sufficient resources. To varying degrees, states reported obtaining information from third parties through a variety of mechanisms, including data matches, direct contact with financial institutions, and property and vehicle records searches. Some states also reported taking additional verification steps to determine if an applicant transferred assets for less than FMV during the look-back period. All 51 states reported that they conduct data matches with the Social Security Administration to verify at least some applicants’ assets. However, states’ use of data matches with other sources of asset information—primarily related to income—varied, ranging from 48 states reporting data matches with state unemployment records, to as few as 6 states reporting data matches with state tax records. On average, states conducted data matches with 6 of the 10 sources included in our survey; the number of sources states reported using ranged from 1 to 9. In addition to variations in the use of data match sources, states varied in terms of the proportion of applicants screened, and when during the eligibility process the state conducted the screen. For example, most states reported conducting a data match with the Social Security Administration generally before determining an applicant’s eligibility. In contrast, of the 30 states that reported conducting a data match with the Internal Revenue Service for at least some applicants, 21 reported doing so generally after determining eligibility. (See table 5 for summary information and app. III for more detailed information about the data matches conducted by states.)
Twenty-four states reported that they contact financial institutions to verify at least some applicants’ financial and investment resources, while the remaining 27 did not. However, these 24 states varied in terms of the range of financial institutions they contacted and the proportion of applicants about whom they inquired. (See table 6.) Some states reported contacting multiple types of financial institutions, such as institutions applicants reported having accounts with, and some that applicants did not report. However, 13 of the 24 states reported contacting only financial institutions with whom the applicant reported having an account; of these 13 states, 3 states contacted financial institutions for all applicants, while the other 10 states did so for some applicants. These 13 states, and the 27 that reported not contacting any financial institutions, are unlikely to identify accounts that an applicant failed to report. Of the remaining 11 states that reported contacting financial institutions not reported by an applicant, 3 reported contacting only local institutions, whereas the other 8 contacted a combination of local, statewide, and national institutions. Regarding the proportion of applicants for which states contact financial institutions, half of the 24 states reported they only contact financial institutions if an applicant submits insufficient information or provides questionable information. Of the 24 states that reported they contact financial institutions to request information, the type of information—account balances or itemized statements—and the number of months they request varied. (See fig. 3.) Specifically, half of these states requested itemized statements that include information on each transaction, while the other half requested monthly account balances. Most of the states that requested account balances did so for 3 or fewer months. 
Of the 12 states that requested itemized statements, 5 requested 60 months of statements, 3 requested only the statement for the current month, and the other 4 states requested between 3 and 36 months of information. Most of the states (20 of 24) reported that they contact financial institutions for asset information before determining eligibility. Appendix IV provides additional information about states’ contact with financial institutions to verify applicants’ assets. Thirty-five states reported that they conduct some type of property records search to verify at least some applicants’ real property. The extent of the searches varied in terms of the geographic area covered and the proportion of applicants for which property searches were conducted. (See table 7.) Of the 35 states that reported conducting property searches, 2 states conduct property searches only within an applicant’s county of residence. Of the 33 states that reported conducting property searches beyond an applicant’s county of residence, 8 do so only if they have reason to believe the applicant lived in another county or state. Additionally, 12 of the 35 states indicated they conduct property searches only when an applicant submits questionable or insufficient information, an applicant reports the property, or a combination of both factors. States reported being able to conduct property searches using several types of information, including an applicant’s name (33 states), property address (32 states), property zip code (12 states), or an applicant’s Social Security number (10 states). Most of the states (29 of the 35) reported that they generally conduct property searches before determining eligibility. Appendix V provides additional information about the property searches conducted by states. Thirty states reported that they conduct searches of Department of Motor Vehicles’ (DMV) records to verify at least some applicants’ vehicles. 
Specifically, 14 states reported conducting vehicle searches for all applicants and 1 state reported conducting searches for most of its applicants. The remaining 15 reported conducting such searches for less than half of their applicants; of these 15 states, 6 indicated that they only conduct searches of vehicle records if they receive information from an applicant that they deem questionable. States reported being able to search these records using several types of information, including an applicant’s name (29 states), a vehicle identification number (20 states), and an applicant’s driver’s license or license plate number (18 states each). Most of the states (25 of the 30 states) reported that they generally conduct DMV searches before determining eligibility. Appendix VI provides additional information about the vehicle searches conducted by states. Twenty-two states reported taking additional steps to obtain information from third parties, such as conducting additional property searches, to identify assets transferred for less than FMV; 7 states reported doing this for all applicants; and 15 states reported doing this for some applicants. Of the 15 states that reported taking additional verification steps for just some applicants, most of them indicated they do so only if they question the information provided by applicants or have reason to believe a transfer may have occurred, such as if an applicant reported making a transfer. Appendix VII provides information on the proportion of applicants for which each state reported taking additional verification steps to identify assets transferred for less than FMV during the look-back period. Although 25 states were supposed to have implemented their electronic AVS to obtain information from financial institutions by the end of fiscal year 2011, no state had implemented one at the time of our survey. 
Eighteen states reported that they were in the process of implementing an AVS, while the remaining 33 states had yet to begin implementation. When asked about the challenges to implementing an AVS, 32 states reported that they did not have enough resources—such as money, staff, or time—required to implement such a system, and 18 states reported that it had been or would be challenging to get financial institutions to participate and provide information. One state reported that it had initially planned to have its AVS implemented by December 2011, but was unable to do so because financial institutions in the state were unwilling to participate in the AVS until state legislation is passed that releases the financial institutions from any liability, ensures they are fairly reimbursed for their services, and makes the process voluntary. The state Medicaid program is seeking such legislation during the state’s 2012 legislative session and then plans to proceed with implementing its AVS. CMS acknowledged that states may have challenges that could affect their ability to implement an AVS as scheduled. CMS officials were aware of states’ progress in implementing the AVS and told us that the agency was regularly communicating with states regarding AVS implementation. On the basis of states’ responses to questions about the documentation required from applicants and the asset information obtained from third parties, it is unclear whether some states obtain sufficient information to implement certain DRA provisions, particularly the provisions related to the look-back period and home equity. The results of our survey raise questions about some states’ implementation of the DRA, but are not conclusive, and we have additional work planned related to Medicaid long-term care financial eligibility. Look-back Period. 
We asked states about (1) the number of months of financial and investment resources documentation required to determine eligibility, (2) additional documentation required to identify assets transferred for less than fair market value, and (3) the number of months of documentation obtained directly from financial institutions. When considering states’ responses to those questions, we found that 31 states reported obtaining less than 60 months of information about at least some applicants’ assets. Three of the 31 states reported requiring a single month of documentation from applicants and did not obtain any information from financial institutions. CMS officials noted that it is costly and time-consuming to conduct a review for the 60-month look-back period. Thus, these officials stated, it was understandable for states to use discretion and only conduct reviews when there is reason to believe that a transfer could have been made during the look-back period. For example, a state might determine the need to conduct a more thorough review as a result of red flags found through other checks, such as when an applicant has very high income and no resources. However, the application forms in 6 of the 31 states did not ask about transfers made during the entire look-back period. Thus, it is unclear how these 6 states would know whether assets were transferred for less than FMV in the 60 months prior to application, and how all 31 states would be able to detect unreported transfers of assets made during the entire look-back period. In contrast, 20 states reported requiring 60 months of documentation from all applicants, 5 of which also requested 60 months of information from financial institutions for at least some applicants. Home Equity. Fourteen states reported not requiring documentation of a primary residence.
Of these 14 states, 8 indicated that they conduct property record searches in the county of residence for at least some applicants to try to obtain information about property the applicant may own. Additionally, 1 state indicated that it could obtain information about an applicant’s primary residence from a third party. The remaining 5 states reported they did not conduct property records searches; as such, it is unclear how these states would determine if an applicant owns a home that he or she failed to report, and the value of an applicant’s equity interest in the home. Of the 37 states that reported requiring applicants to submit documentation of a primary residence, only 3 reported requiring documentation that could provide the state with information on the value of the home or an applicant’s equity interest in the home. The remaining 34 states reported requiring documentation about applicants’ primary residence, but the documentation received may not provide all of the information necessary to determine if applicants’ equity interest in their homes exceeds the state’s allowable amount. Life Estates. Among the 32 states that provided information on our survey about life estates, 2 states reported not assessing the length of time an applicant with a life estate resided in the property. The remaining 30 states reported requiring some type of proof in order to determine the amount of time an applicant resided in the property after the purchase of the life estate interest. On the basis of our analysis of state responses, 7 of the 30 states reported relying only on a statement from an applicant or another person who owns the residence to determine the length of time an applicant resided in the property; however, 2 of these 7 states said that they would require more documentation if they determined that the information they received was questionable. 
The remaining 23 states reported relying on documentation, such as a utility bill; a statement from an applicant; a statement from a third party; or some combination of these sources to determine the length of time an applicant lived in the property. CCRCs. On the basis of our survey results, most states—46 of 51—reported requiring applicants to provide documentation of CCRC or life care community entrance fees, such as a copy of the contract or agreement. Thus, these states should have sufficient information to determine if, under the DRA, an applicant’s entrance fees should be countable resources for determining Medicaid eligibility. However, it is unclear how the remaining 5 states would be able to determine if an applicant has paid such fees, and whether they should be counted toward the applicant’s Medicaid long-term care eligibility determination. Promissory Notes. Fifty states reported on our survey that they require applicants to provide documentation of promissory notes or loans; such documentation should allow the state to determine if a note meets the requirements specified in the DRA, such as providing for payments to be made in equal installments throughout the course of the loan, or if a note should be treated as a transfer of assets for less than FMV. Annuities. In responding to our survey, all states reported requiring documentation of annuities and thus should have sufficient information to determine whether an annuity should be considered a transfer of assets for less than FMV under the DRA. Additionally, our review of 49 states’ long-term care application forms found that 45 required the disclosure of any interest the applicant or spouse has in an annuity and 27 contained statements regarding the state becoming a remainder beneficiary of such annuities. (See app. VIII.) 
As the demand for long-term care services increases and federal and state resources continue to be strained, it is important to ensure that only eligible individuals receive Medicaid coverage for long-term care. Since each state is responsible for day-to-day implementation of its Medicaid program, variation in policies and practices for determining financial eligibility is expected. However, some of the variation we found may raise questions regarding how states determine Medicaid eligibility for long-term care and enforce certain provisions of the DRA. States must balance the costs of eligibility determination efforts with the need to ensure that those efforts provide sufficient information to implement federal requirements. While third-party verification of applicants’ financial information likely provides states with the best assurance of having a complete picture of an applicant’s financial status, it can be a complex and costly process that requires a significant amount of information and review. Given the complexities involved, it may be reasonable for states to adhere to a risk-based approach and focus their eligibility determination efforts on applicants who appear to be more likely to have assets or to have transferred assets that would make them ineligible. The electronic AVS that is required by law may help states identify some unreported or transferred assets. However, it is too early to assess its overall effectiveness, which will ultimately depend on the breadth of the financial institutions participating and the depth of the information obtained. We provided a draft of this report to HHS for its review, and HHS provided written comments (see app. IX). HHS concurred with our findings and noted that the results of our comprehensive report will serve as a resource for all interested parties. Further, HHS indicated that the report will be helpful for targeting CMS’s ongoing technical assistance and oversight efforts with states.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of CMS and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

State reported it did not count a primary residence when determining an applicant’s eligibility—and therefore did not require documentation—if an applicant was living in the home or intended to return home.
State reported it did not require documentation of primary residence unless it appeared as though the primary residence could exceed the allowable amount of equity interest.
State reported it did not require documentation of primary residence because it was not countable when determining an applicant’s eligibility.
State reported it did not count a primary residence when determining an applicant’s eligibility—and therefore did not require documentation—if an applicant’s spouse, child under 20 years old, or disabled child was living in the home.
State reported requiring documentation of vehicle loan value only if it affected eligibility determination.
State reported requiring documentation of a life insurance policy if the applicant reported that it had a face value of $10,000 or more.
State reported it did not require documentation of primary residence unless the applicant reported the value of the residence was within $200,000 of the allowable amount of equity interest.
Appendix IV: States’ Contact with Financial Institutions to Verify Applicants’ Assets

Appendix V: States’ Use of Property Records Searches to Verify Applicants’ Assets

Appendix VIII: GAO Analysis of Annuity Language Contained in States’ Application Forms

In addition to the contact named above, Michelle B.
Rosenberg, Assistant Director; Emily Binek; Julianne Flowers; Kaycee M. Glavich; Shirin Hormozi; Emily Loriso; Christina Ritchie; and Phillip J. Stadler made key contributions to this report.

Medicaid, a joint federal-state health care financing program for certain low-income individuals, paid for nearly half of the nation’s $263 billion in long-term care expenditures in 2010. To be financially eligible for Medicaid coverage for long-term care, applicants cannot have assets (income and resources) above certain limits. Federal law discourages individuals from artificially impoverishing themselves in order to establish financial eligibility for Medicaid. Specifically, those who transfer assets for less than fair market value during a specified time period (or look-back period) before applying for Medicaid may be ineligible for coverage for long-term care for a period of time. The DRA extended the look-back period to 60 months and introduced new requirements for the treatment of certain types of assets, such as annuities, in determining eligibility. States are responsible for assessing applicants’ eligibility for Medicaid, the criteria for which vary by state. GAO was asked to provide information on states’ requirements and practices for assessing the financial eligibility of applicants for Medicaid long-term care coverage. GAO examined the extent to which states (1) require documentation of assets from applicants, (2) obtain information from third parties to verify applicants’ assets, and (3) obtain information about applicants’ assets that could be used to implement eligibility-related DRA provisions. From October 2011 to November 2011, GAO surveyed Medicaid officials from each of the 50 states and the District of Columbia. GAO also interviewed officials from CMS, the agency within HHS that oversees Medicaid. States reported requiring applicants to provide documentation for most of the 13 types of assets included in GAO’s survey.
States varied in the extent to which they obtained information from third parties to verify applicants’ assets. For example, all states conducted data matches with the Social Security Administration but used other sources to a lesser extent. While states’ implementation of an electronic asset verification system (AVS) was required on a rolling basis beginning in 2009, no state had fully implemented an AVS at the time of GAO’s survey. Among the implementation challenges reported by states were lack of resources and getting financial institutions to participate. Officials from the Centers for Medicare & Medicaid Services (CMS) were aware of states’ progress and challenges and told GAO that they regularly communicated with states on AVS implementation. On the basis of states’ responses to questions about the extent of documentation required from applicants and information obtained from third parties, it is unclear whether some states obtain sufficient information to implement certain provisions of the Deficit Reduction Act of 2005 (DRA). For example, 31 states reported requiring less than 60 months of documentation from applicants and financial institutions. The results of GAO’s survey raise questions about states’ implementation of the DRA, but are not conclusive. CMS officials said that it is reasonable for states to conduct reviews only when there is reason to believe a transfer of assets occurred. GAO has additional work planned related to Medicaid long-term care financial eligibility. States must balance the costs of eligibility determination efforts with the need to ensure that those efforts provide sufficient information to implement federal requirements. Given the complexities involved, it may be reasonable for states to adhere to a risk-based approach and focus their eligibility determination efforts on applicants who appear to be more likely to have assets or to have transferred assets that would make them ineligible.
It is too early to assess the effectiveness of the AVS; its utility will ultimately depend on the breadth of the financial institutions participating and the depth of the information obtained. The Department of Health and Human Services (HHS) concurred with GAO’s findings and commented that GAO’s comprehensive report will serve as a helpful resource for CMS and other interested parties.
From the beginning of the Manhattan Project in the 1940s, a primary mission of DOE and its predecessor organizations has been to design, test, and build the nation’s nuclear weapons. To accomplish this mission, DOE constructed a vast nuclear weapons complex throughout the United States. Much of this complex was devoted to the production and fabrication of weapons components made from two special nuclear materials—plutonium and highly enriched uranium. The end of the Cold War changed the department’s focus from building new weapons to extending the lives of existing weapons, disposing of surplus nuclear material, and cleaning up no longer needed weapons sites. NNSA is responsible for extending the lives of existing weapons in the stockpile and for ultimately disposing of surplus nuclear material, while EM is responsible for cleaning up former nuclear weapons sites. Contractors, who are responsible for protecting classified information, nuclear materials, nuclear weapons, and nuclear weapons components, operate both NNSA and EM sites. In addition to NNSA and EM, DOE has two other important security organizations. DOE’s Office of Security develops and promulgates orders and policies, such as the DBT, to guide the department’s safeguards and security programs. DOE’s Office of Independent Oversight and Performance Assurance supports the department by, among other things, independently evaluating the effectiveness of contractors’ performance in safeguards and security. It also performs follow-up reviews to ensure that contractors have taken effective corrective actions and appropriately addressed weaknesses in safeguards and security. The key component of DOE’s well-established, risk-based security practices is the DBT, a classified document that identifies the characteristics of the potential threats to DOE assets. 
The DBT has been traditionally based on a classified, multiagency intelligence community assessment of potential terrorist threats, known as the Postulated Threat. The DBT considers a variety of threats in addition to terrorists. Other adversaries considered in the DBT include criminals, psychotics, disgruntled employees, violent activists, and spies. The DBT also considers the threat posed by insiders, individuals who have authorized, unescorted access to any part of DOE facilities and programs. Insiders may operate alone or may assist an adversary group. Insiders are routinely considered to provide assistance to the terrorist groups found in the DBT. The threat from terrorist groups is generally the most demanding threat contained in the DBT. DOE counters the terrorist threat specified in the DBT with a multifaceted protective system. While specific measures vary from site to site, all protective systems at DOE’s most sensitive sites employ a defense-in-depth concept that includes a variety of integrated alarms and sensors capable of detecting intruders; physical barriers, such as fences and antivehicle obstacles; numerous access control points, such as turnstiles, badge readers, vehicle inspection stations, special nuclear material detectors, and metal detectors; operational security procedures, such as a “two-person” rule that ensures no single individual can gain lone access to special nuclear material; hardened facilities and/or vaults; and a heavily armed paramilitary protective force equipped with such items as automatic weapons, night vision equipment, body armor, and chemical protective gear. Depending on the material, protective systems at DOE Category I special nuclear material sites are designed to accomplish the following objectives in response to the terrorist threat:

Denial of access.
For some potential terrorist objectives, such as the creation of an improvised nuclear device, DOE may employ a protection strategy that requires the engagement and neutralization of adversaries before they can acquire hands-on access to the assets.

Denial of task. For nuclear weapons or nuclear test devices that terrorists might seek to steal, DOE requires the prevention and/or neutralization of the adversaries before they can complete a specific task, such as stealing such devices.

Containment with recapture. Where the theft of nuclear material (instead of a nuclear weapon) is the likely terrorist objective, DOE requires that adversaries not be allowed to escape the facility and that DOE protective forces recapture the material as soon as possible. This objective requires the use of specially trained and well-equipped special response teams.

The effectiveness of the protective system is formally and regularly examined through vulnerability assessments. A vulnerability assessment is a systematic evaluation process in which qualitative and quantitative techniques are applied to detect vulnerabilities and arrive at effective protection of specific assets, such as special nuclear material. To conduct such assessments, DOE uses, among other things, subject matter experts, such as U.S. Special Forces; computer modeling to simulate attacks; and force-on-force performance testing, in which the site’s protective forces undergo simulated attacks by a group of mock terrorists. The results of these assessments are documented at each site in a classified document known as the Site Safeguards and Security Plan. In addition to identifying known vulnerabilities, risks, and protection strategies for the site, the Site Safeguards and Security Plan formally acknowledges how much risk the contractor and DOE are willing to accept.
Specifically, for more than a decade, DOE has employed a risk management approach that seeks to direct resources to its most critical assets—in this case Category I special nuclear material—and mitigate the risks to these assets to an acceptable level. Levels of risk—high, medium, and low—are assigned classified numerical values and are derived from a mathematical equation that compares a terrorist group’s capabilities with the overall effectiveness of the crucial elements of the site’s protective forces and systems. Historically, DOE has striven to keep its most critical assets at a low risk level and may insist on immediate compensatory measures should a significant vulnerability develop that increases risk above the low risk level. Compensatory measures could include such things as deploying additional protective forces or curtailing operations until the asset can be better protected. In response to a September 2000 DOE Inspector General’s report recommending that DOE establish a policy on what actions are required once high or moderate risk is identified, in September 2003, DOE’s Office of Security issued a policy clarification stating that identified high risks at facilities must be formally reported to the Secretary of Energy or Deputy Secretary within 24 hours. In addition, under this policy clarification, identified high and moderate risks require corrective actions and regular reporting. Through a variety of complementary measures, DOE ensures that its safeguards and security policies are being complied with and are performing as intended. Contractors perform regular self-assessments and are encouraged to uncover any problems themselves. In addition to routine oversight, DOE Orders require field offices to comprehensively survey contractors’ operations for safeguards and security every year. 
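The graded risk determination described above can be sketched in simplified form. The actual equation, numerical values, and thresholds DOE uses are classified; everything below is an invented illustration of the general shape the report describes, in which residual risk rises with adversary capability and falls with the effectiveness of the site's protective forces and systems.

```python
# Illustrative sketch only: the real DOE risk equation and its numerical
# values are classified. The function and thresholds below are invented.

def risk_level(adversary_capability: float, system_effectiveness: float) -> str:
    """Both inputs are normalized to [0, 1]. Returns 'low', 'moderate',
    or 'high' under hypothetical cutoffs."""
    # Residual risk: what a capable adversary achieves against an
    # imperfectly effective protective system.
    residual_risk = adversary_capability * (1.0 - system_effectiveness)
    if residual_risk < 0.1:    # hypothetical cutoff
        return "low"
    if residual_risk < 0.3:    # hypothetical cutoff
        return "moderate"
    return "high"
```

Under these invented thresholds, a protective system that is 95 percent effective holds even a highly capable adversary to low risk, while the same adversary against a 50 percent effective system yields high risk, the kind of result that, per DOE policy, would require compensatory measures and reporting.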
These surveys, which can draw upon subject matter experts throughout the complex, generally take about 2 weeks to conduct and cover such areas as program management, protection program operations, information security, nuclear materials control and accountability, and personnel security. The survey team assigns ratings of satisfactory, marginal, or unsatisfactory. DOE’s Office of Independent Oversight and Performance Assurance provides yet another check through its comprehensive inspection program. This office performs such inspections roughly every 18 months at each DOE site that has specified quantities of Category I special nuclear material. All deficiencies (findings) identified during a survey require the contractors to take corrective action. DOE took immediate steps to improve physical security in the aftermath of the September 11, 2001, terrorist attacks. These steps included the following:

Raised the level of security readiness. Presidential Decision Directive 39, issued in June 1995, states that the United States shall give the highest priority to developing effective capabilities to detect, prevent, and defeat terrorists seeking nuclear weapons or materials. In response, DOE Notice 473.6 specifies SECONs that have to be implemented at its Category I special nuclear material sites in response to a terrorist threat. On September 11, 2001, within a matter of hours, DOE sites went from their then-normal SECON level 4—terrorist threat level low—to SECON level 2—terrorist threat level high. Sites were required to implement nearly 30 additional measures, such as increasing vehicle inspections and badge checks; increasing stand-off distances between public and sensitive areas to protect against large vehicle bombs; activating and manning emergency operations centers on a continuous basis; and more heavily arming and increasing the number of protective forces on duty.
Sites maintained SECON level 2 through October 2001 before dropping to an enhanced SECON level 3. The sites have returned to SECON level 2 several times since September 11, 2001, most recently in December 2003, when the national threat warning system was elevated to Orange Alert. The new baseline for security at DOE sites is generally assumed to be the measures currently associated with SECON level 3.

Denial protection strategies. On October 3, 2001, the Secretary of Energy issued a classified directive ordering all sites to develop and implement plans to move to a denial protection strategy. DOE Manual 5632.1C-1 states that a denial protection strategy should be used where unauthorized access presents an unacceptable risk. In this regard, denial programs are designed to prevent an unauthorized opportunity to credibly initiate a nuclear dispersal or detonation or to use available materials for on-site assembly of an improvised nuclear device. Denial has typically been understood to mean that terrorists would never gain access to certain types of special nuclear material. The October 2001 directive also increased levels of performance testing for the protection of special nuclear material at DOE’s most critical facilities to ensure that these denial strategies were effective.

Conducted security reviews, studies, and analyses. DOE conducted a number of security-related reviews, studies, and analyses. For example, within days after the terrorist attacks, DOE and NNSA officials conducted a classified assessment of their facilities’ vulnerabilities to an attack by aircraft, such as the attacks that occurred on September 11, 2001, or large vehicle bombs. NNSA also organized a 90-day Combating Terrorism Task Force, composed of 12 federal and contractor employee teams that looked at a number of security areas.
One team, the site-by-site security review and vulnerability assessment group, identified and set priorities for over 80 security improvement projects, totaling more than $2 billion, that could be completed within 5 to 6 years. These projects ranged from hiring additional protective forces to consolidating special nuclear material.

Increased liaison with federal, state, and local authorities. Before the September 11 terrorist attacks, DOE headquarters offices and sites maintained a variety of relationships, memoranda of understanding, and other formal and informal communications with organizations such as the Federal Aviation Administration, Federal Bureau of Investigation, and state and local law enforcement and emergency management agencies. After the terrorist attacks, DOE officials increased their communications with these organizations and established direct links through sites’ emergency operations centers. Because of the potential threat of aircraft attacks created by the September 11 attacks and because of such attacks’ potentially devastating consequences, sites worked closely with the Federal Aviation Administration and the U.S. military. Several benefits have resulted from these immediate measures. With respect to improved security, DOE security officials believe that the implementation of SECON levels 2 and 3 has, for example, increased the visible deterrence at DOE sites by placing more protective forces around the sites. Studies and analyses have also resulted in different and less vulnerable storage strategies for some special nuclear material. For example, one NNSA site purchased special fire and blast-resistant safes to store special nuclear material. Finally, some long-recognized security enhancement projects have received more funding, such as the construction of a new storage facility at an NNSA site, and efforts to control access to public areas and roads adjacent to several NNSA sites.
While these measures have produced several positive outcomes, they have also had the following negative impacts: First, the role of the implemented SECON measures in improving DOE physical security is uncertain. While DOE Notice 473.6, which established the department’s SECON levels, does not explicitly require SECON measures to be performance tested, DOE Manual 473.2-2 states that performance tests must be used to realistically evaluate and verify the effectiveness of protective force programs. While some of the SECON measures, such as vehicle inspection checkpoints, have undergone some limited performance testing of their effectiveness, most DOE sites generally have not assessed the SECON level measures in place using the vulnerability assessment tools, such as computer modeling and full-scale force-on-force performance tests, that play such a key role in developing and verifying protective strategies at their sites. Consequently, the effectiveness of SECON measures against other aspects of the 2003 DBT, such as a larger group of well-armed terrorists, is largely unknown. In its comments on our report, DOE agreed to explore procedures to incorporate the evaluation of increased SECON levels into its vulnerability assessments. Second, increased SECON measures have been expensive. DOE sites estimate that it costs each site from $18,000 to nearly $200,000 per week in unplanned expenditures to implement the required SECON level 2 and 3 measures. Most of these expenses result from overtime pay to protective forces. The costs of the higher SECON levels, however, can be measured in more than just budget dollars. Specifically, a June 2003 DOE Inspector General’s report found that the large amounts of overtime needed to meet the higher SECON requirements have resulted in fatigue, reduced readiness, retention problems, reduced training, and fewer force-on-force performance tests for the protective forces. 
Additional protective forces have been hired and trained in an effort to provide some relief; however, the DOE Inspector General has found that the deployment of additional protective forces has been delayed by slow processing of the necessary security clearances. Third, the increased operational costs associated with the higher SECON levels can hinder or preclude sites from making investments that could improve their security over the long term. For example, according to an NNSA security official, because of the high costs of maintaining SECON measures, one site had to delay purchasing weaponry and ammunition for its protective forces to use to defeat commercially available armored vehicles that could be used by terrorists. Fourth, the sites did not complete the implementation of the Secretary’s October 3, 2001, denial directive because of confusion over its meaning and because of the projected high costs of implementation. Over the years, DOE has issued varying guidance on denial protection strategies and, as a result, the sites have approached denial protection from different perspectives. For example, some NNSA sites and operations have implemented the most stringent form of denial, which is now defined as denial of access. In contrast, other NNSA sites have plans in place to interrupt terrorists who have gained access to materials, now called a denial of task protection strategy. Most EM sites have practiced containment protection strategies augmented by recapture and recovery capabilities. For sites that did not already have a denial strategy in place, moving to a full denial of access strategy appears to be enormously expensive, with some sites estimating it would cost from about $30 million to $200 million to implement the directive completely. Moreover, the performance testing required by this directive has generally not been conducted because of the already large amounts of protective force overtime required by the higher SECON levels.
For example, an NNSA security official at one site estimated it would have to conduct as many as 30 full-scale force-on-force performance tests each year to comply with the Secretary’s Directive. The 2003 DBT, however, has now replaced this directive by explicitly defining denial of access and denial of task protection strategies and when these strategies should be employed. Finally, while liaison with other agencies is important, DOE officials anticipate that any terrorist attacks on their facilities will be short and violent and be over before any external responders can arrive. In addition, because some DOE sites are close to airports and/or major flight routes, they may receive little warning of aircraft attacks, and U.S. military aircraft may have little opportunity to intercept these attacks. Under DOE Order 470.1, the DBT is intended to provide the foundation for all of DOE’s protective strategies. For example, DOE Order 473.2 states that protective forces must be trained and equipped to defeat the terrorist groups contained in the DBT. In the immediate aftermath of September 11, 2001, DOE officials realized that the then current DBT, issued in April 1999 and based on a 1998 intelligence community assessment, was obsolete. The September 11, 2001, terrorist attacks suggested larger groups of terrorists, larger vehicle bombs, and broader terrorist aspirations to cause mass casualties and panic than were envisioned in the 1999 DOE DBT. However, formally recognizing these new threats by updating the DBT was difficult because of debates over the size of the future threat, the cost to meet it, and the DOE policy process. The traditional basis for the DBT has been the Postulated Threat, which is conducted by the U.S. intelligence community, principally DOD’s Defense Intelligence Agency, and the security organizations of a number of different agencies, including DOE.
For example, DOE closely based its 1999 DBT on the 1998 Postulated Threat assessment and adopted the same number of terrorists as identified by the 1998 Postulated Threat as its highest threat to its facilities. Efforts to revise the Postulated Threat began soon after the terrorist attacks of September 11, 2001. The intelligence community originally planned to complete the Postulated Threat by April 2002; however, the document was not completed and officially released until January 2003, about 9 months behind the original schedule. According to DOE and DOD officials, this delay was the result of other post-September 11, 2001, demands placed on the intelligence community, as well as sharp debates among the organizations involved with developing the Postulated Threat over the size and capabilities of future terrorist threats and the resources needed to meet these projected threats. While waiting for the new Postulated Threat, DOE developed a number of draft documents that culminated in the final May 20, 2003, DBT. These documents included the following:

December 2001—Interim Joint Threat Policy Statement. DOE and DOD worked on this joint draft document but abandoned this effort later in 2002 because neither agency wanted to act without the benefit of the Postulated Threat.

January 2002—Interim Implementing Guidance. DOE’s Office of Security issued this guidance so that DOE programs could begin to plan and budget for eventual increases in the DBT. This interim guidance suggested that sites begin planning for an increased number of adversaries over the 1999 DBT.

May 2002—Draft DBT. DOE produced its first official draft DBT and labeled it an interim product pending the release of the Postulated Threat.

August 2002—Second Draft DBT. This draft introduced the graded threat approach, which is an important feature in the final DBT.

December 2002—Third Draft DBT.

April 2003—Fourth Draft DBT. This draft was the first to consider the final January 2003 Postulated Threat.
May 2003—Final DBT.

During the development of the DBT, DOE officials, like the participants responsible for developing the Postulated Threat, debated the size of the future terrorist threat and the costs to meet it. DOE officials at all levels told us that concern over resources played a large role in developing the 2003 DBT, with some officials calling the DBT the “funding basis threat,” or the maximum threat the department could afford. This tension between threat size and resources is not a new development. According to a DOE analysis of the development of prior DBTs, political and budgetary pressures and the apparent desire to reduce the requirements for the size of protective forces appear to have played a significant role in determining the terrorist group numbers contained in prior DBTs. Finally, DOE developed the DBT through the standard DOE review and comment process for developing policy as outlined in DOE Order 251.1A and DOE Manual 251.1-1A. This process emphasizes developing consensus, resolving conflicts, and involving a wide number of DOE organizations and affected contractors. Once DOE formulates a proposed policy, it typically allows 60 days for review and comment and 60 days for issue resolution. While developing the 2003 DBT, DOE’s Office of Security distributed the draft DBTs to DOE program and field offices and invited them to provide comments. Field offices distributed the drafts to contractors, who were also invited to provide comments. DOE’s Office of Security considered these comments and often incorporated them into the next version of the DBT. DOE’s Office of Security also continued to coordinate with the other federal organizations that have similar assets, chiefly DOD and the Nuclear Regulatory Commission. Having followed this process for 21 months, the Deputy Secretary of Energy signed the revised DBT in May 2003.
According to the Director of Policy in DOE’s Office of Security, the DBT was developed as fast as possible, given delays in completing the Postulated Threat and the constraints of the DOE policy system. He added that using the DOE policy process was difficult and time-consuming and inevitably added to delays in issuing the new DBT. Many officials in DOE’s program offices and sites, as well as contractor officials, also found the process to be laborious and not timely, especially given the more dangerous threat environment that existed after the September 11, 2001, terrorist attacks. During the 21 months it took to develop the DBT, DOE sites still officially followed the 1999 DBT, although their protective posture was augmented by implementing SECON level 2 and 3 measures. EM sites continued to conduct vulnerability assessments and develop Site Safeguards and Security Plans based on the 1999 DBT. In contrast, NNSA largely suspended the development of Site Safeguards and Security Plans pending the issuance of the new DBT, although NNSA did embark on a new vulnerability assessment process, called Iterative Site Analysis. NNSA performed Iterative Site Analysis exercises at a number of its sites. EM also conducted an Iterative Site Analysis at one site. Also during this period, DOE’s Office of Independent Oversight and Performance Assurance continued its inspections; however, it initially reduced the amount of force-on-force performance testing it conducted because of the high levels of protective force overtime caused by implementation of SECON level 2 and 3 measures. This office also planned to begin performance testing at levels higher than the 1999 DBT, but it had done so only once before the 2003 DBT was issued. Reflecting the post-September 11, 2001, environment, the May 2003 DBT, among other things, identifies a larger terrorist threat than did the previous DBT. 
It also mandates specific protection strategies and expands the range of terrorist objectives to include radiological, biological, and chemical sabotage. However, the threat identified in the new DBT, in most cases, is less than the terrorist threat identified in the intelligence community’s Postulated Threat. Key features of the 2003 DBT include the following:

Expanded terrorist characteristics and goals. The 2003 DBT assumes that terrorist groups are the following: well armed and equipped; trained in paramilitary and guerrilla warfare skills and small unit tactics; highly motivated; willing to kill, risk death, or commit suicide; and capable of attacking without warning. Furthermore, according to the 2003 DBT, terrorists might attack a DOE or NNSA facility to achieve a variety of goals, including the theft of a nuclear weapon, nuclear test device, or special nuclear material; radiological, chemical, or biological sabotage; and the on-site detonation of a nuclear weapon, nuclear test device, or special nuclear material that results in a significant nuclear yield. DOE refers to such a detonation as an improvised nuclear device.

Increased size of the terrorist group threat. The 2003 DBT increases the terrorist threat levels for the theft of the department’s highest value assets—Category I special nuclear materials—although not in a uniform way. Previously, under the 1999 DBT, all DOE sites that possessed any type of Category I special nuclear material were required to defend against a uniform terrorist group composed of a relatively small number of individuals. Under the 2003 DBT, however, the department judges the theft of a nuclear weapon or test device to be more attractive to terrorists, and sites that have these assets are required to defend against a substantially higher number of terrorists than are other sites. For example, an NNSA site that, among other things, assembles and disassembles nuclear weapons is required to defend against a larger terrorist group.
Other NNSA sites, some of which fabricate nuclear weapons components, or EM sites that store excess plutonium, only have to defend against a smaller group of terrorists. However, the number of terrorists in the 2003 DBT is larger than the number in the 1999 DBT. DOE calls this a graded threat approach.

Mandated specific protection strategies. In line with the graded threat approach and depending on the type of materials they possess and the likely mission of the terrorist group, sites must now implement specific protection strategies, such as denial of access, denial of task, or containment with recapture for their most sensitive facilities and assets. For example, one NNSA site is required under the new DBT to implement a denial of task strategy to prevent terrorists from stealing a nuclear weapon or test device. In contrast, other DOE sites are required to implement a containment with recapture strategy to prevent the theft of special nuclear material. However, if these sites have an improvised nuclear device concern, they will have to implement denial of access or denial of task strategies. Finally, sites will have to develop, for the first time, specific protection strategies for facilities, such as radioactive waste storage areas, wastewater treatment, and science laboratories, against the threat of radiological, chemical, or biological sabotage. Previously, in an April 1998 policy clarification, DOE’s Office of Security had stated that, assuming that baseline security requirements were met, radiological dispersal sabotage events were not considered attractive to terrorists.

Addressed the potential for improvised nuclear device concerns. The new DBT establishes a team to report to the Secretary of Energy on each site’s potential for improvised nuclear devices. Based on the team’s advice, the Secretary of Energy will have to designate whether a site has such a concern.
This official designation should help address the general dissatisfaction with previous DOE policies for improvised nuclear devices, knowledge of which is carefully controlled and not shared widely with security officials. For example, some EM sites have had no information at all on their potential for this risk, and at least one NNSA site official believed that scenarios for such risks have not been fully characterized.

Introduced aircraft threats and mitigation measures. In the 1999 DBT, DOE only acknowledged the risk of unspecified air attacks but did not lay out any protective measures to mitigate this risk. In the 2003 DBT, DOE considers aircraft as airborne improvised explosive devices. DOE’s new policy is to rely on other federal government agencies, such as the Departments of Homeland Security and Defense, to defeat such a threat. DOE sites are expected, however, to consider measures, such as how they handle and store their materials, to mitigate the consequences of an aircraft attack on existing facilities, and new DOE facility designs are expected to include features to mitigate the consequences of an attack.

While DOE’s 2003 DBT makes some important advances, aspects of the DBT raise several important issues. First, while the May 2003 DBT identifies a larger terrorist group than did the previous DBT, the threat identified in the new DBT in most cases is less than the terrorist threat identified in the intelligence community’s Postulated Threat. The Postulated Threat applies to nuclear weapons sites, which it defines as research and development facilities with nuclear weapons, components, or special nuclear material; weapons production facilities; sites for long-term storage of nuclear weapons; and nuclear weapons in transport. With respect to these sites, the Postulated Threat specified the following:

There is a credible threat to U.S. facilities with nuclear or chemical weapons or biological agents.
A well-organized terrorist group presents the greatest and most likely threat in most circumstances.

Terrorists may use aircraft as weapons.

Terrorists may use multiple vehicle bombs loaded with explosives.

Terrorist groups would probably consist of a small to medium-sized group of well-armed and trained members. A larger force is possible if the group thought this was necessary to attain an important strategic goal.

Terrorist objectives include the theft of a weapon, detonation of a nuclear weapon in place, radiological sabotage, mass casualties, and/or public panic.

In contrast to the Postulated Threat, DOE is preparing to defend against a significantly smaller group of terrorists attacking most of its facilities. Specifically, only for its sites and operations that handle nuclear weapons is DOE currently preparing to defend against an attacking force that approximates the lower range of the threat identified in the Postulated Threat. For the other DOE sites that have Category I special nuclear material—all of which fall under the Postulated Threat’s definition of a nuclear weapons site—DOE is currently only preparing to defend against a smaller number of terrorists, or approximately the same number contained in its DBT in the early 1980s. Second, and more critically, some of these sites may have improvised nuclear device concerns that, if successfully exploited by terrorists, could result in a nuclear detonation. Nevertheless, under the graded threat approach, DOE requires these sites only to be prepared to defend against a smaller force of terrorists than was identified by the Postulated Threat. DOE’s Office of Security cited subject matter expert opinion as support for this distinction.
However, according to officials in DOE’s Office of Independent Oversight and Performance Assurance, sites with improvised nuclear device concerns should be held to the same requirements as facilities that possess nuclear weapons and test devices since the potential worst-case consequence at both types of facilities would be the same—a nuclear detonation. Some DOE officials and an official in DOD’s Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence disagreed with the overall graded threat approach, believing that the threat should not be embedded in the DBT by adjusting the number of terrorists that might attack a particular target. DOE Office of Security officials cited three reasons why the department departed from the Postulated Threat’s assessment of the potential size of terrorist forces. First, these officials stated that they believed that the Postulated Threat only applied to sites that handled completed nuclear weapons and test devices. However, both the 2003 Postulated Threat and the preceding 1998 Postulated Threat state that the threat applies to nuclear weapons and special nuclear material without making any distinction between them. Second, DOE Office of Security officials believed that the higher threat levels contained in the 2003 Postulated Threat represented the worst potential worldwide terrorist case over a 10-year period. These officials noted that while some U.S. assets, such as military bases, are located in parts of the world where terrorist groups receive some support from local governments and societies, thereby allowing for an expanded range of capabilities, DOE facilities are located within the United States, where terrorists would have a more difficult time operating. Furthermore, DOE Office of Security officials stated that the DBT focuses on a nearer-term threat of 5 years.
As such, DOE Office of Security officials said that they chose to focus on what their subject matter experts believed was the maximum, credible, near-term threat to their facilities. However, while the 1998 Postulated Threat made a distinction between the size of terrorist threats abroad and those within the United States, the 2003 Postulated Threat, reflecting the potential implications of the September 2001 terrorist attacks, did not make this distinction. Finally, DOE Office of Security officials stated that the Postulated Threat document represented a reference guide instead of a policy document that had to be rigidly followed. The Postulated Threat does acknowledge that it should not be used as the sole consideration to dictate specific security requirements and that decisions regarding security risks should be made and managed by decision makers in policy offices. However, DOE has traditionally based its DBT on the Postulated Threat. For example, the prior DBT, issued in 1999, adopted exactly the same terrorist threat size as was identified by the 1998 Postulated Threat. Finally, the department’s criteria for determining the severity of radiological, chemical, and biological sabotage may be insufficient. For example, the criterion used for protection against radiological sabotage is based on acute radiation dosages received by individuals. However, this criterion may not fully capture or characterize the damage that a major radiological dispersal at a DOE site might cause. For example, according to a March 2002 DOE response to a January 23, 2002, letter from Representative Edward J. Markey, a worst-case analysis at one DOE site showed that while a radiological dispersal would not pose immediate, acute health problems for the general public, the public could experience measurable increases in cancer mortality over a period of decades after an event.
Moreover, releases at the site could also have environmental consequences requiring hundreds of millions to billions of dollars to clean up. Contamination could also affect habitability for tens of miles from the site, possibly affecting hundreds of thousands of residents for many years. Likewise, the same response showed that a similar event at a NNSA site could result in a dispersal of plutonium that could contaminate several hundred square miles and ultimately cause thousands of cancer deaths. For chemical sabotage, the 2003 DBT requires sites to protect to industry standards. However, we reported last year that such standards currently do not exist. Specifically, we found that no federal laws explicitly require chemical facilities to assess vulnerabilities or take security actions to safeguard their facilities against terrorist attack. Finally, the protection criteria for biological sabotage are based on laboratory safety standards developed by the U.S. Centers for Disease Control and Prevention, not physical security standards. While DOE issued the final DBT in May 2003, it has been slow to resolve a number of significant issues that may affect the ability of its sites to fully meet the threat contained in the new DBT in a timely fashion. Fully resolving these issues may take several years, and the total cost of meeting the new threats is currently unknown. Because some sites will be unable to effectively counter the higher threat contained in the new DBT for up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. In order to undertake the necessary range of vulnerability assessments to accurately evaluate their level of risk under the new DBT and implement necessary protective measures, DOE recognized that it had to complete a number of key activities. DOE only recently completed two of these key activities.
First, in February 2004, DOE issued its Adversary Capabilities List, a classified companion document to the DBT that lists the potential weaponry, tactics, and capabilities of the terrorist group described in the DBT. This document has been amended to include, among other things, heavier weaponry and other capabilities that are potentially available to terrorists who might attack DOE facilities. DOE is continuing to review relevant intelligence information for possible incorporation into future revisions of the Adversary Capabilities List. Second, DOE also only recently provided additional DBT implementation guidance. In a July 2003 report, DOE’s Office of Independent Oversight and Performance Assurance noted that DOE sites had found initial DBT implementation guidance confusing. For example, when the Deputy Secretary of Energy issued the new DBT in May 2003, the cover memo said the new DBT was effective immediately but that much of the DBT would be implemented in fiscal years 2005 and 2006. According to a 2003 report by the Office of Independent Oversight and Performance Assurance, many DOE sites interpreted this implementation period to mean that they should, through fiscal year 2006, only be measured against the previous, less demanding 1999 DBT. In particular, the 2003 report found that one NNSA site was planning to conduct certain operations starting in 2003 that involved special nuclear material using security plans that did not comply with even the 1999 DBT. Consequently, the Office of Independent Oversight and Performance Assurance recommended that the site suspend these planned operations until it had adequate security plans that reflected the new DBT. NNSA security officials concurred with this recommendation and postponed the site’s proposed operations.
In response to this confusion, the Deputy Secretary issued further guidance in September 2003 that called for the following, among other things:

DOE’s Office of Security to issue more specific guidance by October 22, 2003, regarding DBT implementation expectations, schedules, and requirements. DOE issued this guidance on January 30, 2004.

Quarterly reports showing sites’ incremental progress in meeting the new DBT for ongoing activities.

Immediate compliance with the new DBT for new and reactivated operations.

Other important DBT-related issues remain unresolved. First, as noted earlier, a special team created in the 2003 DBT, composed of weapons designers and security specialists, finalized its report on each site’s improvised nuclear device vulnerabilities. The results of this report were briefed to senior DOE officials in March 2004. Based on this team’s report, the Secretary may officially designate some sites as having an improvised nuclear device concern. If this designation is made, some sites may be required under the 2003 DBT to shift to a denial of access or denial of task protection strategy, which could be very costly. This special team’s report may most affect EM sites because their improvised nuclear device potential had not been explored until this review, and their formal protection strategy remains at the less demanding containment with recapture and recovery level. DOE officials have not identified when the Secretary will make these designations. Second, DOE’s Office of Security has not completed all of the activities associated with the new vulnerability assessment methodology it has been developing for over a year.
DOE’s Office of Security believes this methodology, which uses a new mathematical equation for determining levels of risk, will result in a more sensitive and accurate portrayal of each site’s defenses-in-depth and the effectiveness of sites’ protective systems (i.e., physical security systems and protective forces) when compared with the new DBT. DOE’s Office of Security decided to develop this new equation because its old mathematical equation had been challenged on technical grounds and did not give sites credit for the full range of their defenses-in-depth. While DOE’s Office of Security completed this equation in December 2002, officials from this office believe it will probably not be completely implemented at the sites for at least another year for two reasons. First, site personnel who implement this methodology will require additional training to ensure they are employing it properly. DOE’s Office of Security conducted initial training in December 2003, as well as a prototype course in February 2004, and has developed a nine-course vulnerability assessment certification program. Second, sites will have to collect additional data to support the broader evaluation of their protective systems against the new DBT. Collecting these data will require additional computer modeling and force-on-force performance testing. Because of the slow resolution of some of these issues, DOE has not developed any official long-range cost estimates or developed any integrated, long-range implementation plans for the May 2003 DBT. Specifically, neither the fiscal year 2003 nor 2004 budgets contained any provisions for DBT implementation costs. However, during this period, DOE did receive additional safeguards and security funding through budget reprogramming and supplemental appropriations. DOE used most of these additional funds to cover the higher operational costs associated with the increased SECON measures. 
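DOE’s new risk equation is classified and is not described in this report. For context only, the generic conditional-risk formulation long used in physical-security vulnerability assessment can be sketched as follows; the function name, probability values, and normalized consequence scale below are illustrative assumptions, not DOE’s actual methodology.

```python
def conditional_risk(p_interruption, p_neutralization, consequence):
    """Generic physical-security risk model (illustrative only).

    Given that an attack occurs, risk is the chance the protective
    system fails to both interrupt and neutralize the adversary,
    scaled by the consequence of a successful attack.
    """
    # System effectiveness: the protective force must interrupt the
    # adversary in time AND defeat it once engaged.
    p_effectiveness = p_interruption * p_neutralization
    # Risk = (1 - system effectiveness) x consequence of success.
    return (1 - p_effectiveness) * consequence

# Hypothetical site: interrupts 90% of attacks, neutralizes 80% of
# interrupted adversaries, with a normalized consequence value of 1.0.
risk = conditional_risk(0.9, 0.8, 1.0)  # (1 - 0.72) * 1.0 = 0.28
```

A formulation of this shape illustrates why crediting a site’s full defenses-in-depth matters: every layer that raises the probability of interruption or neutralization lowers the computed risk, so an equation that ignores some layers will overstate a site’s risk.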
DOE has gathered initial DBT implementation budget data and has requested additional DBT implementation funding in the fiscal year 2005 budget: $90 million for NNSA, $18 million for the Secure Transportation Asset within the Office of Secure Transportation, and $26 million for EM. However, DOE officials believe the budget data collected so far have been of generally poor quality because most sites have not yet completed the necessary vulnerability assessments to determine their resource requirements. Consequently, the fiscal year 2006 budget may be the first budget to begin to accurately reflect the safeguards and security costs of meeting the requirements of the new DBT. Reflecting these various delays and uncertainties, in September 2003, the Deputy Secretary changed the deadline for DOE program offices, such as EM and NNSA, to submit DBT implementation plans from the original target of October 2003 to the end of January 2004. NNSA and EM approved these plans in February 2004. A DOE Office of Budget official told us that current DBT implementation cost estimates do not include items such as closing unneeded facilities, transporting and consolidating materials, completing line item construction projects, and other important activities that are outside of the responsibility of the safeguards and security program. For example, EM’s Security Director told us that, for EM to fully comply with the DBT requirements in fiscal year 2006 at one of its sites, it will have to close and de-inventory two facilities, consolidate excess materials into the remaining special nuclear material facilities, and move consolidated Category I special nuclear material, which NNSA’s Office of Secure Transportation will transport, to another site. Likewise, the EM Security Director told us that to meet the DBT requirements at another site, EM will have to accelerate the closure of one facility and transfer special nuclear material to another facility on the site.
The costs to close these facilities and to move materials within a site are borne by the EM program budget and not by the EM safeguards and security budget. Similarly, the costs to transport the material between sites are borne by NNSA’s Office of Secure Transportation budget and not by EM’s safeguards and security budget. A DOE Office of Budget official told us that a comprehensive, department-wide approach to budgeting for DBT implementation that includes such important program activities as described above is needed; however, such an approach does not currently exist. The department plans to complete DBT implementation by the end of fiscal year 2006. However, most sites estimate that it will take 2 to 5 years, if they receive adequate funding, to fully meet the requirements of the new DBT. During this time, sites will have to conduct vulnerability assessments, undertake performance testing, and develop Site Safeguards and Security Plans. Consequently, full DBT implementation could occur anywhere from fiscal year 2005 to fiscal year 2008. Some sites may be able to move more quickly and meet the department’s deadline of the end of fiscal year 2006. For example, one NNSA site already has developed detailed plans and budgets to meet the new DBT requirements. While this site may already be close to meeting the new DBT requirements, other DOE sites are at higher risk to the threats specified under the 2003 DBT than they were under the old 1999 DBT. For example, the Office of Independent Oversight and Performance Assurance has concluded in recent inspections that at least two DOE sites face fundamental and not easily resolved security problems that will make meeting the requirements of the new DBT difficult. For other DOE sites, their level of risk under the new DBT remains largely unknown until they can conduct the necessary vulnerability assessments.
Because some sites will be unable to effectively counter the threat contained in the new DBT for a period of up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. DOE took a series of immediate actions in response to the terrorist attacks of September 11, 2001. While each of these actions has been important, in and of itself, we believe they are not sufficient to ensure that all of DOE’s sites are adequately prepared to defend themselves against the higher terrorist threat present in a post-September 11, 2001, world. Rather, DOE must press forward with a series of actions to ensure that it is fully prepared to provide a timely and cost-effective defense. First, DOE needs to know the effectiveness of its most immediate response to September 11, 2001—the move to higher SECON levels. The higher SECON levels, while increasing the level of visible deterrence, have come at a significant cost in budget dollars and protective force readiness. We believe that DOE needs to follow its own policies and use its well-established vulnerability assessment methodology to evaluate the effectiveness of these additional security measures. Second, because the September 11, 2001, terrorist attacks suggested larger groups of terrorists with broader aspirations of causing mass casualties and panic, we believe that the DBT development process that was used requires reexamination. While DOE may point to delays in the development of the Postulated Threat as the primary reason for the almost 2 years it took to develop a new DBT, DOE was also working on the DBT itself for most of that time. We believe the difficulty associated with developing a consensus using DOE’s traditional policy-making process was a key factor in the time it took to develop a new DBT. During this extended period, DOE’s sites were only being defended against what was widely recognized as an obsolete terrorist threat level.
Third, we are concerned about two aspects of the resulting DBT. We are not persuaded that, in its ability to cause mass casualties or create public panic, the detonation of an improvised nuclear device differs sufficiently from the detonation of a nuclear weapon or test device at or near design yield to warrant setting the threat level at a lower number of terrorists. Furthermore, while we applaud DOE for adding additional requirements to the DBT, such as protection strategies to guard against radiological, chemical, and biological sabotage, we believe that DOE needs to reevaluate its criteria for terrorist acts of sabotage, especially in the chemical area, to make them more defensible from a physical security perspective. Finally, because some sites will be unable to effectively counter the threat contained in the new DBT for a period of up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. Consequently, DOE needs to take a series of actions to mitigate these risks to an acceptable level as quickly as possible. To accomplish this, it is important for DOE to resolve a number of DBT and DBT-related issues and go about the hard business of a comprehensive, department-wide approach to implementing needed changes in its protective strategy. Because the consequences of a successful terrorist attack on a DOE site could be so devastating, we believe it is important for DOE to inform the Congress about which sites are at high risk and what progress is being made to reduce these risks to acceptable levels.
In order to strengthen DOE’s ability to meet the requirements of the new DBT, as well as to strengthen the department’s ability to deal with future terrorist threats, we are making the following seven recommendations to the Secretary of Energy:

Evaluate the cost and effectiveness of existing SECONs and how they are implemented using DOE’s vulnerability assessment methodology.

Review how the DBT is developed to determine if using the current policy-making approach is appropriate given the dynamic post-September 11, 2001, security environment.

Reexamine the current application of the graded threat approach to sites that may have improvised nuclear device concerns.

Reexamine the criteria established in the May 2003 DBT to determine levels of risk from radiological, biological, and chemical sabotage to ensure that they are appropriate from a security standpoint.

Ensure that all remaining DBT and DBT-related issues, such as the designation of improvised nuclear device concerns and the new vulnerability assessment methodology, are completed on an expedited schedule.

Develop and implement a department-wide, multiyear, fully resourced implementation plan for meeting the new DBT requirements that includes important programmatic activities such as the closure of facilities and the transportation of special nuclear materials.

Report regularly to relevant congressional oversight committees on (1) the status of DBT implementation as reflected by the required quarterly DBT implementation progress reports and (2) which sites and facilities are currently considered to be at high risk under the new DBT and what steps are being taken to mitigate these risks to acceptable levels.

We provided DOE with a draft of the classified version of this report for review and comment. In its written comments, DOE said it was committed to the development and promulgation of an accurate and comprehensive DBT policy.
DOE did not comment specifically on our recommendations other than to say that the department would consider them as part of its Departmental Management Challenges for 2004. DOE has identified the DBT as a major departmental initiative within the National Security Management Challenge. In an enclosure attached to its comments, DOE also provided some additional technical information that we incorporated where appropriate. DOE’s letter commenting on our draft report is presented in appendix I. We are sending copies of this report to the Secretary of Energy, the Director of the Office of Management and Budget, and appropriate congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix II. In addition to the individuals named above, Jonathan Gill, Chris Pacheco, Andrea Miller, Chris Abraham, Jill Berman, Carol Hernstadt Shulman, Joyce Evans, and Gail Traynham also made key contributions to this report. | A successful terrorist attack on Department of Energy (DOE) sites containing nuclear weapons or the material used in nuclear weapons could have devastating consequences for the site and its surrounding communities. Because of these risks, DOE needs an effective safeguards and security program. A key component of an effective program is the design basis threat (DBT), a classified document that identifies the potential size and capabilities of terrorist forces. The terrorist attacks of September 11, 2001, rendered the then-current DBT obsolete. 
GAO examined DOE’s response to the September 11, 2001, terrorist attacks; identified why DOE took almost 2 years to develop a new DBT; analyzed the higher threat in the new DBT; and identified the remaining issues that need to be resolved in order for DOE to meet the threat contained in the new DBT. DOE took a series of actions in response to the terrorist attacks of September 11, 2001. While each of these actions has been important, DOE must press forward with additional actions to ensure that it is fully prepared to provide a timely and cost-effective defense. DOE took immediate steps to improve physical security in the aftermath of the September 11, 2001, terrorist attacks. DOE’s most visible effort involved moving to higher levels of security readiness, known as security condition (SECON) levels. While this effort has increased the visible deterrence at DOE sites, it has been expensive and has resulted in fatigue, retention problems, and less training for most sites’ protective forces. In addition, the effectiveness of these increased SECON levels generally has not been assessed using the vulnerability assessment tools, such as computer modeling and full-scale force-on-force exercises, that DOE routinely uses to develop protective force strategies for its sites. Development of the new DBT took almost 2 years because of (1) delays in developing an intelligence community assessment—known as the Postulated Threat—of the terrorist threat to nuclear weapon facilities and (2) DOE’s lengthy comment and review process for developing policy. In addition, during the DBT development process, sharp debates within DOE and other government organizations over the size and capabilities of future terrorist threats, and over the availability of resources to meet these threats, contributed to the delay.
While the May 2003 DBT identifies a larger terrorist threat than did the previous DBT, the threat identified in the new DBT in most cases is less than the threat identified in the intelligence community's Postulated Threat, on which the DBT has been traditionally based. The new DBT identifies new possible terrorist acts such as radiological, chemical, or biological sabotage. However, the criteria that DOE has selected for determining when facilities may need to be protected against these forms of sabotage may not be sufficient. DOE has been slow to resolve a number of significant issues, such as issuing additional DBT implementation guidance, developing DBT implementation plans, and developing budgets to support these plans, that may affect the ability of its sites to fully meet the threat contained in the new DBT in a timely fashion. Consequently, DOE's deadline to meet the requirements of the new DBT by the end of fiscal year 2006 is probably not realistic for some sites.
HHS is the primary organization within the federal government that is devoted to protecting the health of Americans. It provides essential human services, such as ensuring food and drug safety and assisting needy families. HHS administers more grant dollars than all other federal agencies combined, providing over $200 billion of the more than $350 billion in federal funds that were awarded to states and other entities in fiscal year 2002, the most recent year for which these data are available. For fiscal year 2005, HHS had a budget of $581 billion and a workforce of over 67,000 employees. To accomplish its mission, HHS is composed of 12 component agencies and several staff offices that cover a wide range of activities—including conducting and sponsoring medical and social science research, guarding against the outbreak of infectious diseases, assuring the safety of food and drugs, and providing health care services and insurance. The Office of the Secretary consists of several staff divisions and offices, including the Office of the Assistant Secretary for Budget, Technology, and Finance. The HHS Office of the Chief Information Officer (CIO) is located within this staff office (see fig. 1). Information technology investments play a critical role in helping HHS carry out its diverse mission. According to the President’s most recent budget, HHS expects to spend about $5 billion on IT in fiscal year 2006, making the department’s IT investment budget the third largest in the federal government. As figure 2 illustrates, approximately $3 billion is designated as grants to states for investments for Medicaid programs and other purposes, such as child support enforcement systems.
Approximately $2 billion is for discretionary investment spending, of which 89 percent is used to fund IT investments for component agencies; 7 percent is invested in HHS enterprisewide initiatives; and 4 percent is used to fund other initiatives, including Office of the Inspector General IT investments. Table 1 provides additional information about the component agencies and their estimated IT budgets for fiscal year 2006. HHS’s investments reflect the diversity of the department’s missions and operating environments. For example, HHS currently has several enterprisewide IT initiatives that enable stakeholders to advance the causes of better health, safety, and well-being for the American people. These initiatives include the following:

- The Unified Financial Management System, a new core financial system, is to help management monitor budgets, conduct operations, evaluate program performance, and make financial and programmatic decisions. As a core financial system, it will interface with an estimated 110 other HHS information systems.
- The Office of the Assistant Secretary for Public Health Emergency Preparedness maintains a command center where it can coordinate the response to public health emergencies from one centralized location. This center is equipped with satellite teleconferencing capability, broadband Internet hookups, and analysis and tracking software.

In addition, HHS’s component agencies have several projects and systems that are critical to the effective implementation of HHS’s mission, including the following:

- The Food and Drug Administration’s Automated Drug Information Management System is to be developed as a fully electronic information management system that will receive, evaluate, and disseminate information about investigational and marketing submissions for human drugs and therapeutic biologics.
- The National Institutes of Health’s major IT initiative, the Clinical Research Information System, is a comprehensive effort to modernize the systems that support clinical care and the agency’s collection of research data for the intramural clinical research programs.
- The Centers for Disease Control and Prevention’s major IT initiative, the Public Health Information Network, is a national initiative to implement a multiorganizational business and technical architecture for public health information systems.

In January 2004, we reported on a broad view of the government’s implementation of investment management practices at 26 major departments and agencies, including HHS. We also reported—and HHS acknowledged—that there were serious weaknesses in investment management. Notably, the department had not yet established selection criteria for project investments or a requirement that investments support work processes that have been simplified or redesigned. In addition, the department did not have decision-making rules to guide oversight of IT investments, review projects at major milestones, or systematically track corrective actions. Accordingly, we made several recommendations, including that HHS revise its investment management policy and require post-implementation reviews (PIRs) to address validating benefits and costs. In response to our recommendations, the department has been modifying several of its investment management policies, including its capital planning and investment control guidance and its governance policies. More recently, in June 2005, we reported that the HHS IT Investment Review Board had conducted only budgetary reviews of the Centers for Disease Control and Prevention’s Public Health Information Network and some of its initiatives, until this past February, when HHS initiated steps for better monitoring of system development projects.
We concluded that until management implements a systematic method for IT investment reviews, it will have difficulty minimizing risks while maximizing returns on these critical public health investments. HHS has several groups and individuals involved in managing both the enterprisewide and component agency IT investments. They are involved from the review and approval of a proposed IT project, through budgeting for it and monitoring it during implementation, to evaluating it at its conclusion. The composition, roles, and responsibilities of these individuals and groups are described below:

- Information Technology Investment Review Board (ITIRB)—Chaired by HHS’s CIO, this board is responsible for selecting, controlling, and evaluating all departmental IT investments. Members include the Deputy Assistant Secretary for Budget, Finance, Performance and Planning; the Directors for Acquisition Management Policy and Human Resources; and the component agency CIOs. The board is supported by an executive secretary who is responsible for, among other things, managing the flow of IT investment documentation, scheduling meetings, and assisting the members in preparing for their meetings. Currently, this board reviews all enterprisewide investments and delegates responsibility for component agency investments to each component agency’s investment review board in accordance with departmental policies and procedures.
- CIO Council—Also chaired by the HHS CIO and composed of component agency CIOs, this board advises the HHS ITIRB on the technical soundness of all IT investments that require departmental review and provides recommendations regarding, among other things, technical aspects of affordability, soundness of design, risk, and compliance with architectural and security standards.
- Critical Partners—Composed of departmental officials from various functional areas, including enterprise architecture, security and privacy, acquisition management, finance, budget, human resources, and e-government, this group is responsible for ensuring that most investments comply with HHS policy in each of the functional areas and for advising the HHS ITIRB and individual IT investment managers on issues in their areas of expertise. Each review results in a determination of whether the investment is approved, conditionally approved, or not approved. A not approved result is flagged for executive review.
- Business Case Quality Review Team—Composed of component agency officials, this group evaluates the justifications for IT investments—both formal business cases and information documented in the department’s portfolio management tool’s Select forms—against the criteria the Office of Management and Budget uses to evaluate the business cases agencies submit to the office as part of the formulation of the federal budget, and it provides recommendations for improving these justifications.
- Capital Planning and Investment Control (CPIC) Reengineering/Portfolio Management Tool (PMT) Implementation Team—Chaired by Office of the CIO officials, with representatives from the Critical Partners and the Business Case Quality Review Team, this group advises the board on issues regarding investment management policies and procedures and the implementation of the department’s portfolio management tool.
- Investment Managers—Responsible for managing investments in accordance with approved cost, schedule, and performance baselines, and for maintaining information on project status, control, performance, risk, and corrective actions.
The department has defined a three-phase process for managing investments that involves selecting proposed projects and reselecting ongoing projects (select phase), controlling ongoing projects through development (control phase), and evaluating projects that have been deployed (evaluate phase). The department retains direct management of HHS enterprisewide IT investments and delegates considerable authority for other investments to component agencies. Specifically, the department selects ongoing and new component agency investments through the process for selecting enterprisewide IT investments described below. Controlling and evaluating component agency IT investments are delegated to the component agencies, which are required by the department to follow a process similar to the one described below. Each phase of the process for enterprisewide investments comprises multiple steps that set out the requirements the HHS ITIRB needs to decide whether to move forward with the project. The purpose of the select phase is to ensure that HHS chooses the projects that best support its mission and applies resources to the most important and valuable investments. The select phase is also intended to help the department justify budget requests by demonstrating sound business cases and project plans. To select investments, HHS has established two separate components—investment screening for new investment proposals and investment scoring and screening for ongoing investments. During the new investment screening, the investment manager is expected to develop a project prospectus, which identifies a specific business need and preliminary, high-level system requirements. A high-level determination of resource and schedule requirements is also to be conducted as part of the business need identification activities.
Approval of the project prospectus by the HHS ITIRB signifies that the agency agrees that the need is critical enough to proceed to the next step, in which the business case is developed. During business case development, the investment manager is required to develop the business case, which establishes the lifecycle cost, schedule, benefits, and performance baselines and includes an analysis for each investment to identify alternatives that may satisfy the needs of the department. In addition, the investment managers sign a document called the accountability agreement form to accept responsibility for reporting on the project status in achieving performance baselines throughout the remaining phases of the investment management process. After the project is initially approved by the HHS ITIRB, the business cases and Select forms for most IT investments are updated annually as part of the budget formulation process. (The Select forms are a collection of forms within HHS’s portfolio management tool that capture investment data to justify funding and ensure adequate project planning during the select phase.) The first step within the annual budget formulation process requires that all component agencies use the Select forms to report the project cost estimates that best represent the level of funding required to meet program or business needs. At this point, the Critical Partners and the Business Case Quality Review Team use the department’s portfolio management tool to score and rank the Select forms, creating a single HHS portfolio as well as component agency portfolios, and they provide recommendations to the component agencies for making final adjustments to their portfolio rankings. Once the component agencies have made the appropriate changes, the Office of the CIO develops prioritized IT portfolios for HHS as a whole, as well as for each component agency, to present to the HHS ITIRB.
The departmental board and CIO Council review and comment on the prioritized portfolio and submit it to the Secretary’s Budget Council for input into its budget deliberations. The Secretary’s Budget Council then makes recommendations to the Secretary regarding HHS and component agencies’ budgets. Finally, the department submits the Secretary’s approved IT budget to the Office of Management and Budget for inclusion in the President’s Budget. Once selected for inclusion in the department’s IT portfolio, each project is to be managed by an investment manager and reviewed by the ITIRB on a quarterly basis through the end of development. The board performs reviews of projects that deviate from predetermined budget, schedule, or performance milestones established in the business case and works with the investment managers to develop a corrective action plan. The ITIRB must also decide whether to continue to fund the project; rebaseline the scope, schedule, or budget; or terminate the project. Once a project has been fully implemented, the HHS ITIRB is to conduct annual reviews of all HHS enterprisewide steady state investments—that is, investments in operations and maintenance—to determine whether they continue to meet business needs. In addition, investments that have recently completed implementation or a significant phase are to undergo PIRs to evaluate actual development events against project management plans and to identify lessons learned that can be applied to current and future investments. Figure 3 illustrates HHS’s investment management process phases and steps. The highlighted steps represent the activities that the department conducts for both enterprisewide and component agency investments. The ITIM framework is a maturity model composed of five progressive stages of maturity that an agency can achieve in its investment management capabilities.
It was developed on the basis of our research into the IT investment management practices of leading private- and public-sector organizations. In each of the five stages, the framework identifies critical processes for making successful IT investments. The maturity stages are cumulative; that is, in order to attain a higher stage, the agency must have institutionalized all of the critical processes at the lower stages, in addition to the higher stage critical processes. The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. We have used the framework in several of our evaluations, and a number of agencies have adopted it. These agencies have used ITIM for purposes ranging from self-assessment to redesign of their IT investment management processes. ITIM’s five maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds on the lower stages; the successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process.
It is not unusual for an organization to be performing key practices from more than one maturity stage at the same time, but efforts to improve investment management capabilities should focus on implementing all lower stage practices before addressing higher stage practices. In the ITIM framework, Stage 2 critical processes lay the foundation for sound IT investment processes by helping the agency to attain successful, predictable, and repeatable investment control processes at the project level. Specifically, Stage 2 encompasses building a sound investment management foundation by establishing basic capabilities for selecting new IT projects. It also involves developing the capability to control projects so that they finish predictably within established cost and schedule expectations and the capability to identify potential exposures to risk and put in place strategies to mitigate that risk. The basic selection processes established in Stage 2 lay the foundation for the more mature selection capabilities of Stage 3, which represents a major step forward in maturity: the agency moves from project-centric processes to a portfolio approach, evaluating potential investments by how well they support the agency’s missions, strategies, and goals. Stage 3 requires that an organization continually assess both proposed and ongoing projects as parts of a complete investment portfolio—an integrated and competing set of investment options. It focuses on establishing a consistent, well-defined perspective on the IT investment portfolio and maintaining mature, integrated selection (and reselection), control, and evaluation processes, which are to be evaluated during PIRs.
This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than to focus exclusively on the balance between the costs and benefits of individual investments. Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and the investment processes in order to better achieve strategic outcomes. At Stage 4 maturity, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough information technologies that will enable it to change and improve its business performance. Organizations implementing Stages 2 and 3 have in place the selection, control, and evaluation processes that are required by the Clinger-Cohen Act of 1996. Stages 4 and 5 define key attributes that are associated with the most capable organizations. Figure 4 shows the five ITIM stages of maturity and the critical processes associated with each stage. As defined by the model, each critical process consists of “key practices” that must be executed to implement the critical process. In order to have the capabilities to effectively manage IT investments, an agency, at a minimum, should (1) build an investment foundation by putting basic, project-level control and selection practices in place (Stage 2 capabilities) and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency (Stage 3 capabilities). These practices may be executed at various organizational levels of the agency, including at the component level.
However, overall responsibility for their success remains at the department level. Therefore, at a minimum, the department should effectively oversee component agencies’ IT investment management processes. HHS has executed 24 of the 38 key practices that the ITIM framework requires to build a foundation for IT investment management (Stage 2) and 8 of the 27 key practices required to manage investments as a portfolio (Stage 3). However, the department has only provided limited oversight of component agencies’ ITIM processes. Until HHS implements and oversees a stable investment management process throughout the department, it will lack essential management controls over all of its IT investments, and it will be unable to ensure that it is appropriately selecting, managing, and evaluating the mix of investments that will maximize returns to the organization, taking into account the appropriate level of risk. At the ITIM Stage 2 level of maturity, an organization has attained repeatable, successful IT project-level investment control processes and basic selection processes. Through these processes, the organization can identify expectation gaps early and take the appropriate steps to address them. According to the ITIM, critical processes at Stage 2 include (1) defining IT investment board operations, (2) identifying the business needs for each IT investment, (3) developing a basic process for selecting new IT proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 2 describes the purpose of each of these Stage 2 critical processes. In the federal government, the agency head and the CIO are responsible for effectively managing information technology. 
The agency head, through the department-level CIO, is responsible for providing leadership and oversight for foundational critical processes by ensuring that written policies and procedures are established, repositories of information are created that support investment decision making, resources are allocated, responsibilities are assigned, and all the activities are properly carried out where they may be most effectively executed. In a large and diverse organization such as HHS, it is especially critical that the CIO create this structure and framework to ensure that the organization is effectively managing its investments at every level. This means that the CIO must ensure that component agencies have investment management processes in place that adequately support the department’s investment management process to make certain that funds are being expended on component agency investments that will fulfill mission needs. Because of the management attention that has been given to IT investment management, the department has put in place over half of the key practices needed to establish the investment foundation. The department has satisfied all of the key practices associated with ensuring that projects and systems support organizational needs and meet users’ needs. It has satisfied most of the key practices associated with identifying and collecting investment information, selecting new proposals and reselecting ongoing investments, and instituting the department’s investment review board. However, because of its limited involvement in overseeing component agency investments, the department has not executed any of the key practices related to providing investment oversight. Table 3 summarizes the status of HHS’s critical processes for Stage 2 and shows how many key practices HHS has executed in managing its IT investments. The establishment of decision-making bodies or boards is a key component of the IT investment management process. 
At the Stage 2 level of maturity, organizations define one or more boards, provide resources to support the boards’ operations, and appoint members who have expertise in both operational and technical aspects of proposed investments. The boards should operate according to a written IT investment process guide that is tailored to the organization’s unique characteristics, thus ensuring that consistent and effective management practices are implemented across the organization. The organization selects board members to ensure that they are knowledgeable about policies and procedures for managing investments. Organizations at the Stage 2 level of maturity also take steps to ensure that executives and line managers support and carry out the decisions of the investment board. According to the ITIM, organizations should (1) use an investment management guide as an authoritative document to initiate and manage investment processes and (2) provide a comprehensive foundation for the policies and procedures that are developed for all of the other related processes. (The complete list of key practices is provided in table 4.) The department has executed 5 of the 8 key practices for this critical process. The department established an IT investment review board as its corporate-level investment board that consists of senior officials, including the CIO and the Deputy Assistant Secretaries for Budget, Finance, and Performance & Planning. The board is adequately resourced, with most support being provided by the Office of the CIO, whose responsibilities include developing and modifying the department’s criteria for selecting, controlling, and evaluating potential and existing IT investments. In addition, the CIO Council reviews the enterprisewide investments for technical soundness and provides its recommendations to the board. The Critical Partners and Business Case Quality Review Team provide additional support to the board by reviewing and scoring most of their IT investments. 
To ensure that the board’s decisions are carried out for enterprisewide investments, the ITIRB approves an accountability agreement document and business case that identify the benefits, costs, and schedule for the approved investments. The board then monitors the investments through the end of development. HHS requires the component agencies to follow a similar process in accordance with departmental policies and procedures. We verified that an accountability agreement document was signed and the business case identified performance expectations for the two enterprisewide IT investments we reviewed—the Public Key Infrastructure and Enterprise Architecture initiatives. Additionally, the board has oversight of the development and maintenance of the documented IT investment process through the CPIC Reengineering/PMT Implementation Team, which provides investment management policy change recommendations to the board for approval. Although HHS has implemented these key practices, it does not have a comprehensive organization-specific process guide to direct the operations of the investment board. While the Information Resources Management policy, guidelines, and standard operating procedures provide general guidance on the organization’s investment management process, they do not reflect the current investment management process. Moreover, they do not constitute an IT investment process guide because they do not sufficiently define the investment process. Specifically, the policies and procedures do not include information on the roles of key players such as the CIO Council, the Critical Partners, the Business Case Quality Review Team, or the component agency investment review boards. In addition, they do not identify the manner in which the investment board’s processes are to be coordinated with other key organizational plans and processes (such as the budget formulation process).
HHS has recently drafted a revised investment management policy addressing many of these weaknesses; however, it has not been finalized, and HHS officials could not provide a final issuance date. Without a comprehensive investment management process guide, the department lacks assurance that IT investment activities will be coordinated and performed in a consistent and cost-effective manner. Moreover, while HHS has established an IT investment board, the board does not have business representation (that is, mission representation) from component agencies. Instead, Chief Information Officers represent the component agencies. According to HHS’s CIO, the membership of the board is adequate for carrying out the investment activities it currently performs—primarily focusing on enterprisewide IT investments. However, because allocating resources among major IT investments may require fundamental trade-offs among a multitude of business objectives, portfolio management decisions are essentially business decisions and therefore require sufficient business representation on the board. Until the department adjusts its board membership to include business representation from component agencies, it will not have assurance that the board includes those executives who are in the best position to make the full range of decisions needed to enable the agency to meet its mission most effectively, particularly as the board begins to execute its full range of responsibilities. Finally, the HHS ITIRB is not operating according to its assigned authority and responsibility. The department’s investment management policy and the HHS ITIRB’s charter state that the board has oversight responsibility for both enterprisewide and a defined set of component agency IT investments, including projects that are high risk, are crosscutting, or require review by the Office of Management and Budget. However, the board currently oversees only enterprisewide IT investments.
According to HHS officials, the department has delegated authority to the component agencies to conduct investment reviews; however, the board does not have a mechanism in place for ensuring that component agencies are conducting such reviews in accordance with department policies and procedures. Until the board operates according to its assigned authority, it cannot ensure that component agency investments are properly aligned with the organization’s objectives or reviewed by the appropriate board. Table 4 shows the rating for each key practice required to institute the investment board. Each of the “executed” ratings shown below represents an instance in which, on the basis of the evidence provided by HHS officials, we concluded that the specific key practice was executed by the organization. Defining business needs for each IT project helps to ensure that projects and systems support an organization’s business needs and meet users’ needs. This critical process ensures that an organization’s business objectives and its IT management strategy are linked. According to the ITIM, effectively meeting business needs requires, among other things, (1) documenting business needs with stated goals and objectives; (2) identifying specific users and other beneficiaries of IT projects and systems; (3) providing adequate resources to ensure that projects and systems support the organization’s business needs and meet users’ needs; and (4) periodically evaluating the alignment of IT projects and systems with the organization’s strategic goals and objectives. (The complete list of key practices is provided in table 5.) The department has in place all of the key practices for meeting business needs. Specifically, HHS has policy and procedures that call for business needs to be identified in the business case or the portfolio management tool’s Select forms for both proposed and ongoing enterprisewide and component agency IT projects. 
Resources devoted to ensuring that IT projects and systems support the organization’s business needs and meet users’ needs include the Business Case Quality Review Team, the Critical Partners, the portfolio management tool, and detailed procedures and associated templates for developing business cases. HHS’s specific business mission, with stated goals and objectives, is defined in the HHS Strategic Plan for fiscal years 2004 through 2009. Further, HHS defines and documents business needs for both proposed and ongoing enterprisewide and component agency IT projects, and identifies users and other beneficiaries during its selection activities. In addition, according to HHS IT officials, end users participate in project management throughout the IT project’s life cycle. For the four projects we reviewed, we verified that business needs and specific users and other beneficiaries were identified and documented in the business case or in the Select forms within HHS’s portfolio management tool. In addition, end users are involved in project management throughout the life cycle of the enterprisewide investments. For example, users of HHS’s Public Key Infrastructure and Enterprise Architecture initiatives participate in project management through integrated project teams, which meet approximately once a month and are composed of representatives from the component agencies. Because the department has executed all of the key practices associated with identifying business needs, it has increased confidence that its IT projects will meet both business needs and users’ needs. Table 5 shows the rating for each key practice required to meet business needs and summarizes the evidence that supports these ratings. 
Selecting new IT proposals and reselecting ongoing investments require a well-defined and disciplined process to provide the agency’s investment boards, business units, and developers with a common understanding of the process and the cost, benefit, schedule, and risk criteria that will be used both to select new projects and to reselect ongoing projects for continued funding. According to the ITIM, this critical process requires, among other things, (1) making funding decisions for new proposals according to an established process; (2) providing adequate resources for investment selection activities; (3) using a defined selection process to select new investments and reselect ongoing investments; (4) establishing criteria for analyzing, prioritizing, and selecting new IT investments and for reselecting ongoing investments; and (5) creating a process for ensuring that the criteria change as organizational objectives change. (The complete list of key practices is provided in table 6.) HHS has executed 7 of the 10 key practices associated with selecting an investment. For example, resources devoted to selection activities include the Critical Partners, Business Case Quality Review Team, and portfolio management tool, which contains several forms for selecting IT projects and systems. HHS also has detailed procedures for using its portfolio management tool and developing business cases. The criteria for analyzing, prioritizing, selecting, and reselecting new and ongoing investments address the President’s Management Agenda, HHS strategic goals, and IT strategic goals, value, and risk. They are incorporated into the department’s portfolio management tool and are reviewed by the investment review board and adjusted within the tool annually at the beginning of each budget cycle to reflect organizational objectives. This year, HHS added an additional criterion: a quality score. 
HHS uses its annual budget formulation process to select both enterprisewide and component agency proposed and ongoing IT investments. We verified that the four projects we reviewed were reselected by the department using the annual budget formulation process. Although HHS has the above strengths, the department has not executed any of the practices associated with documenting policies and procedures. Specifically, HHS has not fully documented its process for selecting new IT proposals and reselecting ongoing IT investments. Although a number of documents address investment selection, they are not linked to provide decision makers with a clear understanding of the selection and reselection processes. In addition, they do not define the roles and responsibilities for all key players involved in these processes. Moreover, although the HHS Office of the CIO works directly with the department’s Office of the Budget, HHS does not have policies and procedures documenting the integration of funding with the process of selecting and reselecting investments. Until the department fully documents policies and procedures for selecting new IT proposals and reselecting ongoing IT investments, it will lack adequate assurance that it is consistently and objectively selecting and reselecting investments that best meet the needs and priorities of the department. Table 6 shows the rating for each key practice required to select an investment and summarizes the evidence that supports these ratings. An organization should effectively oversee its IT projects throughout all phases of their life cycles. Its investment board should observe each project’s performance and progress toward predefined cost and schedule expectations as well as each project’s anticipated benefits and risk exposure. 
This does not mean that a departmental board, such as the ITIRB, should micromanage each project to provide effective oversight; rather, it means that the departmental board should be actively involved in all IT investments and proposals that are high cost or high risk or have significant scope and duration and, at a minimum, should have a mechanism for maintaining visibility of other investments. The board should also employ early warning systems that enable it to take corrective actions at the first sign of cost, schedule, and performance slippages. According to the ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for management oversight; (2) developing and maintaining an approved management plan for each IT project; (3) making up-to-date cost and schedule data for each project available to the oversight boards; (4) having regular reviews by each investment board of each project’s performance against stated expectations; and (5) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. (The complete list of key practices is provided in table 7.) The department has not executed any of the seven key practices associated with effective project oversight, primarily because of its limited role in overseeing component agency IT investments. Specifically, while the department has documented standard operating procedures and instructional memorandums for oversight of enterprisewide IT investments, they are not comprehensive in that they do not specify the board’s responsibilities for investment oversight; procedural rules for the ITIRB operations and decision making during project oversight; or policies and procedures for overseeing component agency IT investments. 
The HHS ITIRB is currently performing regular reviews of enterprisewide IT projects and systems against stated expectations through reports that are available to decision makers on the HHS Intranet. However, the department is not regularly reviewing component agency investments that are high risk, crosscutting, and require review by the Office of Management and Budget, although the department’s policy calls for such reviews. The board also does not have a mechanism for maintaining visibility of other component agency investments. The department delegates oversight of these investments to the component agencies but believes it is nonetheless effectively overseeing component agency investments through (1) reviews of these investments as part of the annual Critical Partner and Business Case Quality reviews performed during the annual selection process and (2) the use of earned value management data. Although the annual reviews may provide insight into the status of investments, they are not frequent enough to allow for timely identification of problems. Moreover, while HHS officials told us that staff responsible for collecting earned value management data on component agency investments share significant concerns about the data with the ITIRB, they did not have formal documentation clearly supporting this assertion. In addition, formal procedures for elevating issues to the board have not been developed. In the absence of effective board oversight, HHS executives will not have the information they need to determine whether component agency projects are being developed on schedule and within budget. In addition, the department will run the risk that underperforming component agency projects will not be identified in time for corrective actions to be taken. We verified that HHS provided oversight for the two enterprisewide investments, but had delegated oversight activities for the two component agency investments we reviewed. 
Table 7 shows the rating for each key practice required to provide investment oversight and summarizes the evidence that supports these ratings. To make good IT investment decisions, an organization must be able to acquire pertinent information about each investment and store that information in a retrievable format. During this critical process, an organization identifies its IT assets and creates a comprehensive repository of investment information. This repository provides information to investment decision makers to help them evaluate the potential impacts and opportunities created by proposed or continuing investments. It can provide insights into major IT cost and management drivers and trends. The repository can take many forms and need not be centrally located, but the collection method should, at a minimum, identify each IT investment and its associated components. This critical process may be satisfied by the information contained in the organization’s current enterprise architecture, augmented by additional information—such as financial information and information on risk and benefits—that the investment board may require to ensure that informed decisions are being made. According to the ITIM, effectively managing this repository requires, among other things, (1) developing written policies and procedures for identifying and collecting the information; (2) assigning responsibilities for ensuring that the information being collected meets the needs of the investment management process; (3) identifying IT projects and systems and collecting relevant information to support decisions about them; and (4) making the information easily accessible to decision makers and others. (The complete list of key practices is provided in table 8.) HHS has executed 5 of the 6 key practices for capturing investment information. 
For example, the department has several documents that define the policies and procedures for identifying and collecting investment information in its repositories and also assign responsibility to the HHS CIO for ensuring that the information collected during project and systems identification meets the needs of the investment management process. HHS maintains a portfolio management tool, which serves as the primary repository for identifying and collecting information about both department and component agency IT projects and systems. The department’s portfolio management tool is easily accessible to decision makers at both the department and component levels, and the Office of the CIO has provided decision makers with various training manuals and guidance memorandums. In addition, the department also identifies and collects information about enterprisewide IT investments using its Intranet. Further, the department recently began collecting earned value information through spreadsheets that compare planned and actual cost and schedule information for major HHS IT investments. These repositories are easily accessible to the board members. The key practice HHS has not executed concerns the use of captured investment information: the HHS ITIRB does not yet use this information to fully support decisions about component agency investments. For example, the earned value investment data received from each component agency has not been used by the HHS ITIRB for control and evaluation decisions. According to agency officials, the department has recently begun monitoring the earned value data to identify investments that report cost and schedule variances, and these officials acknowledge a need to formalize the process for doing so. Until HHS’s decision makers use the information in the repository to fully support the investment management process, the department will be unable to effectively evaluate the impacts and opportunities created by proposed or continuing investments. 
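The earned value comparisons described above rest on standard earned value management arithmetic: work planned to date, work actually accomplished, and the cost of that work yield cost and schedule variances. The sketch below is illustrative only; the function name and all dollar figures are hypothetical and are not drawn from HHS data.

```python
# Standard earned value management (EVM) variance arithmetic:
#   cost variance     CV = EV - AC  (negative means over budget)
#   schedule variance SV = EV - PV  (negative means behind schedule)
# All names and figures here are hypothetical, for illustration only.

def evm_variances(planned_value, earned_value, actual_cost):
    """Return (cost_variance, schedule_variance) for one investment."""
    cost_variance = earned_value - actual_cost
    schedule_variance = earned_value - planned_value
    return cost_variance, schedule_variance

# Hypothetical investment: $500,000 of work planned to date,
# $450,000 of work actually accomplished, at a cost of $480,000.
cv, sv = evm_variances(planned_value=500_000,
                       earned_value=450_000,
                       actual_cost=480_000)
print(cv, sv)  # -30000 -50000: over budget and behind schedule
```

A negative variance in either column is exactly the kind of early warning signal the report says the board should act on; the weakness noted above is that no formal procedure exists for elevating such signals to the ITIRB.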
Table 8 shows the rating for each key practice required to capture investment information and summarizes the evidence that supports these ratings. Once an agency has attained Stage 2 maturity, it needs to implement critical processes for managing its investments as a portfolio (Stage 3). An IT investment portfolio is an integrated, agencywide collection of investments that are assessed and managed collectively based on common criteria. Managing investments as a portfolio is a conscious, continuous, and proactive approach to allocating limited resources among an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an agencywide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. Managing IT investments as a portfolio also allows an organization to determine its priorities and make decisions about which projects to fund and continue to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. Although investments may initially be organized into subordinate portfolios—based on, for example, business lines or life cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into this enterprise-level portfolio. According to the ITIM framework, Stage 3 maturity includes (1) defining the portfolio criteria, (2) creating the portfolio, (3) evaluating the portfolio, and (4) conducting postimplementation reviews. Table 9 summarizes the purpose of each critical process in Stage 3. HHS has executed 8 of the 27 key practices required by Stage 3. For example, the department’s core IT portfolio selection criteria, including cost, benefit, schedule, and risk, are approved by the HHS ITIRB. 
In addition, the investment board examines the mix of new and ongoing investments and their respective data and analyses to select investments to fund. However, many key practices still need to be executed before HHS can effectively manage its IT investments from a portfolio perspective. For example, HHS has not addressed any of the key practices related to evaluating the portfolio or conducting PIRs. Until HHS fully implements the critical processes associated with managing its investments as a complete portfolio, it will not have the data it needs to make informed decisions about competing investments. Table 10 summarizes the status of HHS’s critical processes for Stage 3, showing how many associated key practices it has executed. To manage IT investments effectively, an organization needs to establish rules or “portfolio selection criteria” for determining how to allocate scarce funding to existing and proposed investments. Thus, developing an IT investment portfolio requires defining appropriate cost, benefit, schedule, and risk criteria with which to evaluate individual investments in the context of all other investments. To ensure that the organization’s strategic goals, objectives, and mission will be satisfied by its investments, the criteria should have an enterprisewide perspective. Further, if an organization’s mission or business needs and strategies change, criteria for selecting investments should be reexamined and modified as appropriate. Portfolio selection criteria should be disseminated throughout the organization to ensure that decisions concerning investments are made in a consistent manner and that this critical process is institutionalized. To achieve this result, project management personnel and others should be aware of the criteria and address the criteria in funding submissions for projects. 
Resources required for this critical process typically include the time and attention of executives involved in the process, adequate funding, and supporting tools. (The complete list of key practices is provided in table 11.) The department has executed 5 of the 7 key practices for this critical process. For example, responsibility has been assigned to the HHS Lead Capital Planner for managing the development and modification of the IT portfolio selection criteria, and adequate resources have been committed for portfolio selection activities, including the Critical Partners, portfolio management tool project manager, and the Office of the CIO staff. Moreover, project management personnel and other stakeholders are aware of the portfolio selection criteria, which are embedded in the department’s portfolio management tool and also contained within policies and procedures. Finally, the HHS ITIRB approves the core IT selection criteria, including cost, benefit, schedule, and risk criteria, based on the organization’s mission, goals, strategies, and priorities. Beginning in fiscal year 2004, HHS began scoring and ranking approximately 80 percent of its IT investments against alignment, value, and risk criteria in order to determine a priority score, which is the sum of alignment, value, and risk criteria scores, weighted for relative importance. Similarly, for the fiscal year 2007 budget formulation process, HHS began collecting investment information on the business case quality, Critical Partner reviews, and cost and schedule variance to determine a quality score, which is the sum of the business case quality, Critical Partner reviews, and cost and schedule variance scores, weighted for relative importance. The HHS ITIRB evaluates and annually adjusts its portfolio selection criteria within the portfolio management tool. Despite these important steps in defining portfolio selection criteria, weaknesses remain. 
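The priority and quality scores described above are each a weighted sum of criterion scores. A minimal sketch of that arithmetic follows; the criterion names match those in this report, but the weights and scores are hypothetical, since HHS’s actual weightings are embedded in its portfolio management tool and are not reproduced here.

```python
# Priority score as described in the report: the sum of alignment,
# value, and risk criterion scores, each weighted for relative
# importance. Weights and scores below are hypothetical.

def weighted_score(scores, weights):
    """Sum of criterion scores, each multiplied by its weight."""
    return sum(scores[name] * weights[name] for name in scores)

# Hypothetical investment scored 1-10 on each criterion.
scores = {"alignment": 8, "value": 6, "risk": 7}
weights = {"alignment": 0.4, "value": 0.35, "risk": 0.25}  # sum to 1.0

priority = weighted_score(scores, weights)
print(round(priority, 2))  # 7.05
```

The quality score works the same way, with business case quality, Critical Partner review, and cost and schedule variance scores in place of alignment, value, and risk; annually adjusting the criteria amounts to revising the weights (and criterion definitions) at the start of each budget cycle.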
The department has not developed policies or procedures for modifying the portfolio selection criteria to reflect changes to HHS mission, goals, strategies, and priorities. In addition, the HHS ITIRB began reviewing the IT portfolio selection criteria this year. However, the process for modifying portfolio selection criteria is not institutionalized because it has been used only once and there are no documented policies and procedures to ensure that it will be used again. Until HHS defines and implements the practices required for defining portfolio selection criteria, it will not have the tool it needs to select investments that support its mission, organizational strategies, and business priorities. Table 11 shows the rating for each key practice required to define portfolio selection criteria and summarizes the evidence that supports these ratings. At Stage 3, organizations create a portfolio of IT investments to ensure that IT investments are analyzed according to the organization’s portfolio selection criteria and to ensure that an optimal IT investment portfolio with manageable risks and returns is selected and funded. According to ITIM, creating the portfolio requires organizations to, among other things, document policies and procedures for analyzing, selecting, and maintaining the portfolio; provide adequate resources, including people, funding, and tools for creating the portfolio; and capture the information used to select, control, and evaluate the portfolio and maintain it for future reference. In creating the portfolio, the investment board must also (1) examine the mix of new and ongoing investments, and their respective data and analyses and select investments for funding and (2) approve or modify the performance expectations for the IT investments they have selected. (The complete list of key practices is provided in table 12.) HHS has executed 3 of the 7 key practices associated with creating the portfolio. 
Beginning in fiscal year 2004, the department began to create a portfolio by using its portfolio management tool to collect cost, benefit, schedule, risk, strategic alignment, and enterprise architecture information on investments accounting for 80 percent of the dollar value of the HHS IT investment portfolio. Each component agency’s IT portfolio is displayed in priority order along with where each investment falls within the overall IT portfolio. Further, according to HHS IT officials, the agency has adequate resources for portfolio selection activities, including the Critical Partners, the portfolio management tool project manager, and the Office of the CIO staff. These officials also stated that HHS ITIRB members are knowledgeable about the process of creating a portfolio. Nevertheless, HHS has a number of significant weaknesses in the way it creates a portfolio. First, it does not have policies and procedures that sufficiently address this critical process. Although the department has policies and procedures for creating IT portfolio selection criteria, it lacks policies and procedures for using these criteria to analyze, select, and maintain the investment portfolio. Second, even though the HHS ITIRB has quarterly reviews to compare project and system performance with expectations for enterprisewide IT investments, the board is not provided with information comparing the performance of component agency investments against expectations. In addition, the board approves or modifies the performance expectations for the enterprisewide IT investments it has selected, but does not regularly approve or modify the performance expectations for component agency IT investments or ensure that this is done. Moreover, as previously mentioned, investment information has not been used to fully support control and evaluate decisions for component agency investments. 
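Determining which investments must be captured to account for 80 percent of the portfolio’s dollar value is simple cumulative arithmetic: rank investments by cost and accumulate until the threshold is met. The sketch below illustrates that calculation; the function name, investment labels, and dollar figures are hypothetical and do not represent actual HHS investments.

```python
# Which investments must the portfolio tool capture to cover a given
# share (here 80 percent) of total portfolio dollar value? Sort by
# cost, accumulate, and stop once the threshold is reached.
# All names and figures are hypothetical, for illustration only.

def covering_set(costs, threshold=0.80):
    """Smallest set of the largest investments whose combined cost
    meets `threshold` of total portfolio dollar value."""
    total = sum(costs.values())
    covered, chosen = 0.0, []
    for name, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        if covered >= threshold * total:
            break
        chosen.append(name)
        covered += cost
    return chosen

portfolio = {"A": 50, "B": 25, "C": 15, "D": 7, "E": 3}  # $ millions
print(covering_set(portfolio))  # ['A', 'B', 'C']
```

The point the report makes is the flip side of this arithmetic: the remaining 20 percent of dollar value, typically many smaller component agency investments, falls outside the tool’s detailed data collection, which is one reason portfolio-level decisions rest on incomplete information.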
Unless HHS defines and implements the practices for creating a comprehensive portfolio of IT investments, it will not be able to determine whether it has selected the mix of investments that best meets its needs considering resource and funding constraints. Table 12 shows the rating for each key practice required to create a portfolio and summarizes the evidence that supports these ratings. This critical process builds upon the Stage 2 critical process, Providing Investment Oversight, by adding the elements of portfolio performance to an organization’s investment control capacity. Compared with less mature organizations, Stage 3 organizations will have the foundation they need to control the risks faced by each investment and to deliver benefits that are linked to mission performance. In addition, a Stage 3 organization will have the benefit of performance data generated by Stage 2 processes. Executive-level oversight of risk management outcomes and incremental benefit accumulation provides the organization with increased assurance that each IT investment will achieve the desired results. (The complete list of key practices is provided in table 13.) HHS has not executed any of the seven key practices for evaluating a portfolio. It has yet to develop policies and procedures that address performance oversight from a portfolio perspective. Moreover, while the department annually reviews its portfolio as part of its selection process, it does not evaluate the investment portfolio on a continuing basis to assess its performance. Finally, the results of Providing Investment Oversight reviews from Stage 2 are important to this critical process. However, as previously mentioned, while the HHS ITIRB has oversight of enterprisewide investments, it does not regularly review a defined set of component agencies’ investments or maintain visibility of other investments. 
Although the department’s portfolio management tool can summarize performance metrics for each investment, allowing decision makers to quickly understand the status of each investment and any potential emerging problem areas, the tool is currently used only on an ad hoc basis to make portfolio oversight decisions. Defining and implementing processes to evaluate the performance of its entire portfolio would provide HHS with greater assurance that it is controlling the risks and achieving the benefits associated with the mix of investments it has selected. Table 13 shows the rating for each key practice required to evaluate the portfolio and summarizes the evidence that supports these ratings. The purpose of a PIR is to evaluate an investment after it has completed development (that is, after its transition from the implementation phase to the operations and maintenance phase) in order to validate actual investment results. This review is conducted to (1) examine differences between estimated and actual investment costs and benefits and possible ramifications for unplanned funding needs in the future and (2) extract “lessons learned” about the investment selection and control processes that can be used as the basis for management improvements. Similarly, PIRs should be conducted for investment projects that were terminated before completion, to readily identify potential management and process improvements. (The complete list of key practices is provided in table 14.) HHS has not executed the six key practices for conducting PIRs. Although its policy calls for postimplementation reviews of IT investments that have recently completed implementation of the entire investment or a significant phase of the investment, the department does not have specific procedures for conducting such reviews, including specifying who conducts and participates in the PIR, what information is presented in a PIR, or how results are to be disseminated to decision makers. 
To date, HHS has conducted closeout reviews of two enterprisewide investments following their implementation; however, while these reports do cover investment cost expectations, they cannot be considered PIRs because the reports do not address general conclusions, lessons learned, or schedule deviations. Unless PIRs are conducted on a regular basis, HHS will not be able to effectively evaluate the results of its IT investments to determine whether continuation, modification, or termination of an IT investment would be necessary in order to meet stated HHS mission objectives. Table 14 shows the rating for each key practice required to conduct PIRs and summarizes the evidence that supports these ratings. The ability of a department-level CIO to effectively oversee IT investment management processes throughout the agency depends on the existence of appropriate management structures with adequate authorities and sufficient guidance. Under the Clinger-Cohen Act of 1996, the CIO of each agency is responsible for effectively managing all of the agency’s IT resources. To comply with the act, HHS designates its CIO to be responsible for ensuring that the component agencies are defining and implementing effective investment management processes that are appropriately aligned with the department’s processes. Although each component agency has staff responsible for gathering, maintaining, and analyzing IT investment information, the HHS Office of the CIO has the responsibility to define and implement overall HHS IT investment management practices, and monitor component agency investment management practices to ensure a cohesive departmental process and the capability exists to carry out the process. In accordance with this, the department’s investment management policies and guidelines state that the component agencies are to establish and manage investment management processes and governance structures that are aligned with the department’s policies and procedures. 
However, as mentioned in previous sections, the department’s investment management policies and procedures have several weaknesses. For example, HHS does not have a set of documented procedures that provide decision makers with a clear understanding of the selection and reselection process. Moreover, HHS currently has no structured mechanism in place to ensure that the component agencies are adhering to the department’s policies and procedures. According to HHS officials, the CIO has the authority to audit a component agency’s IT investment management process. However, they were unable to provide us with evidence of having performed any such audits. These officials also stated that the department’s portfolio management tool is another method that will enable HHS to oversee component-level investment management processes. However, since not all component agencies are using the portfolio management tool to individually make select, control, and evaluate decisions, its usefulness in this regard is limited. Until the department develops a mechanism for ensuring that component agencies define and implement investment management processes that align with those of the department, it runs the risk that effective processes will not be institutionalized at both the department and the component agency level. In addition, the department will be unable to ensure that it is optimizing its investments in IT and effectively assessing and managing the risks of these investments. HHS has initiated several efforts to improve its investment management process. Specifically, it has drafted a revised investment management guide that addresses the weaknesses in current guidance that we identify in this report. In addition, in February 2005, HHS incorporated capabilities into its portfolio management tool to enhance performance of control and evaluate functions. 
Specifically, the tool now has the capabilities to produce (1) scorecards that provide data for each investment in a portfolio, allowing cross-investment comparisons on the data elements collected; (2) investor maps that provide a graphical depiction of a portfolio in terms of up to six data categories, with the ability to show target and actual values; and (3) a workbook module to track the identification and resolution of issues that may arise regarding the management of an investment or set of investments.

Although HHS has initiated these efforts, they fully address only 2 of the 14 Stage 2 key practices the department did not execute. The draft investment management guidance, when finalized, will address weaknesses associated with one of the key practices for instituting the investment board by reflecting the current management process, including information on the roles of key working groups involved in the organization’s IT investment processes, and identifying the manner in which the investment board’s processes are to be coordinated with other key organizational plans and processes. The guidance will also address the integration of the funding and selection processes, a key practice the department has not executed that is associated with selecting an investment. The enhanced portfolio management tool capabilities will strengthen the department’s ability to oversee investments’ performance and position the board to perform portfolio evaluation activities, but they will not fully address any of the weaknesses we identify. HHS has not coordinated these and additional efforts that would address the weaknesses we identify in this report in a comprehensive plan that (1) specifies measurable goals, objectives, and milestones; (2) specifies needed resources; (3) assigns clear responsibility and accountability for accomplishing tasks; and (4) is approved by senior management.
We have previously reported that such a plan is instrumental in helping agencies coordinate and guide improvement efforts. Until HHS develops a plan that would allow for the systematic prioritization, sequencing, and evaluation of improvement efforts, the agency risks not being able to effectively establish the mature investment management processes that result in greater certainty about the outcomes of future IT investments.

Because of the attention that has been given to investment management, HHS has established several of the practices needed to effectively manage its investments. These practices have strengthened the department’s basic capabilities for selecting and controlling projects and begun to equip the department with the capabilities it needs to make informed decisions about competing investments. However, several significant weaknesses remain in the foundational practices needed to manage individual investments, in the portfolio-level practices needed to manage investments as a collection, and in the level of guidance and oversight provided to component agency investment management processes. These weaknesses hamper the department’s ability to ensure that it is managing the mix of investments that will maximize returns to the organization, taking into account the appropriate level of risk. Critical to HHS’s success going forward will be the development of an implementation plan that (1) is based on an assessment of strengths and weaknesses; (2) specifies measurable goals, objectives, and milestones; (3) specifies needed resources; (4) assigns clear responsibility and accountability for accomplishing tasks; and (5) is approved by senior management. Although the department has initiated improvement efforts, it has not developed a comprehensive plan to guide these and other efforts needed to improve its investment management process.
Without such a plan and procedures for implementing it, it is unlikely that the department will effectively establish a mature investment management capability. As a result, HHS will continue to be challenged in its ability to make informed and prudent investment decisions in managing its annual multibillion-dollar IT budget.

To strengthen HHS’s investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of the Department of Health and Human Services direct the Chief Information Officer to develop and implement a plan for improving the department’s IT investment management processes. The plan should address the weaknesses described in this report, beginning with those we identified in our Stage 2 analysis and continuing with those we identified in our Stage 3 analysis. The plan should, at a minimum, provide for accomplishing the following: Develop comprehensive guidance and additional supporting guidance that defines and describes the complete investment management process, unifies existing processes enterprisewide, and reflects changes in processes as they occur; and define the operations and decision-making processes of the HHS investment review board and other management entities, such as the component agencies, involved in managing IT investments. Ensure that the HHS investment review board’s membership includes business representation of its component agencies as it begins to execute its full range of responsibilities. Develop well-defined and disciplined written procedures that outline the process for selecting new IT proposals, reselecting ongoing IT investments, and integrating funding with the process of selecting an investment.
Establish a process for the investment board to regularly review and track the performance of a defined set of component agency IT systems against expectations and to take corrective actions when these expectations are not being met, and establish a mechanism for maintaining visibility into other investments. Develop and implement policies and procedures for modifying IT portfolio selection criteria. Develop policies and procedures for using the portfolio selection criteria to create its portfolio. Develop, review, and modify criteria for assessing portfolio performance at regular intervals to reflect current performance expectations. Define and implement processes for carrying out PIRs for all IT investments.

We also recommend that the HHS Secretary direct the CIO to ensure that the plan draws together ongoing efforts and additional efforts that are needed to address the weaknesses identified in this report. The plan should also (1) specify measurable goals, objectives, and milestones; (2) specify needed resources; (3) assign clear responsibility and accountability for accomplishing tasks; and (4) be approved by senior management. Finally, to improve the department’s oversight of its component agencies’ investment management processes, we recommend that the HHS Secretary direct the HHS CIO to establish a mechanism for ensuring that component agencies define and implement investment management processes that are aligned with those of the department.

The Department of Health and Human Services’ Inspector General provided written comments on a draft of this report (reprinted in app. II). In these comments, HHS generally agreed with our findings and recommendations and stated that the report represented a fair assessment of the department’s progress in IT investment management. The department added that it will leverage the report in its efforts to improve its investment management processes.
HHS expressed differing perspectives on the inclusion of component agency business representation on the investment review board and on the performance of postimplementation reviews. Specifically, regarding business representation on the board, the department commented that it used a hierarchy of investment reviews (with the first review occurring at the component agency) combined with ITIRB members representing mission-support areas, such as Finance, Acquisition, and Human Resources, to provide a structure for making business decisions regarding the department’s investments. We disagree with the department that this arrangement provides an adequate structure for managing the department’s investments. Because allocating resources among major IT investments may require fundamental trade-offs among a multitude of business objectives, portfolio management decisions are essentially business decisions and therefore require sufficient business representation on the board. CIOs and executives responsible for mission-support functions do not constitute sufficient business representation because, by virtue of their responsibilities, they are not in the best position to make business decisions. Portfolio management decisions are better made by executives with business line decision-making authority. Regarding PIRs, HHS commented that it was currently performing them informally by conducting closeout reviews of recently implemented investments and annual reviews of systems in operations and maintenance. PIRs are conducted to determine whether the cost, benefit, schedule, and risk expectations set for investments were achieved and to develop lessons learned about the investment selection and control processes that can be used as the basis for management improvements. However, neither the closeout reviews nor the reviews of systems in operations and maintenance address all of these elements.
Specifically, as we stated in our report, the closeout reviews do not address schedule deviations, determine whether the benefits were achieved, or identify lessons learned. In addition, the reviews of projects in operations and maintenance do not capture the benefits realized or identify lessons learned. Commenting on departmental-level oversight of component agency investments, HHS stated that it agrees with our recommendation to improve its oversight of component agency investments. It stated that it would use a number of mechanisms to do this, including performing audits to ensure alignment of component agencies’ processes with those of the department, using earned value management data to identify potential performance problems with most investments, and directly reviewing investments determined to be of high priority. We agree with HHS that these steps would help address some of the weaknesses in project oversight that we identify in this report.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to other interested congressional committees, the Secretary of Health and Human Services, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
The objectives of our review were to (1) assess the Department of Health and Human Services’ capabilities for managing its IT investments and (2) determine any plans HHS might have for improving those capabilities. To address our first objective, we reviewed the results of the department’s self-assessment of Stages 2 and 3 practices using our ITIM framework and validated and updated the results of the self-assessment through document reviews and interviews with officials. We reviewed written policies, procedures, guidance, and other documentation providing evidence of executed practices, including HHS’s Capital Planning and Investment Control Policy and Guidelines, standard operating procedures, portfolio management tool training manuals, and various instructional memorandums. We also reviewed HHS ITIRB meeting materials, including quarterly status reports, meeting minutes, and records of decisions. We did not assess progress in establishing the capabilities found in Stages 4 and 5 because the department acknowledged that it had not executed any of the key practices in these higher maturity stages. In addition, we conducted interviews with officials from the Office of the CIO, which is responsible for overseeing and ensuring that HHS’s IT investment management process is implemented and followed, to determine the level of oversight and guidance the department is providing to its component agencies. We also interviewed the Centers for Medicare & Medicaid Services’ Director for Investment Tracking and Assessment to determine the level of investment management guidance and oversight provided by the department. As part of our analysis, we selected two HHS enterprisewide and two component agency IT projects as case studies to verify that the critical processes and key practices were being applied.
The projects selected (1) are recognized as major systems, (2) are in different life cycle phases, (3) represent a mix of headquarters and component agency investments, (4) support different functional areas, and (5) require different levels of funding. The four projects are described below:

HHS Public Key Infrastructure—This project supports digital signatures and other public key-enabled security services; it is intended to be the underlying architecture supporting secure transmission of electronic communications, such as encrypted e-mail, by linking a digital key to a specific person and by issuing and managing digital certificates. The intent of the project is to provide an identity-proofing process that is both fast and certificate-authority neutral. It is an agencywide strategic initiative that provides security services. The project is a major enterprisewide investment and is in the operations and maintenance phase. The project has a planned completion date of July 2011 and is estimated to spend $7.7 million in fiscal year 2006.

HHS Enterprise Architecture Initiative—This initiative is to provide the overall framework for planning and managing the technology-supported information assets of HHS and to give the department the ability to identify data and process redundancies and inefficiencies in its information systems. The program’s objectives focus on developing the operational policies and support that enable the identification, analysis, and ongoing management of the business, information, and related technology architectures. It is to provide leadership, direction, and support to HHS’s component agencies in planning and implementing information systems to support required business processes. As of fiscal year 2005, the initiative is a major enterprisewide program investment and is estimated to spend $15.0 million in fiscal year 2006.
National Institutes of Health’s Electronic Research Administration—This initiative is the National Institutes of Health’s infrastructure for conducting interactive electronic transactions for the receipt, review, monitoring, and administration of grant awards to biomedical investigators worldwide. It is to provide the technology capabilities for the agency to efficiently and effectively perform grants administration functions. The system is to provide end-to-end support of the grants administration process, including receipt of applications, review and selection of grantees, financial and progress reporting, issuance of final reports and grant closeout, invention reporting, and interface with accounting systems. It is a major component agency investment and is expected to have a useful life of 13 years. The project is estimated to spend $42.1 million in fiscal year 2006.

Food and Drug Administration’s Mission Accomplishment and Regulatory Compliance Services—This program is a comprehensive redesign and reengineering of core mission-critical systems at the agency, including the Field Accomplishments and Compliance Tracking System and the Operation and Administration Support System. The first of these systems is to support the investigation, compliance tracking, and laboratory operations related to domestic operations under the agency’s purview; the second is primarily to support the review and decision-making process for products imported into the United States. Both are legacy systems that execute on client-server platforms; while currently viable, they cannot address many of the business needs because of the exponential growth in functionality on a rigid platform that was not designed to support the extent of change required. The Mission Accomplishment and Regulatory Compliance Services program is a major component agency investment and is expected to move to production in September 2007 and have a useful life of 10 years.
The project is estimated to spend $10.2 million in fiscal year 2006.

For these projects, we reviewed project management documentation, such as business cases, status reports, and meeting minutes. We also interviewed officials from the Office of the CIO for the two component agency investments and the project managers for the two HHS enterprisewide projects. We compared the evidence collected from our document reviews and interviews to the key practices in ITIM. We rated a key practice as “executed” when the agency demonstrated, by providing evidence of performance, that it had met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of the practice during the review or when we determined that there were significant weaknesses in HHS’s execution of the key practice. In addition, HHS was provided the opportunity to produce evidence for key practices rated as “not executed.”

To address our second objective, we obtained and evaluated documents showing what management actions had been taken and what initiatives had been planned by the agency. This documentation included the Policy Advisory Board charter and draft investment management policies and procedures, as well as procedures and guidance for the control and evaluate functionalities within HHS’s portfolio management tool. We also interviewed officials from the Office of the CIO to determine the efforts undertaken to improve IT investment management processes.

We conducted our work at HHS headquarters in Washington, D.C., from January through September 2005, in accordance with generally accepted government auditing standards.

In addition to the person named above, Neil Doherty, Joanne Fiorino, Sabine Paul, Nik Rapelje, Niti Tandon, and Amos Tevelow made key contributions to this report.
The Department of Health and Human Services (HHS) is one of the largest federal agencies, the nation's largest health insurer, and the largest grant-making agency in the federal government. The department manages over 300 programs that serve to improve the health and well-being of the American public and comprises several component agencies covering a wide range of activities, including conducting and sponsoring medical and social science research, guarding against the outbreak of infectious diseases, assuring the safety of food and drugs, and providing health care services and insurance. It also manages and funds a variety of information technology (IT) initiatives, ranging from those facilitating the payment of claims for Medicare and Medicaid services to those supporting health surveillance and communications. In fiscal year 2006, the department plans to spend over $5 billion on information technology, the third largest IT expenditure in the federal budget. As we agreed with Congress, our objectives were to (1) assess the department's capabilities for managing its IT investments and (2) determine any plans the department might have for improving those capabilities. To address these objectives, we analyzed documents and interviewed agency officials to (1) validate and update HHS's self-assessments of key practices in the framework and (2) evaluate HHS's plans for improving its capabilities. Because of the management attention that has been given to IT investment management, HHS has established over half of the foundational practices needed to manage its IT investments individually and about 30 percent of the key practices needed to effectively manage its portfolio of investments.
For example, HHS has implemented many of the practices required to ensure that (1) projects support business needs and meet users' requirements, (2) a well-defined and disciplined process is used to select IT investments, (3) investment information is captured in a repository for decision makers, and (4) IT portfolio selection criteria are developed and maintained. However, critical weaknesses remain in several areas. Specifically, HHS lacks (1) business representation of component agencies on its senior IT investment review board to carry out the board's full scope of responsibilities, (2) an established process for the IT investment board to regularly review a defined set of the component agencies' IT investments and maintain visibility of other investments, (3) criteria for assessing portfolio performance and regular reviews of the performance of the organization's investment portfolio, and (4) processes for conducting post-implementation reviews (PIRs) of its IT investments. The department also does not have a structured mechanism in place for ensuring that component agencies define and implement investment management processes that are aligned with those of the department. Until the department fully establishes all foundational and portfolio-level practices and establishes a mechanism to ensure that component agencies define and implement processes that are aligned with those of the department, executives cannot be assured that they are appropriately selecting, managing, and evaluating the mix of investments that will maximize returns to the organization, taking into account the appropriate level of risk.
HHS has initiated steps to improve its investment management process; however, these steps do not fully address the weaknesses we identify in this report, nor are they coordinated, along with other needed improvement efforts, into a plan that (1) is based on an assessment of strengths and weaknesses; (2) specifies measurable goals, objectives, and milestones; (3) specifies needed resources; (4) assigns clear responsibility and accountability for accomplishing tasks; and (5) is approved by senior management. Without such a plan and procedures for implementing it, the department risks being unable to effectively establish mature investment management capabilities. As a result, executives may not be able to make informed and prudent investment decisions in managing the department's annual multibillion-dollar IT budget.
Determining what is sufficient to meet the nation’s demand for infrastructure services, such as efficient and safe mobility and clean water, is not simple. The investment requirements depend on (1) the supply of service—what facilities exist, their condition and maintenance, how efficiently they are operated, and how services might be provided other than through capital spending—and (2) the demand for such services by the public, which can be influenced, in part, by the price charged for infrastructure services and the state of the economy. Infrastructure investment estimates can vary greatly depending on the extent to which such factors are considered in investment calculations. For example, the investment required to maintain or rehabilitate existing facilities could differ significantly from the investment required to meet a specified level of service. Moreover, focusing on the provision of service, rather than the condition of a structure or facility, can lead to the consideration of less costly, noncapital alternatives to meeting the demand for infrastructure. For example, to meet a specified level of service on roads, such as keeping traffic flowing at the speed limit, decisionmakers might consider changing the timing of traffic lights rather than building new lanes. Furthermore, future investment needs are not a predetermined reality and can be affected by more efficient use of existing infrastructure. For example, technological improvements can increase the efficiency of infrastructure. In addition, pricing strategies can affect the use of infrastructure: relatively higher fees can encourage users to economize on their consumption.

The seven agencies we reviewed develop information on infrastructure investment requirements because of their roles in financing and developing infrastructure. (See fig. 1.)
ARC, the Army Corps, EPA, FAA, FHWA, and FTA provide funding for transportation, water supply, and wastewater treatment infrastructure that is owned, operated, and maintained by others. GSA and the Army Corps are directly responsible for acquiring and maintaining federal office buildings and dams and flood-control structures, respectively. At the request of the Congress, EPA, FAA, FHWA, and FTA periodically prepare long-term infrastructure investment estimates. Every 5 years, ARC prepares an estimate of the cost to complete the Appalachian Development Highway System, which it finances by distributing federal funds to states within Appalachia. GSA maintains information on the investment needs for public buildings, and the Army Corps maintains information on the investment needs for water resources (inland and deep draft navigation, flood control, and shore protection), hydropower, water supply, and wastewater treatment. The investment estimates developed by the seven agencies will be funded, at least in part, by federal financing. The Army Corps’ estimate includes only the federal portion of investment. GSA’s estimate for investment in public buildings will be financed entirely with federal funds. Figure 2 shows the spending trends from fiscal years 1990 to 1999 for the seven agencies. Spending (in constant 2000 dollars) ranges from an average of $150 million per year for ARC to an average of $20.6 billion per year for FHWA. Although these seven agencies have made large investments in public infrastructure, state and local governments and the private sector play important roles in financing significant portions of some infrastructure, such as water treatment and supply and transportation. Preparing investment estimates is a capital decisionmaking activity by federal agencies.
In 1998, we identified the practices of leading government and private-sector organizations in capital decisionmaking. During that review, we found that conducting a comprehensive needs assessment is an important first step in an organization’s decisionmaking process for infrastructure because it allows an organization to (1) consider its overall mission, (2) identify the resources needed to fulfill both immediate requirements and anticipated future needs on the basis of results-oriented goals and objectives that flow from the organization’s mission, and (3) consider both capital and noncapital approaches to addressing these goals. The following leading practices relate to developing and using investment estimates: conduct a comprehensive assessment of the resources needed to meet an agency’s mission and results-oriented goals and objectives; establish a baseline inventory of existing assets, evaluate their condition, determine if they are performing as planned, and identify excess capacity; consider alternative ways to address needs, including noncapital approaches; use cost-benefit analysis as a primary method to compare alternatives and select economically justified investments; rank and select infrastructure projects for funding based on established criteria; budget infrastructure projects in useful segments; develop a long-term capital plan that defines capital asset decisions; and establish procedures to review data developed by others and use independent reviews of data and methods to further enhance the quality of estimates.

The leading practices we identified reflect requirements that the Congress and the Office of Management and Budget (OMB) have placed on federal agencies that are aimed at improving federal agencies’ capital decisionmaking practices. These requirements relate to aspects of investment estimates, such as developing cost information, measuring the benefits of proposed investments, and using investment estimates as a first step toward acquiring infrastructure.
These requirements include, for example, the Chief Financial Officers Act of 1990, which required the development of accounting and financial systems to report cost information and required that the principles used in accounting for program costs be consistent with those used in developing program budgets. In addition, the Government Performance and Results Act of 1993 (Results Act) required agencies to develop mission statements, long-range strategic goals and objectives, and annual performance plans. The Results Act emphasized identifying and measuring outcomes, including benefits. In addition, the Congress enacted the Federal Acquisition Streamlining Act of 1994 to improve the federal acquisition process. Title V of the act was designed to foster the development of (1) measurable cost, schedule, and performance goals and (2) incentives for acquisition personnel to reach these goals. To help agencies integrate and implement these and other requirements, OMB added a section to its annual budget preparation guidance (Circular A-11) requiring agencies to provide OMB with information on major capital acquisitions and to submit a capital asset plan and justification. This guidance is supplemented by OMB’s Capital Programming Guide, which provides detailed steps on planning, budgeting, acquiring, and managing infrastructure and other capital assets. The steps in OMB’s guide include the concepts covered by our 10 leading practices.

The seven agencies we reviewed produce investment estimates for water resources, hydropower, water supply, wastewater treatment, airports, highways, mass transit, and public buildings. Some estimates—for water resources, hydropower, and public buildings—are developed for federal spending; the other estimates are developed for spending from federal, state, and local sources. The estimates cannot easily be compared because they were developed using different methods, time periods, and funding sources.
A fundamental reason that the estimates were prepared differently and lack comparability is that they are developed and used for different purposes. Some agencies use the information to determine the financial resources needed to manage and/or repair their own assets, and other agencies develop estimates at the request of the Congress to provide general information to decisionmakers or to help direct funding to recipients of federal assistance. The seven agencies identified investment amounts that vary from GSA’s estimate of $4.58 billion over 1 to 5 years to repair public buildings to FHWA’s estimate of $83.4 billion each year over 20 years to preserve and improve the nation’s highways. The investment estimates are summarized in table 1.

The investment estimates cannot be easily compared or simply “added up” to produce a national estimate of all infrastructure investment needs because of differences in the methods used, time periods covered, and funding sources. For example, EPA used engineering-based approaches to develop costs for its drinking water and wastewater treatment estimates. By contrast, FHWA developed a computer model to forecast the future condition of and improvements to highway segments that uses cost-benefit analysis as the primary criterion for including improvements in its overall investment estimate. In addition, the estimates involve differing time periods. For example, FAA’s estimate of airport infrastructure investment covers 5 years, while ARC and the Army Corps produce estimates covering undefined time periods for highways and water resource projects, respectively. Some agencies prepared their estimates in constant-year dollars—ARC’s estimate is in 1995 dollars—while other agencies, such as GSA, presented their estimates in current dollars.
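The constant-dollar versus current-dollar difference alone illustrates why the figures cannot simply be added. Restating a current-dollar estimate in a common base year requires a price deflator; the sketch below uses hypothetical deflator values purely for illustration, not actual price indexes:

```python
# Illustrative sketch: restating an estimate in common base-year dollars
# using a price deflator. The deflator values below are hypothetical.

deflators = {1995: 0.90, 2000: 1.00}  # price index relative to base year 2000

def to_constant_dollars(amount, estimate_year, base_year=2000):
    """Convert an amount stated in estimate_year dollars to base_year dollars."""
    return amount * deflators[base_year] / deflators[estimate_year]

# An estimate of $8.5 billion stated in 1995 dollars, restated in 2000 dollars:
print(round(to_constant_dollars(8.5, 1995), 2))  # 9.44 (billions)
```

Only after every estimate is restated in the same base-year dollars, and adjusted for comparable time periods and funding sources, would direct comparison be meaningful.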
The estimates also include differing funding sources: estimates by the Army Corps and GSA include only the costs to the federal government, while estimates by the other five agencies include total costs to federal, state, and local organizations. Each of the seven agencies used data from various localities, states, or agency regional offices and aggregated those data to produce a national estimate for infrastructure investment. Each agency's process for developing its investment estimate is summarized below; the processes are described in detail in appendix II. The Army Corps estimated that $38 billion in federal funds was required to complete water resources and hydropower infrastructure projects already under construction as of March 30, 2001. Infrastructure projects included in this estimate were initially identified by local governments, groups, and/or private citizens, who requested assistance from the local Army Corps district office. According to an Army Corps official, regional Army Corps personnel evaluate the requests and determine both the seriousness of the problems and the need for immediate solutions. Project costs are estimated by engineers and other professionals using existing industry data. The agency also uses cost-benefit analysis to determine which projects are economically justified and would assist the agency in reaching its goals, such as environmental protection and flood mitigation. The evaluation and cost estimate are sent to the agency's headquarters, and selected projects are submitted for funding as part of the Department of Defense's annual budget. FAA estimated that $35.1 billion in federal and nonfederal funds was required for airport infrastructure from 1998 to 2002. Data for the investment estimate come primarily from airport plans, such as airport master plans and layouts, which include proposals and cost estimates for specific infrastructure projects at individual airports.
FAA officials in field offices review each project, and approved projects are entered into an FAA database. FAA officials in headquarters review the database for anomalies in the data, then add up the estimated cost of each project to produce an overall investment estimate. Because this estimate is not a spending plan, FAA has reported that it makes no attempt to prioritize the projects or determine if the benefits of specific projects would exceed their cost. This estimate is prepared and submitted to the Congress biennially, as required by statute. FTA also used local sources of data to estimate an investment of $10.8 billion to $16.0 billion per year for mass transit systems (such as buses and railcars) from 1998 to 2017, depending on whether the condition and performance of mass transit systems would be maintained or improved. The estimates cover both federal and nonfederal shares of costs. FTA used data from local urban transit agencies to determine the age and condition of mass transit infrastructure and then estimated the cost of either maintaining or improving that infrastructure. The estimate was developed with FTA's Transit Economic Requirements Model, which performed a benefit-cost analysis to determine if replacing an asset was economically justified. The model then aggregated the cost of all the infrastructure projects that were justified by benefit-cost analysis to determine the total investment estimate for the nation's mass transit systems. FTA uses this estimate to provide general support for its budget and information on changes in mass transit systems. ARC estimated that it would cost $8.5 billion from state and federal sources to complete the Appalachian Development Highway System. To develop this estimate, ARC relied on state highway officials within Appalachia, who determined the estimated cost to complete the individual highway corridors within their states that are part of the highway system.
These estimates used engineering structural criteria to estimate the cost of constructing highway corridors. The estimate included costs for project design, environmental mitigation, rights-of-way access, and construction. ARC officials provided instructions to the states for computing this estimate and reviewed the estimates by comparing the costs to the costs of similar highway projects within that state and to FHWA's data on construction costs. The costs were not adjusted for inflation. ARC then aggregated the data from each state to produce an overall estimate of the cost to complete the entire highway system. ARC uses this estimate as the basis for allocating funds appropriated for the Appalachian Development Highway System. Specifically, ARC calculates each state's percentage share of the total cost to complete the highway system and distributes funding to each state accordingly. In May 2001, GSA's data indicated that $4.58 billion in federal funds was required over the next 5 years to meet the repair and alteration needs of public buildings. GSA estimated that an additional $250 million to $300 million was required annually over the next 5 years to construct new border stations and federal office buildings, and $500 million annually was required over 5 to 7 years to construct new courthouses. Investment projects are identified by regional offices, which are expected to determine the best way to meet the agencies' space requirements. The cost data for projects with estimated costs between $10,000 and $1.99 million are developed using engineering criteria and are derived from various sources, including contractors, safety inspectors, and senior-level building management staff. Projects with estimated costs greater than $1.99 million are evaluated by headquarters officials and ranked in order of priority. GSA's cost data are used as input in determining funding priorities.
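The proportional allocation described above for ARC, where each state's funding equals its share of the total estimated cost to complete the highway system, can be sketched as follows. This is an illustrative sketch only; the state names, dollar figures, and function name are hypothetical and are not drawn from ARC's actual allocation system.

```python
# Illustrative sketch (not ARC's actual system): allocating a fixed
# appropriation to states in proportion to each state's share of the
# total estimated cost to complete the highway system.

def allocate_by_cost_share(appropriation, state_costs):
    """Return each state's funding, proportional to its remaining
    estimated cost. `state_costs` maps state -> cost to complete."""
    total = sum(state_costs.values())
    return {state: appropriation * cost / total
            for state, cost in state_costs.items()}

# Hypothetical cost-to-complete estimates, in millions of dollars.
costs = {"WV": 2000.0, "KY": 1500.0, "PA": 1000.0}
shares = allocate_by_cost_share(450.0, costs)
# WV's share is 450 * 2000/4500 = 200.0 (millions)
```

One property of this scheme is that the individual allocations always sum to the full appropriation, so no appropriated funds are left undistributed.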
EPA estimated that $150.9 billion in federal, state, and local funds was needed for capital investment in drinking water facilities between 1999 and 2018. Only costs eligible for funding under the Drinking Water State Revolving Fund were included. These costs were not adjusted for inflation. To develop the estimate, EPA surveyed all of the large water systems in the United States as well as a sample of the medium water systems. In addition, EPA conducted site visits to 599 small systems and extrapolated data from these surveys and site visits to compute the total investment estimate. The surveys and supporting cost documentation for medium and large systems were submitted to states for review and were subsequently reviewed by EPA. The agency uses the results of this estimate to allocate monies to the states for the Drinking Water State Revolving Fund based on each state's share of the total investment amount. In 1996, EPA estimated that $139.5 billion in federal and state funds was needed between 1996 and 2016 for water pollution control, primarily for capital investment in existing wastewater treatment facilities. Only costs eligible for funding under Title VI of the Clean Water Act were included in the estimate. These costs were not adjusted for inflation. EPA developed the estimate from a nationwide database of wastewater treatment facilities that is periodically updated by surveying the states. The states provided revised estimates of capital investment needs from their documented plans, which were supplemented by costs modeled by EPA when the state lacked this information. In addition, EPA modeled the costs for each state for combined sewer overflows and activities to control stormwater runoff and nonpoint sources of pollution. The Congress has used this information as one consideration in appropriating funds for capitalization grants to the states, through the Clean Water State Revolving Fund loan program.
According to EPA, the estimate is also used to assist in program planning and evaluation. In May 2000, FHWA issued investment estimates for highways for the years 1998 to 2017. These estimates ranged from $50.8 billion per year for cost-beneficial improvements that would maintain the current physical condition of highways to $83.4 billion per year for all improvements that would improve pavement condition and reduce highway users' travel costs. The estimates included both federal and nonfederal portions of funding and were in constant 1997 dollars. To determine the estimates, FHWA used data from a statistically drawn national sample of 125,000 highway segments as well as information from the states on forecasts such as travel growth. FHWA officials reviewed the data submitted by the states, looked for anomalies or unusual patterns, and asked the states to correct serious flaws and improve some data submissions. FHWA used a computer model to simulate the effects of infrastructure improvements on a sample of highway data and used a benefit-cost analysis to identify economically justified highway improvements. FHWA's estimate is used by legislative and executive branch offices to obtain general information on the nation's overall need for investment in highways. The federal agencies we reviewed all had procedures for developing their infrastructure investment estimates that reflect some of the leading practices we identified, although some agencies followed more leading practices than others. However, following the leading practices does not ensure a quality investment estimate, and each estimate had limitations associated with the quality of the data used in developing it. The strengths and limitations of each investment estimate are summarized in appendix II. Correcting such limitations will improve the quality and reliability of the agencies' investment estimates. None of the agencies we reviewed had procedures for all eight of the leading practices.
Not following a leading practice does not necessarily represent a deficiency on the part of an agency because, in many cases, when these practices are not applied by the federal agency, they are implemented at the state or local level. For example, for EPA's drinking water investments, six of the eight practices are undertaken at the local or state level, according to agency officials. The Army Corps had the highest conformance to the leading practices, with procedures that reflected six of the eight practices, such as establishing an inventory of assets and their condition and using cost-benefit analysis to select among investment alternatives. Among the seven agencies, FHWA and FTA came closest to conducting comprehensive assessments of the investments needed to meet results-oriented agency goals: the estimates were results oriented, focusing on the amounts needed to maintain or improve the performance of highways and transit systems, but they did not consider alternative, noncapital ways to address investment needs. The remaining five agencies developed estimates that are summations of the costs of projects eligible to receive federal funding or projects identified by the Congress and others, rather than comprehensive estimates of investments needed to achieve outcomes. All seven agencies had procedures that called for reviewing data developed by states and others, and four agencies considered alternative, noncapital ways to address unmet investment requirements. By comparison, the agencies were less likely to follow practices such as developing a long-term capital plan, using cost-benefit analysis as the primary method to compare alternative investments, ranking and selecting projects for funding based on established criteria, and budgeting for projects in useful segments. Figure 3 shows each agency's level of conformance to the leading practices.
An important first practice of leading organizations is to conduct a comprehensive assessment or analysis of program requirements by identifying and documenting the resources needed to meet the organization's results-oriented goals and objectives that flow from the organization's mission. This type of assessment is results oriented in that it determines what is needed to obtain specific outcomes—such as improved mobility on highways or reduced flight delays at airports—rather than identifying the resources needed on a project-by-project basis. Furthermore, placing the focus on results drives an organization to consider alternative, noncapital ways to fulfill program requirements. Until recently, agencies were not required to relate their planned infrastructure spending to their missions and goals, so evaluating these plans presented a challenge to agencies and the Congress. This situation changed with the enactment of the Government Performance and Results Act of 1993 and corresponding revisions to OMB Circular A-11. Since then, federal agencies—including the seven we reviewed—have been required to develop mission statements, long-range strategic goals and objectives, and annual performance plans and to link annual performance plans to capital planning efforts. The benefit of conducting a needs assessment linked to achieving objectives is that managers will be able to determine what is needed to obtain specific outcomes rather than what is needed to maintain or expand existing capital stock. Although each agency we reviewed prepared estimates directly related to its mission, no agency prepared a comprehensive assessment of the resources (and strategies) needed to achieve mission-focused outcomes. For example, ARC's highway construction estimate is directly related to its mission, which is to enhance economic development in Appalachia.
However, ARC’s investment estimate is a compilation of cost estimates to construct specific highway corridors, rather than a comprehensive determination of the resources needed to meet its mission. The investment estimates by FTA and FHWA come closest to being comprehensive assessments of resources needed to meet results-oriented goals. Both agencies focus on the resources needed to achieve specific outcomes—maintaining or improving the performance of the nation’s mass transit systems and highways. Performance includes factors related to the quality of service such as congestion on highways and waiting times and reliability of transit service. FTA examines how these outcomes could be achieved by maintaining or improving existing transit facilities and assets and by constructing new systems to meet forecasted capacity needs. FHWA’s estimate models improvements to maintain or improve existing highways—it excludes new construction. However, these estimates do not comprehensively consider alternatives to meeting investment needs—for example, neither estimate considers alternative, noncapital ways to address investment needs. The remaining agencies do not prepare estimates to achieve outcomes. Rather, they prepare investment estimates that are summations of projects’ costs: projects eligible to receive federal funding (EPA and FAA) and projects identified by others, including the Congress, local communities, and other federal agencies (ARC, the Army Corps, and GSA). Leading organizations establish an inventory of current assets and their condition and determine if the assets are performing as planned. By routinely assessing the condition of assets and facilities, decisionmakers can evaluate the capabilities of current assets and plan for their replacement. In addition, OMB’s Capital Programming Guide instructs agencies to evaluate the capacity of their existing assets for major programs, to determine if they are performing as planned. 
OMB’s instructions cover assets funded by federal grants for capital investment as well as those owned by federal agencies. Inventory information on infrastructure assets—including their condition and performance—can assist decisionmakers in identifying excess infrastructure capacity that is draining its resources. This is particularly important for federal buildings and facilities. For example, in 1998, the National Research Council reported that the number of excess federal facilities appeared to be increasing as agencies realigned their missions in response to changing circumstances. Of the agencies we reviewed, the Army Corps, FTA, and GSA maintain inventory and evaluation information on assets. For example, the Army Corps collects information on the condition of equipment at hydropower plants, particularly turbines, and uses this information to determine repair and rehabilitation needs. GSA maintains an inventory that identifies the location, type, and availability of its buildings. According to agency officials, GSA maintains separate information on the condition of these buildings. In addition, GSA has programs that oversee the disposal of excess and surplus real property. In contrast, ARC, EPA, FAA, FHWA do not maintain inventories, but some of them rely on inventories kept by state or local agencies. For example, states maintain highway inventories and provide input to FHWA’s investment estimate. According to EPA officials, local communities maintain inventories of their water and wastewater infrastructure. Over the next few years, new financial reporting standards released by the Government Accounting Standards Board will require the financial statements of state and local governments to disclose information on capital infrastructure assets, such as their physical condition. 
Leading organizations consider a wide range of alternative approaches to satisfy their needs, including noncapital alternatives, before choosing to purchase or construct facilities or other capital assets. OMB incorporated this practice in its Capital Programming Guide, which suggests that federal agencies select alternatives to acquiring new capital assets to achieve the same programmatic goals whenever practicable and more cost beneficial. OMB also suggests that agencies consider options such as meeting objectives through regulation or user fees, using human capital rather than capital assets, and applying grants or other means beyond a direct service provision supported by capital assets. Officials at the Army Corps, EPA (wastewater), FAA, and GSA indicated that their agencies make efforts to identify noncapital solutions to some of their investment needs and, where feasible, to implement those rather than acquiring new capital assets. However, it is not clear how routinely agencies follow this practice or, in some cases, what value is added by it. For example, according to EPA officials, decisions on pursuing noncapital ways to address infrastructure needs are often made at the local level. For instance, a public water system may choose to implement a water conservation plan as an alternative to adding storage and treatment capacity to a system. Some leading organizations use cost-benefit analysis as a tool to ensure that the organization's investment will obtain the greatest benefits for the least cost. A cost-benefit analysis, which OMB suggests for federal agencies, compares the costs and benefits of alternative investments in order to identify those investments that are economically justified (greatest net benefits) and achieve agency goals at the least cost.
The types of analysis can range from a complete cost-benefit analysis—which includes full life-cycle costs, estimating and discounting cash flows, and determining the return on the investment based on a specified discount rate—to an analysis that compares alternatives and recommends the most cost-effective (least-cost) option for achieving a specific goal. Three agencies we reviewed—the Army Corps, FHWA, and FTA—conduct cost-benefit analyses of proposed projects and use the results as a main factor in developing their investment estimates. For example, FHWA's computer model, which is used to determine future investment requirements, simulates the effects of infrastructure improvements for highway segments and compares the relative benefits and costs associated with alternative improvement options. Only improvements for which the benefits exceed the costs are included in the overall estimate. According to EPA officials, cost-benefit analyses are done at the local level for drinking water and wastewater investment, as utility managers consider projects needed for public health and water quality purposes. In addition, according to an ARC official, states conduct cost-benefit analyses to help determine the routes of new highways. Leading organizations have defined processes for ranking potential infrastructure investments in order to find those that are the most cost-effective for achieving organizational goals over the long term, and for selecting and budgeting those projects for full up-front funding or funding in useful segments. The organizations implement these practices by establishing a framework for reviewing and approving decisions concerning infrastructure and other capital assets, ranking and selecting projects on the basis of established criteria and technical analyses, and preparing long-term plans for infrastructure and capital development.
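The kind of benefit-cost screening described above—discounting a project's future benefits at a specified rate and counting only projects whose discounted benefits exceed their costs toward the overall estimate—can be sketched as follows. This is a minimal illustration, not any agency's actual model; the discount rate, project figures, and function names are all hypothetical.

```python
# Illustrative sketch (not any agency's actual model): screening candidate
# improvements by comparing the present value of their benefits with their
# costs, and summing only the economically justified projects.

def present_value(cash_flows, rate):
    """Discount a list of annual amounts (years 1, 2, ...) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def justified_total(projects, rate=0.07):
    """Sum the cost of projects whose discounted benefits exceed their cost."""
    return sum(p["cost"] for p in projects
               if present_value(p["annual_benefits"], rate) > p["cost"])

# Hypothetical candidates: up-front cost, then benefits over three years.
candidates = [
    {"cost": 100.0, "annual_benefits": [50.0, 50.0, 50.0]},  # justified
    {"cost": 100.0, "annual_benefits": [20.0, 20.0, 20.0]},  # not justified
]
# Only the first project passes the screen, so the total estimate is 100.0
```

The same structure accommodates a least-cost comparison instead: rather than screening on net benefits, one would compare the discounted costs of alternatives that achieve the same goal and keep the cheapest.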
OMB’s guidance to federal agencies on the ranking and selecting of infrastructure investments advises them to consider the availability and affordability of the investment, and whether the costs and benefits of the new asset will merit their inclusion in the agency’s portfolio of proposed assets that are considered for funding. For the agencies that we reviewed, the Army Corps, FAA, and GSA have processes in place to rank and select investment projects for funding. For example, GSA staff assess the merits of proposed projects with the aid of computer-based software that uses five weighted criteria—including economic return, project risk, and project urgency—to rank projects that are competing for funding. In some cases—ARC, EPA, FHWA, and some FTA projects—state, local or other federal entities are responsible for determining which investment projects to fund. For example, officials with ARC and EPA told us that capital projects funded by their agencies are ranked and selected by the state agency or entity in charge of a particular project. Hence, while ARC provides funding for highways, the state departments of transportation prioritize and rank the highway investment needs for their particular state. Similarly, individual states rank drinking water projects that are funded through EPA based on a priority system that focuses on public health, compliance, and the economic needs of the community. FHWA and FTA projects that are funded by formula grants are also prioritized at the state or local level. A strategy that has proven useful to organizations in dealing with the problems posed by full funding in a capped budget environment is to budget for projects in useful segments. This means that when a decision has been made to undertake a specific capital project, funding sufficient to complete a useful segment of the project is provided in advance. 
OMB has defined a useful segment as a component that either (1) provides information that allows the agency to plan the capital project, develop the design, and assess the benefits, costs, and risks before proceeding to full acquisition (or canceling the acquisition) or (2) results in a useful asset for which the benefits exceed the costs even if no further funding is appropriated. For the agencies we reviewed, investment estimates, particularly those that involve the construction or rehabilitation of an asset, are often based on the full cost of projects. In two cases—FTA and GSA projects that exceed a dollar threshold—the projects are funded based on their full costs and the funds are spent over a period of years. However, funding for other federal agencies' investment projects is often provided for only part of the estimated cost or part of a usable asset—a part that would not be usable if no further funding were provided. Such incremental funding is usually sufficient to cover obligations estimated to be incurred in one fiscal year. Incrementally funding infrastructure projects could affect the quality and reliability of investment estimates if the full estimated costs of projects are not made apparent at the time that initial funding decisions are made. For example, most of the Army Corps' multiyear water resource projects are funded one phase at a time. For instance, in fiscal year 1986, the Army Corps estimated the federal share of work it intended to do on a water resources project in Petersburg, West Virginia, at $14 million. The estimate did not include over $600,000 appropriated to the Army Corps between fiscal years 1986 and 1989 to study the proposed project. The agency requested and received funds each fiscal year for various phases of the work until fiscal year 1997, by which time the total federal share of the work had increased to $20.4 million, due to inflation and cost overruns.
As another example, ARC officials told us that prior to fiscal year 1999, it had been difficult for the agency to develop realistic estimates of the cost of completing the highways under its jurisdiction because funding was limited and was only sufficient to construct a few sections each year. Although ARC's funding was revised in fiscal year 1999 to guarantee a minimum level of funds each year, the amount does not fully fund the states' highway investment estimates. Leading organizations use capital plans, which generally cover multiyear periods, to establish priorities for implementing organizational goals and objectives and to manage resources and debts over the long term. The capital plans are updated either annually or biennially, depending on the changing needs of the organizations or, in the case of federal agencies, legislative and/or executive requirements. Developing a long-term capital plan enables an organization to review and refine a proposed project's scope and cost estimates over several years, which helps to reduce cost overruns. While out-year cost estimates are preliminary, they help to provide decisionmakers with an overall sense of a project's funding needs. As a project moves closer to the year of implementation, its scope becomes more clearly defined, and its cost estimates can be refined to more accurately reflect actual project costs. Among the agencies we reviewed, the Army Corps prepares a long-term capital plan to document specifically planned projects, plan for resource use over the long term, and establish priorities for implementation. FAA prepares a long-term plan that is an aggregate of local airport plans; priorities for implementation are established during the annual budget process and are not part of the long-term capital plan. In the case of GSA, its long-term capital plan for courthouse construction is prepared by the Judicial Conference of the United States.
However, the Conference’s role in this process is limited because it does not have independent authority to lease, construct, plan, or design space. In addition, we reported in 2000 that GSA lacks a multiyear capital plan for repairs and alterations. For the other agencies we reviewed, capital plans may be developed at the state or local levels or by other federal entities. For example, in the case of ARC and FHWA, the states are responsible for developing long-term capital plans for their highway and other transportation needs. According to EPA officials, local water and wastewater utilities develop capital improvement plans for infrastructure needs, the results of which are used by EPA in developing its estimates. The agencies we reviewed use data from a variety of sources, including states, municipalities, and contractors, to determine how much it will cost to acquire, construct, repair, or maintain federal and public infrastructure. The quality of the cost estimates prepared by federal agencies depends heavily on the quality of this data. By reviewing capital investment data prepared by others, agencies can enhance the quality of their investment estimates. In addition, an independent review of the data and methods used to develop the estimates can further enhance quality and help ensure that investment decisions are supported by quality information. All agencies we reviewed have procedures for reviewing the data provided by outside sources. FHWA also had independent reviews to critique and refine the methods used to produce the estimate. For example, FHWA’s computer model for developing its estimate was reviewed by transportation and economic experts to both assess and improve it. In June 1999, the experts found that FHWA has strengthened the model over time and that recent refinements have increased its applicability and credibility. In addition, FTA has under way a review of its methodology for determining transit investment estimates. 
Nonetheless, officials at ARC, EPA, FAA, FHWA, FTA, and GSA acknowledged that the data used to develop their investment estimates might not have been sufficiently comprehensive or accurate. For example, EPA reported that its most recent investment estimates for drinking water supply, issued in February 2001, were derived from a 1999 nationwide survey of the documented needs of community water systems. However, EPA officials stated that the estimates might understate water supply needs because some water systems submitted cost estimates covering a 2- to 5-year period, rather than the 20-year period requested by EPA. Inaccurate data and assumptions can affect the quality of investment estimates. For example, the National Academy of Sciences found that flawed data created unsound economic assumptions in the Army Corps' draft feasibility study for the navigation system infrastructure on the Upper Mississippi River and Illinois Waterway. This resulted in inadequate forecasts of future events, such as the level of barge shipping rates and grain demand, which compromised the integrity of the analysis and led to an overstatement of the level of investment. Because this project is not yet under construction, it is not part of the Army Corps' $38 billion investment estimate. In addition to producing inaccurate estimates of investment needs, erroneous data can affect agencies' assessments of repair and maintenance needs for existing infrastructure. For example, in March 2000, we reported that GSA's database of needed repairs and alterations had numerous problems: some repairs were not included in the database, some repairs that were included were already in progress or completed, some data were reported incorrectly, and some cost estimates for repairs were not current. Since that review, GSA has taken steps to improve the quality of the data used to manage its inventory of buildings.
Some perspective is called for in reviewing the investment estimates by the seven agencies. First, for the most part, these investment estimates are totals for the entire infrastructure network—involving all levels of government and the private sector. The federal government's role in financing these amounts should be recognized; in some cases, this role might be small compared with that of other levels of government or the private sector. Second, these investment estimates can change significantly over time with changes in the efficiency of delivering infrastructure services or pricing strategies that alter the demand for services. For example, the consolidation of smaller water systems or the introduction of user charges can reduce the need to expand or replace infrastructure. Third, these investment estimates focus on the condition of facilities rather than the performance outcomes that can be expected from the investments. The passage of the Results Act signaled a shift in federal focus from inputs (such as the condition of highways and airports) to outcomes (such as improved mobility). In the infrastructure area, we caution against relying on measures of need based primarily on the condition of facilities and instead suggest comparing the costs and benefits of alternative approaches for reaching outcomes, including noncapital alternatives. We provided a draft of this report to ARC, EPA, the Department of Transportation (DOT), GSA, and the Department of Defense (DOD) for review and comment. EPA said that the report clearly distinguishes between federal agencies that directly invest in infrastructure and agencies, such as EPA, that manage programs to fund infrastructure. GSA stated that the database used to derive the estimate for public building repair and alteration costs is continually changing as work items are tracked from identification to completion and that the database does not represent investment needs; rather, it provides input to decisions that determine funding priorities.
We revised this report to indicate that the data are used as input to funding decisions. GSA also stated that the draft report did not acknowledge that the agency uses cost-benefit analysis as one criterion in making investment decisions and that GSA has made progress in improving the accuracy of its data. We did not make any changes to this report based on these comments because both items were already included. Written comments by EPA and GSA and our responses to GSA’s comments appear in appendices III and IV. ARC, EPA, DOT, and GSA provided technical clarifications, which we included in this report where appropriate. DOD had no comments on this report. We conducted our review between December 2000 and July 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of the Departments of Transportation and Defense; the Administrator, EPA; the Commissioners of GSA and ARC; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Key contacts and major contributors to this report are listed in appendix V. Our report focuses on infrastructure investment estimates compiled by six federal agencies—the U.S. Army Corps of Engineers, Environmental Protection Agency (EPA), Federal Aviation Administration (FAA), Federal Highway Administration (FHWA), Federal Transit Administration (FTA), and General Services Administration (GSA)—and the Appalachian Regional Commission (ARC). For these agencies and selected types of infrastructure (see table 2), we addressed the following objectives: (1) What are the agencies’ estimates for infrastructure investment and how do the estimates compare in terms of how they are developed and used?
(2) To what extent do the agencies’ procedures for developing the estimates embody practices of leading government and private-sector organizations? To identify agencies’ infrastructure investment estimates, we obtained and analyzed the most recent estimates reported by ARC, EPA, FAA, FHWA, and FTA. For the Army Corps, we used estimates for water resources and hydropower that the agency prepared in March 2001. The Army Corps does not develop investment estimates for water supply and treatment. For GSA, we obtained information from the Inventory Reporting Information System (IRIS)—a computerized database of information on building repairs and alterations. According to GSA database managers, the data we used were representative of the repair and alteration needs contained in IRIS as of May 2, 2001. GSA’s estimate for building construction was developed by GSA staff for the building priorities identified by the Judicial Conference of the United States and other federal agencies. We did not independently verify the agencies’ investment estimates, but we did rely on past reviews by us and others that examined the soundness and completeness of the methodology and/or data used to develop the estimates. We incorporated findings from these reviews as appropriate. To obtain information on the procedures agencies used to develop these investment estimates and how agencies used the estimates, we interviewed officials from ARC; the Army Corps’ Planning and Policy Division, and Programs, Formulation and Evaluation Branch; EPA’s Office of Ground Water and Drinking Water and Office of Wastewater Management; the Department of Transportation’s FAA, FHWA, and FTA; and GSA’s Office of Portfolio Management. We reviewed agencies’ documentation of the procedures used to develop the estimates, but we did not verify whether these procedures were followed. We also relied on our past reviews of FAA, FHWA, and GSA for information on how these agencies develop and use the investment estimates. 
To accomplish our second objective, we used leading practices contained in our report Executive Guide: Leading Practices in Capital Decision-Making. We also reviewed relevant laws—the Government Performance and Results Act of 1993, the Federal Acquisition Streamlining Act of 1994, and the Clinger-Cohen Act of 1996—as well as Executive Order 12893 (Jan. 26, 1994) and related guidance issued by the Office of Management and Budget (OMB), including OMB Circular A-11 and OMB’s Capital Programming Guide. In addition, we reviewed the Statement of Federal Financial Accounting Standards, No. 6, Accounting for Property, Plant, and Equipment. We also reviewed reports by the U.S. Advisory Commission on Intergovernmental Relations, the National Council on Public Works Improvement, the President’s Commission to Study Capital Budgeting, and the National Academy of Sciences. We compared the procedures used by each agency to develop infrastructure investment estimates with the leading practices, which are listed in figure 4. The first seven practices were identified in our executive guide. The eighth practice—establish procedures to review data developed by others and use independent reviews of data and methods to further enhance the quality of estimates—was identified as a result of information collected during this review. We found that each estimate relied to some extent on data provided by others, such as the states. In past reviews, we have noted problems with the consistency of data collected from states and other sources. For example, DOT collects information on pavement condition from the states. Our review of the statistic used to indicate pavement condition demonstrated that states reported no information on 7 percent of the miles on the National Highway System, varied in their approaches to measuring and reporting the statistics, and did not uniformly follow DOT’s guidance for making these measurements.
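The data-review practice described above can be sketched as a simple screening pass over state-reported records. This is an illustrative example only: the record layout, field names, and acceptable value range are hypothetical and are not drawn from DOT's actual Highway Performance Monitoring System.

```python
# Illustrative data-quality screen for state-reported condition data.
# The record layout, field names, and acceptable range are hypothetical;
# they are not drawn from DOT's actual reporting systems.

def screen_records(records, lo=30.0, hi=400.0):
    """Split records into usable, missing, and out-of-range groups."""
    usable, missing, out_of_range = [], [], []
    for rec in records:
        value = rec.get("roughness")
        if value is None:
            missing.append(rec)          # statistic not reported
        elif not (lo <= value <= hi):
            out_of_range.append(rec)     # implausible measurement
        else:
            usable.append(rec)
    return usable, missing, out_of_range

reports = [
    {"state": "A", "miles": 120, "roughness": 95.0},
    {"state": "B", "miles": 80,  "roughness": None},   # no data reported
    {"state": "C", "miles": 60,  "roughness": 999.0},  # out of range
]

usable, missing, bad = screen_records(reports)
total_miles = sum(r["miles"] for r in reports)
unreported_share = sum(r["miles"] for r in missing) / total_miles
print(len(usable), len(missing), len(bad))  # 1 1 1
print(round(unreported_share, 2))           # fraction of miles with no data
```

A screen like this catches both of the problems noted above: mileage for which no statistic was reported at all, and values implausible enough to suggest inconsistent measurement approaches.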
The independent review of data and analytical methods can enhance the quality of estimates. For example, given the uncertainty associated with predicting the future impacts of regulatory alternatives, we recommended rigorous and independent peer review to enhance the analyses. Furthermore, others have noted that FHWA has continuously improved its model for estimating investment needs over time, and FHWA has used an independent review of the model to help make improvements. In 1997, the Appalachian Regional Commission (ARC) estimated that it would cost $8.5 billion (in current 1995 dollars) to complete the Appalachian Development Highway System (ADHS), a 3,025-mile system of highways that is designed to bring economic development to Appalachia. The amount is for initial construction only—it does not include maintenance, retrofits, or safety improvements to completed segments of the highway system. According to ARC officials, this estimate is probably understated due to the limited amount of detailed information available in 1997 and because the estimate was prepared before obtaining public input or identifying and addressing environmental or historic preservation concerns about specific highway corridors. ARC plans to issue an updated estimate in 2002. The major strengths and limitations of ARC’s estimate are summarized in figure 5. We have not done prior work related to ARC’s investment estimate or the data used for the estimate. The amount of information we present concerning the estimate does not imply that it is better or worse than others. To produce an estimate for the highway system, each of the 13 states within Appalachia estimated the cost to complete the system within its borders, and ARC aggregated the estimates. ARC and the Federal Highway Administration (FHWA) distributed an instruction manual to each state that detailed the methods and criteria for arriving at the estimate.
Each state’s department of transportation, in conjunction with the local FHWA office, then prepared a detailed estimate of the cost to complete the unfinished portions of the highway system within its state. The states produced these cost estimates using preliminary or final plans, specifications, and estimates to the extent they were available. At a minimum, ARC’s instructions indicated that the states should have preliminary layouts of the proposed road and all major structures and interchanges so that reasonably accurate estimates could be made for items such as construction and paving. In addition, qualified appraisers were used to help determine the cost of rights-of-way and any relocation expenses. ARC and FHWA reviewed states’ estimates to ensure uniformity and accuracy. They assessed the reasonableness of the cost estimates by comparing them to the costs of similar highway projects within the state and FHWA’s data on construction costs. In addition, major changes to the scope and location of highways and the amount of the estimate had to be reviewed and approved by ARC, according to agency officials. ARC then totaled the estimates from each state and calculated the total cost to complete the highway system. Many of the estimates were made before the highway segments had undergone the planning process mandated by the National Environmental Policy Act and, therefore, may be understated. This planning process includes obtaining input from the public, federal and state agencies, and historical societies and assessing any environmental or historic preservation concerns. Some states will not go through this planning process until a particular highway corridor is the next construction project. As a result, many of the highway estimates that ARC relied upon were made before this planning process occurred.
As states go through this process, construction costs can rise dramatically if new concerns, such as environmental issues and historic preservation, result in changes to highway routes or even legal cases to determine the routes. ARC uses the estimate to distribute funding made available through the Highway Trust Fund to each state based on each state’s percentage share of the remaining highway system. For example, according to the latest estimate, 15.3 percent of the cost to complete the highway system is for highways within West Virginia. As a result, ARC gives 15.3 percent of its annually appropriated monies for the highway system to West Virginia. Each state sets its priorities for completing the highway system within its state with the funds received from ARC. Appalachian Development Highway System: 1997 Cost to Complete Report, Appalachian Regional Commission, Aug. 1997. Appalachian Development Highways: Economic Impact Studies, Wilbur Smith Associates, July 1998. Instruction Manual for Preparation and Submission: 1997 Estimate of Cost to Complete the Appalachian Development Highway System, Appalachian Regional Commission, Sept. 1996. The U.S. Army Corps of Engineers estimated that, as of March 30, 2001, it had about $38 billion in unmet water resources (inland and deep draft navigation, flood control, and shore protection) and hydropower infrastructure investment requirements for its civil works program. This estimate includes only projects that are already under construction. Of that amount, about $37 billion is for the construction of new water resource projects, $217 million is for the major rehabilitation of water resource projects, $400 million is for the major rehabilitation of hydropower plants, and $182 million is for other work at hydropower plants.
In addition to the $38 billion, the Army Corps estimated that in fiscal year 2002, it would require $835 million to perform critical operations and maintenance work on water resources and related land projects, and $80 million in critical maintenance on the Mississippi River and tributaries’ projects. The Army Corps does not develop an investment estimate for water supply and wastewater treatment requirements. Instead, the Congress and local interests estimate water supply requirements for individual projects, and local governments are responsible for determining wastewater investment requirements. According to Army Corps officials, the amount estimated for water resources and hydropower investments might be inadequate because it does not consider increases in the cost of completing a project over time due to changing economic conditions. Those officials stated that it takes an average of 12 years for the Army Corps to complete most projects. During this time, increases in inflation and the costs of labor and material could result in higher project costs than anticipated. In addition, there are concerns that the quality of the estimate may be affected by inaccurate data and assumptions. For example, the National Academy of Sciences found that flawed data created unsound economic assumptions in the Army Corps’ draft feasibility study for the navigation system infrastructure on the Upper Mississippi River and Illinois Waterway. This resulted in inaccurate forecasts of future events, such as the level of barge shipping rates and grain demand, which compromised the integrity of the analysis and led to an overstatement of the level of investment needed. As a result of problems with this draft feasibility study, the Army Corps plans to redo it. Since this project is not yet under construction, it is not part of the Army Corps’ $38 billion investment estimate. The major strengths and limitations of the Army Corps’ estimate are summarized in figure 6.
We have not done prior work related to the Army Corps’ investment estimate or the data used for the estimate. The amount of information we present concerning the estimate does not imply that it is better or worse than others. The water resources and hydropower estimates were developed by aggregating the funds required to construct and rehabilitate specific projects. Initially, the Army Corps’ district offices submit lists of proposed water resource problems (projects) in their area—including those identified by local governments, organizations, and private citizens—to division commanders, who assess the projects based on several criteria. The criteria include (1) whether a project is in accord with the agency’s current policy; (2) the urgency of resolving the problem; (3) geographic distribution; (4) the economic viability of the recommended plan; (5) local support for the project; (6) the possibility of nonfederal participation in the project; (7) the scheduled project completion date; and (8) the impact on fish, wildlife, and/or wetlands. A prioritized list of projects is submitted to the Assistant Secretary of the Army for Civil Works, who further screens the projects based on conformance to the administration’s priorities and political sensitivity. The projects selected by the Assistant Secretary are included in the Department of Defense’s budget submission to OMB and, if approved, in the President’s budget submission to the Congress. Ultimately, the Congress determines which projects to fund. Funded projects undergo several lengthy reviews by the Army Corps, including a feasibility study to investigate and recommend solutions to water resources problems. The costs of such studies and other nonconstruction costs are not part of the Corps’ overall investment estimate. The estimate for hydropower investment is based on the Army Corps’ inspections, tests, and evaluations of hydropower equipment to determine its service condition.
If the results of those assessments show trends of unexpected deterioration, management decides whether the problem can be corrected by routine repairs or whether it is a capital need that requires rehabilitation or major repairs. The investment estimate for water supply is derived by local interests, who engage the services of architectural and/or engineering firms to determine the costs for water supply projects. The local interests can also request assistance from an Army Corps’ field office in establishing a project’s cost. The local interests, rather than the Army Corps, relay that figure to the Congress. The Army Corps uses the water resources and hydropower investment estimates to determine the financial resources needed to manage, repair, and rehabilitate the assets under its jurisdiction and for new construction. The Army Corps uses the water supply estimate to provide planning, design, and construction assistance to projects sponsored by local interests when specifically directed, authorized, and funded by the Congress. Inland Navigation System Planning: The Upper Mississippi River-Illinois Waterway, Report of the National Research Council (Feb. 28, 2001). The Congress required the Environmental Protection Agency (EPA) to survey public water systems that are eligible for assistance from the Drinking Water State Revolving Fund (DWSRF) about their capital investment needs every 4 years. EPA’s second survey, issued in February 2001, estimated that $150.9 billion (in current 1999 dollars) was needed from 1999 to 2018. Of that amount, $31.2 billion was needed to comply with existing and proposed regulations of the Safe Drinking Water Act. The major strengths and limitations of EPA’s estimate are summarized in figure 7. We have not done prior work related to EPA’s investment estimate or the data used for the estimate. The amount of information we present concerning the estimate does not imply that it is better or worse than others.
In contrast to EPA’s estimate, the Water Infrastructure Network (WIN)—a consortium of 21 industry, municipal, and nonprofit associations—estimated that investment needs for drinking water will average about $24 billion per year through 2019 (expressed in constant 1997 dollars). Of the $24 billion estimated by WIN, $19 billion is for capital investment and $5 billion represents financing costs. EPA’s estimate was derived from a nationwide survey mailed to medium and large water systems. All of the nation’s largest systems (serving more than 40,000 people) and a random sample of medium systems (serving more than 3,300 people and fewer than 40,000 people) were included in the survey. The water systems were asked to submit documentation of the purpose and scope of each project so that EPA could verify that the projects met the eligibility criteria for funding by the DWSRF. EPA also required that each project cost be supported by documentation indicating that the cost had undergone an adequate degree of professional review. The systems returned the completed questionnaires and supporting documentation to the states for review. The states had the option of providing supplemental documentation on the project or its cost. The states then forwarded the completed questionnaires to EPA for review. EPA reviewed the project components that were included in cost estimates, modeled costs for projects that lacked cost documentation, and deleted projects that were ineligible for funding under the DWSRF. The infrastructure needs of small systems were estimated through site visits to approximately 599 systems, with at least 6 systems selected in each state. EPA conducted an additional 100 site visits to assess the needs of not-for-profit noncommunity water systems. The survey was designed to provide state-level estimates of medium and large systems and national-level estimates of small systems with a precision target of 95 percent +/- 10 percent.
A precision target of 95 percent +/- 30 percent was established for the not-for-profit noncommunity water systems. The estimates, however, might understate water supply needs because some systems submitted cost estimates covering 2 to 5 years rather than the 20-year period requested by EPA. Further uncertainties exist with the estimate because the water supply survey excluded costs arising solely from population growth. EPA uses the results of the most recent survey to allocate monies from the DWSRF to the states, basing each state’s allocation on its share of the total national investment amount, with a minimum allotment of 1 percent of available funds. Each state develops a priority system for funding projects based on public health criteria specified in the 1996 Safe Drinking Water Act. In addition, EPA uses the survey as a tool for allocating the Tribal Set-Aside (up to 1.5 percent of the DWSRF annual appropriation) to American Indian and Alaskan native village water systems. Congressional Budget Office, Statement of Perry Beider before the Subcommittee on Environment and Hazardous Materials, Committee on Energy and Commerce, U.S. House of Representatives (Mar. 28, 2001). EPA Office of Water, Drinking Water Infrastructure Needs Survey: Second Report to Congress (EPA 816-R-01-004, Feb. 2001). Water Infrastructure Network, Clean & Safe Water for the 21st Century: A Renewed National Commitment to Water and Wastewater Infrastructure (undated, available from the American Water Works Association). EPA periodically reports to the Congress on the nation’s investment needs for municipal water pollution control facilities, primarily wastewater treatment facilities. In the 1996 clean water needs survey report, EPA estimated that $139.5 billion (in current 1996 dollars) was needed over the years 1996 to 2016 to satisfy water pollution control needs.
The total included $44.0 billion for wastewater treatment, $10.3 billion for upgrading existing wastewater collection systems, $21.6 billion for new sewer construction, and $44.7 billion for controlling combined sewer overflows. The investment estimate included costs for facilities used in conveyance, storage and treatment, and recycling and reclamation of municipal wastewater. In addition, the estimate included the costs for structural and nonstructural measures to develop and implement stormwater and nonpoint source pollution programs. The overall investment estimate did not include costs that were ineligible for federal assistance under Title VI of the Clean Water Act, such as house connections to sewers and costs to acquire land that is not a part of the treatment process. The estimate did not include information on private wastewater treatment facilities and those serving Indian tribes and Alaskan native villages. More recently, EPA estimated that the amount may be closer to $220 billion because some needed work probably had not been documented and reported by the states. EPA expects to submit its next clean water needs survey report to the Congress in August 2002. The major strengths and limitations of EPA’s investment estimate for wastewater facilities are summarized in figure 8. We have not done prior work related to EPA’s investment estimate or data used for the estimate. The amount of information we present concerning the estimate does not imply that it is better or worse than others. EPA maintains a database of cost and technical information on publicly owned wastewater treatment facilities, which is used to develop the investment estimate. The database included about 16,000 wastewater treatment facilities and 21,000 sewage collection systems in 1996, when the last estimate was made. The database includes information on individual facilities’ projects and programs that target documented water quality or public health problems. 
EPA periodically requests information from the states to update this database. In the 1996 clean water needs survey, the states were asked to identify projects to build or expand treatment facilities to accommodate the capacity required by the existing population over the next 20 years. EPA also requested the states to update the documentation for projects already in the database with estimated costs greater than $5 million if the documentation was dated prior to 1990. EPA reviewed all documentation submitted by the states to ensure compliance with its established criteria. Generally, documentation—such as capital improvement plans—was acceptable if it included details concerning the proposed project, such as a definition of the problem, a description of the solution, and cost estimates. If the documentation lacked cost estimates, EPA estimated the cost using models. However, the documentation provided to EPA sometimes covered only a 5-year period, not the 20-year period requested. Therefore, EPA officials believe the estimates are conservative. In addition, EPA modeled states’ costs for combined sewer overflows (releases of raw sewage from systems that convey sewage and stormwater in the same pipes) and activities to control stormwater runoff and nonpoint sources of pollution. Furthermore, EPA reported that it believes the investment estimates were understated for sanitary sewer overflows (releases of raw sewage from sanitary sewer collection systems) and that it was developing updated cost estimates separately from the 1996 clean water needs survey. According to EPA, the clean water needs survey is also used to assist the federal government and the states in program planning, policy evaluation, and program management and to inform the Congress of the magnitude of the needs. Private firms, public interest groups, and trade associations use the survey information in marketing, cost estimating, and policy formation.
EPA 1996 Clean Water Needs Survey (CWNS) Report to Congress (EPA 832-R-97-003, Sept. 1997). In 1999, the Federal Aviation Administration (FAA) submitted to the Congress its most recent investment estimate for the nation’s airports—$35.1 billion (in constant 1998 dollars) for the years 1998 to 2002. A significant portion of this estimate is for projects that will bring existing airports up to current design standards (37 percent), develop passenger terminal buildings (16 percent), or add capacity to congested airports (13 percent) at the 3,561 existing and proposed airports in the United States. The estimate only includes projects that are eligible for funding under FAA’s Airport Improvement Program. The major strengths and limitations of the estimate are summarized in figure 9. In a previous report, we reviewed the database used to develop the estimate, and we have included the results of that review in our analysis. The amount of information we present concerning the estimate does not imply that it is better or worse than others. FAA determined its investment estimate using the National Plan of Integrated Airport Systems (NPIAS) database. NPIAS includes the estimated cost of individual infrastructure investment projects requested by airports. The projects originate primarily from airport plans, including master plans. Airport officials may consider noncapital alternatives to address unmet infrastructure requirements when producing these plans. For example, officials may consider altering operational procedures or practices to allow more airplanes to use one runway instead of requesting funds for an additional runway. If noncapital alternatives do not exist, airport officials request funding for a capital project within their plans, which contain specific proposals and cost estimates for each project. FAA officials in field offices review each project within each airport plan to determine if the project is eligible and justified.
A project is eligible if it qualifies for federal funds under the Airport Improvement Program. A project is justified based on a judgmental decision by FAA district officials. For example, one airport in central Texas proposed adding four new runways to the airport, which FAA officials considered unjustified because the amount of air traffic served by the airport was insufficient to merit the additional runways. Projects and plans that are approved by FAA at the district level are then entered into the NPIAS database, along with the estimated costs. FAA officials in Washington then review the data in NPIAS and ensure that district officials have included only projects that are eligible for federal funding and are justified. FAA officials add up the estimated cost of these projects and produce an overall investment estimate. FAA submits the estimate to the Congress, as required by statute. Report to Congress: National Plan of Integrated Airport Systems (NPIAS) (1998-2002). U.S. Department of Transportation, Federal Aviation Administration (Mar. 12, 1999). Airport Development Needs: Estimating Future Costs (GAO/RCED-97-99, Apr. 7, 1997). Airport Financing: Funding Sources for Airport Development (GAO/RCED-98-71, Mar. 12, 1998). In May 2000, FHWA submitted to the Congress its most recent biennial estimate of a range of investment needs for the nation’s highways and bridges. First, it estimated that $83.4 billion per year over 20 years (1998 to 2017) in highway investment would be economically justified based on the benefits of the investment exceeding the cost. Second, it estimated that $50.8 billion per year over 20 years would be needed to maintain the current physical condition of the nation’s highways. Third, it estimated that $53.9 billion a year over 20 years would be needed to maintain the current cost to users (such as travel-time costs, vehicle-operating costs, and crash costs).
These estimates of highway investment cover all public road mileage—3.95 million miles in 1997. The major strengths and limitations of FHWA’s investment estimate are summarized in figure 10. In a June 2000 report, we reviewed FHWA’s model used to develop these estimates, and we have included the results of that review in our analysis. The amount of information we present concerning the estimate does not imply that it is better or worse than others. FHWA developed the estimate using data from a statistically drawn national-level sample of about 125,000 highway segments throughout the United States. This sample included data on a variety of highway conditions, including pavement roughness, traffic levels, and lane width. The states also provided FHWA with forecasts on such matters as travel growth. FHWA staff reviewed the data submitted by the states and looked for anomalies or unusual patterns. FHWA asked the states to correct serious flaws and improve data submission for minor flaws. Finally, FHWA division offices periodically reviewed state data collection procedures to ensure consistency among states. The corrected information was entered into FHWA’s Highway Performance Monitoring System. FHWA primarily used the Highway Economic Requirements System (HERS) model to determine future investment requirements. It assessed the current condition of the sample highway segments and then projected the future condition and performance of the segments based on expected changes in factors such as traffic volume. Based on this information, the model simulated the effects of infrastructure improvements for the highway segments and compared the relative benefits and costs associated with alternative improvement options. While FHWA’s model analyzes these sample sections individually, the model is designed to provide estimates of investment requirements valid at the national level and does not provide improvement recommendations for individual highway segments.
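The segment-by-segment benefit-cost comparison described above can be caricatured in a few lines of code. This is a deliberately simplified sketch: the segments, improvement options, and dollar figures are invented, and the actual HERS model simulates condition and performance over time rather than comparing static benefit and cost numbers.

```python
# Toy benefit-cost screen in the spirit of a segment-level needs model:
# for each sample highway segment, fund the candidate improvement with
# the highest benefit-cost ratio, provided benefits exceed costs.
# All segment names and dollar figures below are hypothetical.

segments = {
    "seg-001": [  # (improvement, cost in $M, benefit in $M)
        ("resurface", 2.0, 5.0),
        ("widen",     9.0, 8.0),   # benefits < costs: not justified
    ],
    "seg-002": [
        ("resurface", 1.5, 1.2),   # benefits < costs: not justified
        ("rebuild",   6.0, 9.0),
    ],
}

def justified_investment(segments):
    """Sum the cost of the best economically justified option per segment."""
    total = 0.0
    chosen = {}
    for seg, options in segments.items():
        viable = [(b / c, name, c) for name, c, b in options if b > c]
        if viable:
            ratio, name, cost = max(viable)  # highest benefit-cost ratio
            chosen[seg] = name
            total += cost
    return total, chosen

total, chosen = justified_investment(segments)
print(total, chosen)  # 8.0 {'seg-001': 'resurface', 'seg-002': 'rebuild'}
```

Summing the justified costs across a statistically drawn sample, with appropriate weighting, is what allows a segment-level screen like this to yield a national-level estimate even though no individual segment recommendation is intended.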
FHWA acknowledges that some HERS data, particularly emissions data, vary in quality. To reach a total estimate for highway investment requirements, FHWA supplements the results of this model with external adjustments to account for (1) classes of highways not included in either the statistical sample or the model and (2) certain types of capital investment. FHWA acknowledges that these supplemental estimates are not based on benefit-cost analysis and are less rigorous than the HERS results. The model currently does not directly consider new roads or system enhancements—improvements related primarily to safety, traffic operations, or the environment—as part of its analysis. In its most recent estimate, FHWA assumed that those types of improvements will consume the same overall percentage of highway capital investment as they have in the past. FHWA has had transportation and economic experts review the model to assess and improve it. In June 1999, the experts found that FHWA had strengthened the model over time and that recent refinements had increased its applicability and credibility. Also, FHWA staff and consultants continually look for ways to improve the model. For example, FHWA officials told us they plan to eliminate a computational shortcut for their next estimates, which they plan to issue in 2002. FHWA used this shortcut to approximate the lifetime benefits associated with an improvement. In addition, the National Cooperative Highway Research Program is reviewing FHWA’s methodology for determining its investment estimate. We found the model was reasonable despite some limitations. First, the model cannot completely reflect changes occurring among all highways in the transportation network at the same time, since the model analyzes each highway segment independently.
Second, the model cannot estimate the full range of uncertainties within which its estimates vary because it is not designed to completely quantify the uncertainties associated with its methods, assumptions, and data. In making estimates, the model relies on a variety of estimating techniques and hundreds of variables, all of which are subject to some uncertainties. For example, we have reported that pavement roughness data reported by the states to FHWA are not completely comparable, partly because the states use different devices to measure roughness. Third, FHWA uses two different approaches in compiling the estimate. The benefit-cost analysis used in the HERS model is not comparable to the analysis used to estimate investment needs for roads outside the sample and other kinds of road projects. Nevertheless, FHWA combines these estimates and characterizes both as economically justified. The highway estimate provides federal officials with a source of information for decision-making concerning investments. In particular, legislative and executive branch officials use the estimate to obtain general information on the nation’s need for infrastructure investments. Also, some groups may use the estimate in discussions about the level of federal funding for highways. 1999 Status of the Nation’s Highways, Bridges, and Transit: Conditions & Performance. U.S. Department of Transportation, Federal Highway Administration and Federal Transit Administration (May 2, 2000). Highway Infrastructure: FHWA’s Model for Estimating Highway Needs Is Generally Reasonable, Despite Limitations (GAO/RCED-00-133, June 5, 2000). In May 2000, the Federal Transit Administration (FTA) submitted its biennial estimate to the Congress on the nation’s mass transit systems, including buses, rail cars, and ferries.
The report covered 1998 to 2017 and estimated investment requirements under four scenarios, depending on whether the condition and/or the performance of existing mass transit systems was maintained or improved. FTA estimated that the average cost for these four scenarios ranged from $10.8 billion to $16.0 billion per year (in constant 1997 dollars). This estimate is based on incomplete data and imprecise predictions, which limit its usefulness. The major strengths and limitations of FTA’s investment estimates are summarized in figure 11. We have not done prior work related to FTA’s investment estimate or the data used for the estimate. The amount of information we present concerning the estimate does not imply that it is better or worse than others. FTA’s first scenario—“Maintain Conditions and Performance”—estimates the capital investment needed to maintain the average condition of mass transit assets over the 20-year period and add new capacity to maintain current vehicle usage levels as passenger travel increases. The second scenario—“Maintain Conditions and Improve Performance”—estimates the investment needed to maintain the average existing conditions of mass transit assets and to improve the service coverage and/or frequency of mass transit service. The third scenario—“Improve Conditions and Maintain Performance”—estimates the investment needed to bring the average condition for each major asset type to “good” while maintaining current vehicle usage levels as transit passenger travel increases. The fourth scenario—“Improve Conditions and Improve Performance”—estimates the investment needed to bring the average condition for each major asset type to “good” and also improve service quality by increasing the area covered by mass transit and/or increasing the frequency of mass transit service. The estimates for these four scenarios, taken from the Department of Transportation’s 1999 Conditions and Performance report, are shown in table 3.
FTA developed its investment estimates using the National Transit Asset Inventory. This inventory includes information on the age of buses and railcars and on maintenance facilities. FTA estimates the condition of buses and rail cars based on their age, using data gathered by the agency over time from surveys of the condition of vehicles. The information for this database is collected by every transit agency in an urbanized area that receives federal assistance, according to an agency official. FTA used the Transit Economic Requirements Model to determine the future infrastructure and asset needs for transit. This computerized model predicts the changes that will occur to transit infrastructure and vehicles over time and the investments needed to maintain or improve current conditions and performance of mass transit systems. To forecast needs for the condition of assets, the model includes aggregate data on the condition of assets based on a 1 to 5 scale and uses a benefit-cost analysis to determine if the benefit of replacing an asset (and thus improving its condition) outweighs the cost of the replacement. If the benefit outweighs the cost, the project is added to the final cost as reported by the model. To forecast needs for the performance of assets, FTA uses predictions of the number of future passengers developed by Metropolitan Planning Organizations (MPO). FTA uses this information to determine future capacity needs. Capacity needs are expressed as either more frequent service or a new system. The Transit Cooperative Research Program is reviewing FTA’s methodology for determining its investment estimates. According to FTA, the results of this review will be considered in developing FTA’s next estimates in 2002. Missing data and imprecise predictions limit the accuracy of the investment estimates. For example, the database lacks future travel forecasts for the New York City area. In addition, according to FTA, some MPOs submit data that vary in quality. 
According to FTA, the agency also does not have complete information on the condition of fare collection systems, stations, and maintenance facilities that are part of the mass transit systems. Finally, according to FTA, it is difficult to predict the growth in travel over time. According to FTA, the investment estimate is used to provide broad, general support for FTA’s budget and to help tie the budget to the levels of performance discussed in the estimate. The estimate provides information to FTA and the Congress on changes in the condition and performance of mass transit. Finally, the estimate serves as a baseline for performance goals mandated under the Government Performance and Results Act and identifies the performance goals FTA is likely to achieve in the future. 1999 Status of the Nation’s Highways, Bridges, and Transit: Conditions and Performance, U.S. Department of Transportation: Federal Highway Administration and Federal Transit Administration (May 2, 2000). GSA’s data indicated that, as of May 2, 2001, it would cost $4.58 billion for repairs and alterations of public buildings. This estimate included both items currently needed and future work items to be undertaken over the next 5 years. Examples of repairs and alterations include repairs to major building components, such as electrical, heating, ventilation, and air conditioning systems; fire alarm and sprinkler systems; and other fire and life safety items. In fiscal year 2001, GSA estimated that an additional $250 million to $300 million was needed annually over the following 5 years for new building construction for border stations and federal office buildings. In addition, in fiscal year 2001, the Judicial Conference estimated $500 million was needed annually over the following 5 to 7 years to construct new courthouses. The major strengths and limitations of these investment estimates are summarized in figure 12. 
In previous reports, we reviewed the database used to develop the estimate of repairs and alterations, and we have included the results of that review in our analysis. The amount of information we present concerning the estimate does not imply that it is better or worse than others. GSA develops cost estimates when it determines that repairs and alterations are needed. GSA’s overall estimate for repairs and alterations was derived from information contained in the Inventory Reporting Information System (IRIS), a database of projects. The projects are identified and entered into the database at the regional level. The projected cost data are derived from various sources, including contractors, safety inspectors, and building engineers. The database includes current and future projects. Work items in the database may be updated daily by regional office staff as new work is identified and completed work is deleted. GSA’s process for developing investment estimates for new construction projects also begins with evaluations at the regional level. GSA regional staff evaluate existing facilities, the availability of sites for new construction, and the disposition of old facilities. Using a computer model, GSA’s regional staff compare the cost of construction to the cost of leasing space. The computer model uses cost estimates based on benchmark values that are specified for locations around the country. The regional offices submit their recommendations for construction projects along with the computer analysis and other data to GSA headquarters for review. GSA identifies projects that make up its overall investment estimate as prospectus-level or nonprospectus-level: prospectus-level projects have estimated costs of $1.99 million or more, and nonprospectus-level projects have estimated costs greater than $10,000 and less than $1.99 million. GSA prioritizes prospectus-level investment projects as preparation for the annual budget process.
The regions identify proposed projects and submit the proposals, along with supporting data, to GSA headquarters for review and funding consideration. There, headquarters staff and the capital investment panel assess the merits of each proposed project and rank the projects with the aid of computer-based software called “Expert Choice.” The model uses five weighted criteria to rank the projects that are competing for funding. These criteria consider, in weighted order, (1) economic return—whether the project will generate additional revenue for the Federal Building Fund, the source of funds for GSA’s repair and alterations and construction projects; (2) project risk—whether the project will begin in the planned fiscal year and use the authorized funding; (3) project urgency—whether the project will correct building conditions that are unsafe or involve severe deterioration; (4) community planning—whether the project will protect the building’s historic significance and positively impact the local community; and (5) customer urgency—whether the project will have a positive impact on the tenant agencies’ operations or mission. GSA officials, however, stated that the Expert Choice model is not the sole basis for decisions; the model is not intended to replace the professional judgment and knowledge of staff. During the assessment process, each project is assigned a numerical score and then ranked in order of priority. The projects with the higher scores usually become candidates for funding. For fiscal year 2001, GSA assessed the merits of 27 repair and alterations design projects proposed by its regional staff and selected 12 to recommend for funding. In 2000, we reported problems with the quality of data contained in IRIS. For example, we found that not all repairs were included, some repairs that were included were already in progress or completed, some data were incorrectly reported, and some cost estimates for repairs were not current.
In addition, the projects that make up the estimate for repairs and alterations are expressed in unadjusted dollars; in some cases the year that the estimate was made is not included in the database. We have also reported that the lack of a multiyear plan for repairs and alterations affects the agency’s ability to make investment decisions. In response to recommendations made in our previous reports, GSA is engaged in several activities intended to improve its IRIS database and enhance the management of its inventory of buildings. For example, the agency is undertaking efforts to validate the quality and consistency of IRIS, such as revising work item codes to be more descriptive. In addition, the agency has significantly reduced the number of overdue work items in the database, thereby improving the quality of the database, according to GSA officials. GSA is also implementing a building condition assessment survey, which provides automated cost estimates using industry-accepted software. GSA began a pilot program in one region in 2000 and has expanded the program to its other 10 regions. The agency expects to complete initial building condition assessments on the entire inventory of buildings by the end of September 2001. It plans to review information gathered in these assessments and enter new work items into the IRIS database. Existing work items will be updated with the results of the condition assessments. GSA’s IRIS database is used as input in determining funding priorities. For prospectus-level projects, GSA’s headquarters staff review the estimates submitted by the regions, apply the Expert Choice model and professional judgment, and then select projects for inclusion in the agency’s budget proposal that is sent to OMB. For nonprospectus-level repairs and alterations projects, GSA’s headquarters staff allocate a portion of all funds for repairs and alterations to each regional office based on regional priorities. 
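As a rough illustration, the weighted-criteria ranking performed by GSA's "Expert Choice" software can be sketched as a weighted sum. The five criteria below come from the report's description; the weights, sample projects, and ratings are hypothetical assumptions, not GSA's actual values.

```python
# Hypothetical sketch of a weighted-criteria ranking like GSA's Expert Choice
# process. The five criteria are from the report; the weights (which sum to 1)
# and the project ratings below are illustrative assumptions only.

WEIGHTS = {
    "economic_return": 0.30,
    "project_risk": 0.25,
    "project_urgency": 0.20,
    "community_planning": 0.15,
    "customer_urgency": 0.10,
}

def score(project):
    """Weighted sum of a project's 0-100 ratings on each criterion."""
    return sum(WEIGHTS[c] * project[c] for c in WEIGHTS)

projects = {
    "Courthouse HVAC overhaul": {
        "economic_return": 70, "project_risk": 80, "project_urgency": 90,
        "community_planning": 50, "customer_urgency": 60,
    },
    "Border station expansion": {
        "economic_return": 85, "project_risk": 60, "project_urgency": 40,
        "community_planning": 70, "customer_urgency": 75,
    },
}

# Rank projects by score, highest first; per the report, higher-scoring
# projects usually become candidates for funding, subject to staff judgment.
ranked = sorted(projects, key=lambda name: score(projects[name]), reverse=True)
for name in ranked:
    print(f"{name}: {score(projects[name]):.1f}")
```

As the report notes, the actual model is only an aid: the numerical ranking informs, but does not replace, the professional judgment of GSA staff.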
Federal Buildings: Funding Repairs and Alterations Has Been A Challenge—Expanded Financing Tools Needed (GAO-01-452, Apr. 12, 2001). Federal Buildings: Billions Are Needed for Repairs and Alterations (GAO/GGD-00-98, Mar. 30, 2000). General Services Administration: Many Building Security Upgrades Made but Problems Have Hindered Program Implementation (GAO-T-GGD-98-141, June 4, 1998). Federal Buildings: Actions Needed to Prevent Further Deterioration and Obsolescence (GAO/GGD-91-57, May 13, 1991). The following are GAO’s comments on GSA’s letter dated July 6, 2001. 1. We revised the report to indicate that the database is used as input to funding decisions. 2. We note in the report that GSA uses economic benefits as one criterion for ranking and selecting projects for funding. 3. We note in the report that GSA is engaged in activities intended to improve the quality and consistency of its database on repairs and alterations and provide examples of those activities. Other key contributors to this report were Phillis Riley, Sharon Dyer, John Shumann, Christine Bonham, Catherine Colwell, Michael Curro, William Dowdal, Timothy Guinane, Trina Lewis, Lisa Turner, and Alwynne Wilbur.

A sound public infrastructure plays a vital role in encouraging a more productive and competitive national economy and meeting public demands for safety, health, and improved quality of life. The federal government has spent an average of $149 billion (in constant 1998 dollars) annually since the late 1980s on the nation's infrastructure. Little is known, however, about the comparability and reasonableness of individual agencies' estimates for infrastructure needs. This report discusses infrastructure investment or "needs" estimates compiled by seven agencies--the U.S.
Army Corps of Engineers, the Environmental Protection Agency (EPA), the Federal Aviation Administration (FAA), the Federal Highway Administration (FHWA), the Federal Transit Administration (FTA), the General Services Administration (GSA), and the Appalachian Regional Commission (ARC). GAO focuses on the following infrastructure areas: water resources (inland and deep draft navigation, flood control, and shore protection), hydropower, water supply, wastewater treatment, airports, highways, mass transit, and public buildings. GAO found that the agencies' estimates for infrastructure investments ranged from GSA's calculation of $4.58 billion (in current dollars) over one to five years to repair public buildings to FHWA's estimate of $83.4 billion (in constant 1997 dollars) per year over 20 years to improve highways. The estimates prepared by the Army Corps (for water resources and hydropower) and GSA are for federal spending; the other estimates are for spending from federal, state, and local sources. Each of the seven agencies developed its investment estimate using data from localities, states, or agency regional offices. The estimates, however, were developed using different analytical procedures. The investment estimates cannot be easily compared or simply "added up" to produce a national estimate of infrastructure investment needs because of differences in the methods used, time periods covered, and spending sources. Each of the seven agencies has procedures for developing infrastructure investment estimates that reflect some of the eight practices used by leading government and private sector organizations, but no agency has procedures for all eight leading practices.
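The benefit-cost screening described above for FHWA's HERS model and FTA's TERM model can be sketched in a few lines: an improvement is counted toward the investment estimate only if its expected benefits outweigh its cost. The candidate improvements and dollar figures below are hypothetical, not drawn from either agency's data.

```python
# Hypothetical sketch of the benefit-cost screening used by models such as
# FHWA's HERS and FTA's TERM: an improvement is included in the investment
# estimate only when its expected benefits exceed its cost. All figures
# below are illustrative assumptions.

candidates = [
    # (name, expected lifetime benefits in $M, cost in $M)
    ("Replace 20-year-old buses", 12.0, 8.0),
    ("Rehabilitate rail station", 5.0, 9.0),
    ("Widen highway segment", 30.0, 22.0),
]

def economically_justified(benefits, cost):
    """Include an improvement only when its benefit-cost ratio exceeds 1."""
    return benefits / cost > 1.0

# The investment estimate totals the cost of only the justified improvements.
estimate = sum(cost for _, benefits, cost in candidates
               if economically_justified(benefits, cost))
print(f"Estimated investment: ${estimate:.1f} million")
```

This is the screening GAO's review credits as the rigorous part of the agencies' methods; as the report notes, the external adjustments layered on top of the models are not subjected to this kind of test.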
The School Improvement Grants (SIG) program was created in 2002 and funds reforms in the country’s lowest-performing schools with the goal of improving student outcomes, such as standardized test scores and graduation rates. Congress greatly increased SIG program funding, from $125 million available in fiscal year 2007—the first year the program was funded—to $3.5 billion in fiscal year 2009, although funding for the last two fiscal years has been $506 million a year. Before awarding formula grants to states, Education reviews each state’s application and approves the state’s proposed process for competitively awarding SIG subgrants to eligible entities, including school districts. As a part of its application, each state is required to identify and prioritize eligible schools and ensure that school districts with the persistently lowest-achieving schools receive SIG funding. School districts must then apply to states to implement one of seven intervention models, each with specific requirements for reform interventions, in each SIG school over a grant period of 3 to 5 years. Two of these intervention models, which are being implemented in the large majority of current SIG schools, require schools to extend learning time as part of the required whole-school reform strategies. SIG requirements define increased learning time as using a longer school day, week, or year to significantly increase the total number of school hours.
This additional time is to be used for (1) instruction in academic subjects, including English, reading or language arts, mathematics, science, foreign languages, civics and government, economics, arts, history, and geography; (2) instruction in other subjects and provision of enrichment activities that contribute to a well-rounded education, such as physical education, service learning, and experiential and work-based learning opportunities; and (3) teachers to collaborate, plan, and engage in professional development within and across grades and subjects. The 21st Century Community Learning Centers program (21st Century) is meant to support local communities in establishing or expanding community learning centers that provide opportunities for academic enrichment during non-school hours, such as before- and after-school and summer school programs, and related services to students’ families. The program provides formula grants to state educational agencies (SEA), which subsequently offer competitive subgrants to eligible entities, including school districts, community-based organizations, and other public or private entities. SEAs must provide an assurance in their applications for 21st Century grants that they will make awards only to eligible entities that propose to serve students who primarily attend schools eligible to operate a schoolwide program under Title I of the Elementary and Secondary Education Act of 1965, as amended (ESEA), which are schools with at least 40 percent of students from low-income families. In addition to activities designed to help students meet state and local academic achievement standards, program funds may be used to provide activities that complement and reinforce a student’s regular school-day program, such as art and music education activities, recreational activities, telecommunications and technology education programs, expanded library hours, literacy programs, and drug and violence prevention activities. 
In fiscal year 2012, Education began inviting SEAs to request waivers of specific ESEA provisions that require community learning centers to carry out their activities during non-school hours or periods when school is not in session. The waiver allows grantees to use 21st Century program funds to conduct authorized activities during an extended school day, week, or year in schools that provide extended learning time for all students in the school. Education’s guidance for the 21st Century program defines “expanded learning time” as “additional instruction or educational programs for all students beyond the state-mandated requirements for the minimum number of hours in a school day, days in a school week, or days or weeks in a school year.” According to Education guidance, these activities can include supplemental academic enrichment activities that support a well-rounded education for students, increased collaboration and planning time for teachers, or partnerships with outside organizations, such as nonprofit organizations, that have demonstrated experience in improving student achievement. The vast majority of funding for K-12 public schools nationwide comes from state and local sources. For example, in school year 2009-2010, U.S. public schools received about 87 percent of their funding from state and local sources—43 percent from state sources and 44 percent from local sources. Federal funding has generally comprised about 9 percent of public schools’ funding from school year 2002-2003 through 2008-2009. In school year 2009-2010, federal funding for public schools comprised about 13 percent, which was slightly higher than in previous years due in part to the American Recovery and Reinvestment Act of 2009. For public schools nationally, Education’s Title I, Part A of ESEA is among the programs providing the largest amounts of federal funding.
The School Improvement Grants (SIG) program is the only program administered by Education that provides funds specifically to establish extended learning time in a school, according to Education officials. Between school year 2010-2011, when changes were made to SIG to require extended learning time in certain instances, and 2014-2015, the last year SIG data are available, nearly 1,800 schools, or approximately 94 percent of SIG schools, had chosen one of the two school improvement models (out of the four available) that require extended learning time (see fig. 1). The schools implementing either of those two models had to extend learning time alongside other key reforms such as developing new teacher and principal evaluation systems. However, these 1,800 schools represent a small fraction of the nearly 90,000 K-12 public schools nationwide. Further, among the lowest-performing U.S. public schools, less than 7.5 percent are receiving SIG funds, according to a 2012 Institute of Education Sciences study. Although only about 1,800 schools have received assistance through SIG grants, this assistance can be vital to a school in helping to implement key reforms such as extended learning time. According to an Education report on the 2012-2013 school year, the average 3-year grant was $2.6 million as of that year. Education found that 68 percent of SIG schools in school year 2010-2011 (referred to as Cohort 1), 79 percent in school year 2011-2012 (referred to as Cohort 2), and 83 percent of SIG schools in school year 2012-2013 provided extended learning time. This is to be expected given that, as previously noted, most school districts have chosen a SIG model that requires extended learning time. On average, Cohorts 1 and 2 had 76 and 96 more school hours, respectively, than all public schools in school year 2011-2012, according to Education’s analysis of SIG grant reports.
Additionally, a 2014 Institute of Education Sciences study found that 71 percent of SIG schools surveyed had extended learning time, while only 60 percent of non-grantee schools had taken steps to increase learning time. However, Education officials also told us some schools increased learning time by very little, some by as little as 10 minutes a day. In contrast to SIG, where most SIG schools are implementing extended learning time, as of July 2015, Education officials reported that only a fraction of the 21st Century learning centers it funds—about 69 out of 10,000 nationwide—are supplementing local extended learning time initiatives by taking advantage of waivers of ESEA requirements that allow them to provide 21st Century program activities and services during an extended school day. Therefore, as required by statute, more than 99 percent of the centers continue to use these subgrants to serve students outside school hours, which according to Education officials is typically in after-school programs where attendance is voluntary. The average subgrant per center was about $113,000 per school year. Regardless of whether subgrantees conduct activities during extended learning time or not, centers are to follow the same program rules about what types of activities can be funded. Further, Education said schools cannot use these funds to establish new programs to extend the school day. Rather, they can be used only to supplement other programs and activities for all students in schools that had extended the normal school day, week, or year. Education reported in its fiscal year 2015 budget justification that using 21st Century funds in this way could improve 21st Century program performance by minimizing or eliminating participation problems that many schools and other providers have experienced. For example, Education reported that nearly half of students attending 21st Century activities attended for fewer than 30 days in school year 2012-2013. 
Two extended learning time providers we interviewed said that changing from voluntary after-school activities to programs during the extended school day, which is generally mandatory for all students, helped reach underserved students who may not have chosen to attend if the activities were offered after school. Since 2012, 27 states have applied for ESEA waivers to allow 21st Century grantees to conduct authorized activities during the school day to support extended learning time, and all were approved by Education. However, as of July 2015, only 11 of those states have schools that have implemented extended learning time programs. With regard to states’ limited use of these waivers, Education officials said that while the waivers have been available since mid-2012, extended learning time can be expensive to implement and it takes time for states to plan and collaborate with districts, schools, and community organizations. Additionally, Education officials said states are not required to report the number of eligible entities using this waiver flexibility, so these figures may understate actual use. For the first time, Education is collecting data on 21st Century activities conducted during the school day in schools with extended learning time, and officials told us they plan to report detailed figures in February 2016 on the number of hours and locations of all 21st Century programs in schools with extended learning time. Education officials told us that most Education funding streams are designed for use during the school day, regardless of the length of the school day or year, and as such, schools may use these funds during extended learning time, consistent with other program requirements. For example, schools may use funds from Title I, Part A of ESEA to serve all students in schools that operate a schoolwide program in which they serve a high concentration of students from low-income families.
Education officials said that these funds could be used to supplement local funding to extend the school day for all students in those schools. Officials added that Charter School Grants may support eligible charter schools with or without extended learning time programs. Charter School Grants are primarily designed to assist in the planning, program design, and initial implementation of charter schools, either through the creation of new charter schools or the replication or expansion of existing high-quality charter schools with demonstrated records of success. In another example, Education officials told us that Full-Service Community Schools grants, which support partnerships among schools, school districts, and community-based organizations for coordinated academic, social, and health services, can be used to provide those services during extended learning time. Representatives from several organizations we interviewed also identified ways that Education programs can contribute to extended learning time. For example, a representative from a teachers’ union we interviewed told us the union worked with a school that used Title I funds to hire a site coordinator who helped the school manage implementation of a longer school day. Representatives from extended learning time organizations also told us that Title II funding, which is used for teacher development, was helpful for schools with extended learning time because it allowed them to use the longer day for more professional development. A representative from an organization representing school superintendents said school districts may be using Title VI funds, which provide assistance to rural schools, to extend learning time. Education officials cautioned that schools that use federal funds during extended learning time must meet all applicable requirements for those funds, including requirements related to allowable costs and any statutory “supplement not supplant” requirement.
In general, this requirement precludes a grantee from using federal funds for activities that it would conduct, in the absence of such federal funds, with state or local funds. On average, we estimate that K-12 public schools nationwide have a 6.7-hour school day and a 179-day school year, according to our analysis of Education’s Schools and Staffing Survey (SASS) data for the 2011-2012 school year, the most recent data available (see fig. 2). Specifically, an estimated 65 percent of schools have school days that are from 6.5 to 7.5 hours. Further, K-12 public schools that have the most time in school have an estimated average of 1,341 hours in a school year, which is about 137 more hours (or about four more weeks) than the estimated national average of 1,204 hours, according to our analysis of Education’s SASS data for the 2011-2012 school year. The schools with the most time are going well beyond the hours required per year by most states. Specifically, a 2015 Education Commission of the States study found that most states require between 900 and 1,080 hours per year for all grades. The 9,000 schools in the top 10 percent of schools with the most school hours tend to add time by lengthening the school day rather than by adding days to the school year. Specifically, according to our analysis of SASS data, schools in the top 10 percent of schools with the most time typically have a school day that is estimated to be one hour longer than that of all other schools. Further, representatives from the three extended learning time organizations we spoke with told us that most schools extended the day as opposed to adding days in the year. One representative told us that the typical schools they partner with have about three additional hours per day and operate Monday through Friday until 6 p.m. Charter schools generally include more hours in a school year than traditional K-12 public schools (see fig. 3).
Specifically, charter schools have an estimated average of 1,285 hours in a school year compared to an average of 1,209 hours for traditional public schools, according to our analysis of Education’s SASS data for the 2011-2012 school year. Further, a larger proportion of charter schools fall into the category of schools with the most time (30 percent) as compared to 9 percent of traditional public schools. Specifically, of the estimated 2,754 charter schools nationwide, 867 are in the top 10 percent of schools with the most hours. As we have previously reported, charter schools typically operate with more autonomy than traditional public schools. Consequently, charter schools may have fewer barriers than traditional public schools to extending their hours. Similarly, the time that K-12 public schools spend on learning differs by region of the country and by school setting (e.g., urban or rural). Schools in the South have an estimated average of 1,253 learning hours per year, compared with an estimated 1,197 hours in the Northeast, 1,192 hours in the Midwest, and 1,146 hours in the West. These additional hours mean that schools in the South provide, on average, approximately 3 weeks more learning time per year than schools in the West. The average number of school hours also differs according to a school’s setting. An estimated 7 percent of suburban schools were in the top 10 percent of schools with the most learning time, compared to 10 to 12 percent of schools in urban, small town, and rural settings. In contrast, there is little difference in annual school hours by grade level—such as elementary, middle, or high school (see fig. 5). For example, among the 90 percent of public K-12 schools nationwide with less learning time, middle schools have the most, with an estimated 1,213 hours in a school year.
The difference is relatively small, however; middle schools have an estimated 38 more hours per year than elementary schools and an estimated 36 more hours than high schools. For the 10 percent of schools with the most learning time, high schools have the most time, with an estimated 1,428 hours in a school year, but again, the difference is small. High schools have an estimated 17 hours more time than elementary schools and an estimated 10 hours more time than middle schools. One extended learning time provider we spoke with works to extend the learning day for low-income middle school students by providing academic support and project-based apprenticeships. Another extended learning time provider told us its extended learning time model is typically for kindergarten through eighth grade. Our analysis of SASS data for the 2011-2012 school year shows that K-12 public schools with the most time have a larger proportion of students receiving free or reduced price lunch—an estimated 61 percent, compared with 51 percent at all other schools (see fig. 6). Further, public K-12 schools in the top 10 percent of schools with the most time have a larger proportion of African American and Hispanic students—an estimated 9 percent more African American students and 7 percent more Hispanic students—than all other schools. One explanation for this could be that, according to our analysis, charter schools serve higher percentages of low-income, African American, and Hispanic students than traditional public schools. African Americans account for approximately 27 percent and Hispanics for 28 percent of the charter school student body population nationwide compared to 13 and 21 percent in traditional public schools, respectively.
Further, among public schools in the top 10 percent of schools with the most learning time, charters serve a higher percentage of African American and Hispanic students compared to traditional public schools. Specifically, African Americans account for approximately 37 percent of students in these schools and Hispanics account for 44 percent, compared with 19 and 25 percent in traditional public schools, respectively. Further, one of Education’s largest grant programs can be used to supplement local initiatives during extended school days or years and targets schools with a high concentration of low-income students. Specifically, Title I, Part A of ESEA provides funds to school districts and schools that serve high concentrations of low-income students. Further, representatives of extended learning time organizations we interviewed told us that the schools they work with tend to extend the school day to improve achievement for low-income and minority students who may not have access to the same enrichment activities as their more socioeconomically advantaged peers. One representative from an extended learning time organization told us that its programs seek to close gaps in opportunity and educational achievement for disadvantaged communities. Public K-12 schools with the most hours in a school year use this additional time for different purposes, including more instruction in math and literacy, and more time for the arts and physical education. For example, during the 2011-2012 school year, eighth-grade students in schools with the most time (the top 10 percent) are estimated to have spent an additional 1.3 hours per week, on average, in English language arts, math, and science instruction compared to eighth-graders in all other schools.
Further, third-grade students in schools with the most time spent an estimated additional 48 minutes per week, on average, in music, art, and physical education classes than third-graders in all other schools, according to our analysis of SASS data. Representatives from the three extended learning organizations we spoke with told us that their programs provide students more time for instruction in academic subjects and for enrichment activities, such as a debate club and project-based learning activities in science. Teachers in the schools with the most time spend more time providing instruction to students as compared to all other schools. Specifically, teachers in schools with the most time (the top 10 percent) teach an estimated 1.3 more hours per week compared to all other schools, with a weekly total of 31.2 hours compared to 29.8 hours. On the other hand, there was no significant difference in the hours spent in a school year on professional development between schools with the most time and all other schools. Further, according to a representative from one extended learning time organization with whom we spoke, schools often partner with external participants or providers of after-school programs to provide academic instruction or enrichment activities during the school day. We are not making recommendations in this report. We provided a draft of this report to the Department of Education for comment. Education provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Elizabeth Sirois (Assistant Director), Sheranda Campbell (Analyst-in-Charge), and Lucas Alvarez made significant contributions to this report. Assistance, expertise, and guidance were provided by Susan Aschoff, James Rebbe, Kirsten Lauber, John Mingus, Carl Barden, and James Bennett.

In recent years, a key strategy for improving student outcomes has been to extend learning time by lengthening the school day or year. In 2010, Education made significant changes to its SIG program, funded at about $506 million in fiscal year 2015, including requiring schools to extend learning time in certain instances. In 2012, Education began to invite waiver requests from states to use funds from its $1.2 billion 21st Century program to conduct authorized activities during extended learning time. Little is known about how much time public K-12 students spend in school. An explanatory statement accompanying Public Law 113-235 required GAO to report on learning time. In this report, GAO examines: (1) various Education programs that can be used to support extended learning time for K-12 students, and (2) learning time in public schools nationwide. In this report, GAO focuses on programs that require or may allow schools to lengthen the school day, week, or year. GAO analyzed the most recent available SIG and 21st Century grant data, as well as Education data on learning time from a nationally representative sample of schools. GAO also reviewed applicable federal laws, regulations, and agency documents; and interviewed Education officials and stakeholders selected to obtain diverse perspectives of school districts, states, service providers, and teachers. GAO makes no recommendations in this report. Education provided technical comments, which are incorporated as appropriate.
The Department of Education (Education) primarily supports extended learning time for K-12 public schools through the School Improvement Grants program (SIG). The SIG program, with an average 3-year grant of $2.6 million, is the only Education program that provides funds specifically to establish extended learning time in schools, according to Education. Nearly 1,800 schools that received SIG funds (about 94 percent of SIG schools) were required to extend learning time under the SIG program for school years 2010-2011 through 2014-2015. In addition, under the 21st Century Community Learning Centers (21st Century) grant program a small number of grantee schools—about 69 of the 10,000—have used program funds to support extended learning time. However, to do so, states need to obtain a waiver from Education to permit schools to use funds to conduct authorized program activities during an extended school day, week, or year. Education officials said that the average annual 21st Century grant was about $113,000. Although Education supports extended learning time with the SIG and, in rare cases, the 21st Century program, Education officials also pointed out that most of its K-12 programs are designed to be used during the school day, regardless of the length of the day. Regarding learning time, GAO estimates that the average length of the school day for K-12 public schools nationwide is just under 7 hours and the average school year is almost 180 days, according to GAO's analysis of Education's 2011-2012 data, the most recent available. In terms of hours per year, schools with the most time average almost 1,350 hours compared to about 1,200 hours, nationally. In addition, among all public schools, charter schools represent a larger proportion of schools with more time (about one-third of all charter schools) compared to approximately 9 percent of traditional public schools. 
Charter schools also represented a larger proportion of students who are low income, African American, or Hispanic. Regarding how schools use extended learning time, GAO found that schools with the most hours in a school year use the time for different purposes. For example, GAO estimates that eighth-grade students in these schools spend, on average, one more hour per week on academic subjects such as English, math, and science, while third-graders spend more time in music, art, and physical education classes.
As consumers increasingly use credit and debit cards for purchases, federal entities’ acceptance of cards to pay for goods and services has also increased. The Department of the Treasury’s Financial Management Service (FMS) performs the processing for card transactions for executive, judicial, and legislative branch agencies, as well as a number of governmental commissions, boards, and other entities that choose to accept credit and debit cards as a method of payment. Some other federal entities, such as the U.S. Postal Service and Amtrak, operate their own credit and debit card-processing programs and pay the associated fees for processing card transactions. FMS operates the Credit and Debit Card Acquiring Service, a governmentwide service that allows the federal entities for which it collects revenues to accept Visa, MasterCard, American Express, and Discover credit cards, as well as some types of debit cards. The volume of card transactions that FMS processed increased by more than 30 percent from fiscal year 2005 to fiscal year 2007. In fiscal year 2007, FMS processed more than 65 million card payments made to federal entities. FMS pays the fees associated with card acceptance for the federal entities that participate in the Card Acquiring Service. A merchant—including a government entity—that accepts MasterCard or Visa credit and/or debit cards for payment of goods and services enters into a contract with an acquiring bank that has a relationship with Visa and/or MasterCard to provide card payment-processing services. The merchant contract specifies the level of services the merchant desires, as well as the merchant discount fee and other fees that will apply to the processing of the merchant’s card transactions. To provide card acceptance services to federal entities that participate in the Card Acquiring Service, FMS enters into an agreement with a financial institution that has been designated as a financial agent of the U.S. government to provide acquiring banking services.
The agreement specifies the services to be provided to FMS and the federal entities that participate in the Card Acquiring Service. Visa and MasterCard establish and enforce rules and standards that may apply to merchants who choose to accept their cards. According to officials of the card networks, however, the networks are not involved in the relationship between a merchant and its acquiring bank. Several parties are involved in a card transaction. For example, Visa and MasterCard transactions involve (1) the bank that issued a cardholder’s card, (2) the cardholder, (3) the merchant that accepts the cardholder’s card, and (4) an acquiring bank. The acquiring bank charges the merchant a merchant discount fee that is established through negotiations between the merchant and the bank. A portion of the merchant discount fee is generally paid from the acquiring bank to the issuing bank in the form of an interchange fee to cover a portion of the card issuer’s costs to issue the card. The balance of the merchant discount fee is retained by the acquiring bank to cover its costs for processing the transaction. A merchant does not pay the interchange fee directly; rather, the interchange fee portion of the merchant discount fee is transferred from the acquiring bank to the issuing bank. Because issuing banks incur costs to issue cards to consumers, the interchange fee helps to allocate these costs among the parties involved in card transactions. Figure 1 illustrates the roles of each of the four parties in a typical credit card transaction and how fees are transferred among the parties. The figure shows that when a cardholder makes a $100 purchase, the merchant pays $2.20 in merchant discount fees for the transaction. This amount is divided between the issuing bank, which receives $1.70 in interchange fees, and the acquiring bank, which receives $0.50 for processing the transaction. 
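The fee split in the $100 example above can be expressed as simple arithmetic. The sketch below uses the rates implied by that example (a 2.2 percent merchant discount fee, of which 1.7 percent is interchange); actual rates vary by card type, merchant, and transaction channel.

```python
# Illustrative split of the merchant discount fee on a $100 purchase,
# using the figures from the example above. Rates are for illustration
# only; actual rates vary by card, merchant, and channel.

purchase = 100.00
merchant_discount_rate = 0.022   # 2.2% total merchant discount fee
interchange_rate = 0.017         # 1.7% passed through to the issuing bank

merchant_discount_fee = purchase * merchant_discount_rate   # $2.20
interchange_fee = purchase * interchange_rate               # $1.70 to issuer
acquirer_portion = merchant_discount_fee - interchange_fee  # $0.50 to acquirer

merchant_receives = purchase - merchant_discount_fee        # $97.80

print(f"Merchant discount fee:  ${merchant_discount_fee:.2f}")
print(f"  Interchange (issuer): ${interchange_fee:.2f}")
print(f"  Acquirer portion:     ${acquirer_portion:.2f}")
print(f"Merchant receives:      ${merchant_receives:.2f}")
```

As the split shows, the merchant never pays the interchange fee directly; it is the portion of the merchant discount fee that the acquiring bank passes on to the issuing bank.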
For American Express and Discover card transactions, generally only three parties are involved: the consumer, the merchant, and one company that acts as both the issuing and acquiring entities. Merchants that choose to accept these two types of cards typically negotiate directly with American Express and Discover over the merchant discount fees that will be assessed on their transactions. Because the issuing and acquiring institution are the same, no interchange fee is involved in the transaction. The merchant discount fees charged on American Express and Discover transactions are, however, set to cover some of the same types of costs that merchant discount fees (which include interchange fees) cover for Visa and MasterCard transactions. Officials of both the Visa and MasterCard networks told us that they aim to set default interchange rates at a level that encourages banks to issue their cards and merchants to accept those cards. According to the network officials, the rates are set to recognize the value of card acceptance and to reimburse issuing banks for some of the risks and costs incurred in maintaining cardholder accounts, including lending costs, such as the cost of funding the interest-free loan period, the cost associated with cardholders that default on their loans, and losses stemming from fraud. Officials with one of the card networks noted that interchange fees help to reimburse issuers for bearing the costs that merchants would otherwise have to bear for the ability to make sales to customers on credit. Both Visa and MasterCard develop and publish interchange rate tables that disclose the default rates that apply to various types of transactions. According to Visa and MasterCard officials, four main factors determine interchange rates applicable to a given transaction: Type of card—Different interchange rates apply to different types of card products. 
For example, both MasterCard and Visa have separate interchange rates for general purpose consumer credit cards, reward credit cards, commercial credit cards (issued to businesses), and debit cards. The rates vary because the costs, risks, and revenues associated with these different card products vary for issuers; they also reflect the networks’ goal of providing incentives for both issuance and acceptance of cards. For example, according to network officials, reward cards carry higher interchange fees because they tend to provide greater benefits both to merchants (in the form of average transaction amounts that are typically higher than those on standard cards) and to cardholders (in the form of cash rebates or points). Merchant category—The card networks classify merchants according to the line of business in which they are engaged. Interchange rates may reflect unique characteristics of different merchant categories, such as average profit margins and the way in which merchants authorize transactions. For example, according to card network officials, because the supermarket industry tends to have very low profit margins, the networks set interchange rates to encourage supermarkets to accept cards. Also, the method in which a merchant authorizes payments can affect the extent to which a card network’s system is used. (For example, hotels typically must authorize a payment at least twice—once at guest check-in to ensure the customer is authorized for the minimum payment amount, and again at checkout to authorize the final payment amount.) Additionally, some merchant types may qualify for special incentive interchange rates if a card network determines the merchant category has growth potential for card acceptance. For example, government organizations and utility providers receive lower interchange rates to encourage them to accept cards.
Merchant size (transaction volume)—Both MasterCard and Visa set lower interchange rates for merchants in some categories that conduct high volumes of card transactions over their networks. For example, according to Visa’s default interchange rates that were in effect as of October 2007, supermarkets that conducted a minimum of about 7 million Visa card transactions in calendar year 2006 qualified for lower rates than supermarkets that conducted fewer Visa transactions. Mode in which a transaction is processed—Interchange rates also differ depending on how a card transaction is processed. For example, transactions that occur without a card being physically present, such as in Internet transactions, carry a greater risk of fraud; therefore, higher interchange rates apply to these transactions. Similarly, swiping a card through a card terminal, rather than key-entering the account number, provides more information to the issuing bank to verify the validity of a transaction; therefore, swipe transactions are assessed a lower interchange rate. Interchange fees are not regulated at the federal level in the United States. The Federal Reserve, under the Truth in Lending Act (TILA), however, is responsible for creating and enforcing requirements relating to the disclosure of terms and conditions of consumer credit, including those applicable to credit cards. In addition, the Federal Reserve and other federal agencies, including the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, the Office of Thrift Supervision, and the National Credit Union Administration oversee credit card issuers. As part of their oversight, these regulators review card issuers’ compliance with TILA and ensure that an institution’s credit card operations do not pose a threat to the institution’s safety and soundness. 
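Conceptually, the four factors above—card type, merchant category, merchant size, and processing mode—act as keys into a published default rate table. The sketch below is purely hypothetical: the category names, keys, and rates are invented for illustration and do not reproduce any network's actual schedule.

```python
# Hypothetical interchange rate lookup keyed on the four factors described
# above: card type, merchant category, merchant size (volume), and
# processing mode. All categories and rates here are invented.

HYPOTHETICAL_RATES = {
    # (card_type, merchant_category, high_volume, card_present): rate
    ("consumer_credit", "supermarket", True,  True):  0.0125,
    ("consumer_credit", "supermarket", False, True):  0.0145,
    ("reward_credit",   "supermarket", True,  True):  0.0165,  # reward cards higher
    ("consumer_credit", "government",  False, True):  0.0110,  # incentive rate
    ("consumer_credit", "government",  False, False): 0.0160,  # card-not-present
}

def interchange_rate(card_type, category, high_volume, card_present):
    """Return the hypothetical default rate, or a fallback if no key matches."""
    key = (card_type, category, high_volume, card_present)
    return HYPOTHETICAL_RATES.get(key, 0.0180)  # illustrative fallback

# A swiped government transaction qualifies for the lower illustrative rate.
rate = interchange_rate("consumer_credit", "government", False, True)
print(f"Applicable rate: {rate:.2%}")
```

The table-lookup structure mirrors how the networks publish default rates: each combination of transaction characteristics maps to a specific rate, with card-not-present and unmatched transactions falling into higher-rate tiers.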
The Federal Trade Commission (FTC) generally has responsibility for enforcing TILA and other consumer protection laws for credit card issuers that are not depository institutions. As of early 2008, interchange fees were the subject of federal and state legislative proposals. For example, the Credit Card Fair Fee Act of 2008, introduced in March 2008, would, according to one of the bill’s sponsors, establish a process by which merchants and issuing banks could agree to set interchange fees and other terms of access to covered electronic payments systems without violating federal antitrust laws. Additionally, the bill would establish a three-judge panel, called the “Electronic Payment System Judges,” to make determinations of access rates and terms for electronic payments systems. The purpose of the panel would be to conduct proceedings to ensure that the rates and terms established by participants in the system are calculated to represent the rates and terms that would be negotiated in a perfectly competitive marketplace, that is, a marketplace of willing buyers and sellers in which neither has market power. Also, under legislative initiatives pending in some states, merchants who are parties to payment card agreements would be given access to information about the issuing bank’s interchange fees, including a schedule of all interchange fees charged by the bank, as well as notice of any change in the fees. State bills also would, among other things: prohibit a financial institution that issues a credit card or debit card from charging any fee, including interchange fees, based on the sales and use tax portion of a retail sales transaction; prohibit a financial institution from increasing the fee based on the size or cost of a transaction; and call on Congress to assess the impact on merchants of interchange fees and other discount fees and to require credit card issuers to be more open with merchants about the costs of the payment systems in which they participate.
As of March 2008, none of the initiatives had been enacted into law. Interchange fees also have been a factor in lawsuits alleging violations of the antitrust laws by credit card networks and related parties. The plaintiffs in those cases alleged that interchange fees were an example of the networks’ unlawful exercise of market power. As of October 2005, merchants had instituted at least 14 class action lawsuits in four separate districts against Visa and MasterCard and their member banks, alleging specifically that the defendants fixed interchange fees at supracompetitive levels in violation of Section One of the Sherman Antitrust Act. Currently, in a consolidated action pending in the United States District Court for the Eastern District of New York, merchants claim that interchange fees have an anticompetitive effect in violation of the federal antitrust laws. Appendix II provides additional information on cases that include, among other things, allegations that interchange rates were a function of anticompetitive conduct in violation of antitrust laws. Under GSA’s SmartPay program, GSA negotiates master contracts with banks to issue cards to federal entities that participate in the program. The first SmartPay master contracts were established in 1998 with five banks. These contracts are set to expire in November 2008 and will be replaced by new master contracts with four issuing banks under GSA’s SmartPay 2 program. Participating federal entities choose a bank from among those under contract with GSA that offer services that meet their needs, and develop individual task orders that specify the products and services that the banks will provide them. In negotiating their individual task orders, these federal entities also can specify to the issuing banks other services they may need to operate their card programs. 
For example, banks can provide tools that the federal entities use to monitor card usage and expenses, or customer service support, such as 24-hour emergency card service for federal employees. Federal entities realize benefits from accepting credit and debit cards, including increased customer satisfaction, fewer bad checks and cash thefts, and improved operational efficiency. Realizing these benefits entails costs, principally the merchant discount fees associated with card transactions but also the costs for related equipment needed to process the transactions. In fiscal year 2007, federal entities from which we collected data reported paying $433 million in merchant discount fees for the processing of over $27 billion in credit and debit card revenues. As card acceptance has become more common, federal entities have worked to control the associated fees, including reviewing the ways in which transactions are processed to ensure they qualify for the lowest possible interchange rates. Additionally, FMS began a pilot program in which it is reviewing the revenue collection mechanisms of the federal entities for which it provides services, with the aim of identifying cost savings and efficiencies. FMS has reviewed collection cash flows for eight federal entities thus far and has identified cost-savings opportunities. While it plans to conduct over 100 more reviews, it has not yet developed a full implementation strategy for the program. Such a strategy would help ensure that FMS achieves the program’s goals as expeditiously as possible and increase overall savings to the government.
Many of the officials we spoke with told us that consumers expect to be able to use cards to make payments, and some stated that they did not think they could stop accepting cards. For example, Amtrak officials stated that customers paying with cards account for about 85 percent of Amtrak’s sales and that if they did not accept cards, the number of people who ride their trains would decline significantly. Among the benefits mentioned by federal officials with whom we spoke was that card acceptance improves customer satisfaction with their organizations because consumers like to use their cards for convenience, credit card reward programs, and security reasons. Accepting cards also has enabled entities to conduct business via the Internet, which can reduce labor costs associated with sales and also can provide greater convenience to customers. For example, officials from the U.S. Mint stated that about 50 percent of their sales occurred through the Mint’s Web site. Some entities also stated that the ability to accept cards has increased their sales volume. Federal entity officials also noted that accepting cards reduces the amount spent on processing other forms of payment. By accepting cards, federal entities incurred less expense in transporting cash, experienced lower losses from theft of cash, and had fewer bad check expenses. For example, officials at the Department of the Interior noted that cash transport costs can be high for some remote parks and wildlife refuges. Several federal officials also stated that accepting cards has reduced the costs associated with processing checks, and that funds are deposited in accounts faster when customers use credit or debit cards than when they use checks. Additionally, Amtrak officials told us that accepting cards onboard trains for ticket and food and beverage sales resulted in fewer instances of employee theft of cash. Finally, many officials said that card acceptance improved internal operations at their entities.
For example, officials at the Department of the Interior stated that payments made by credit cards result in a more streamlined bookkeeping approach because card sales involve less paperwork (for reconciliation) than other payment forms. Defense Commissary Agency (DeCA) officials also stated that they believed that labor associated with reconciling sales at the end of the day declined as a result of the reduced volume of cash. Additional operational efficiencies mentioned by officials included a reduction in costs and exposure to fraud and errors from misplacing or miscounting cash and checks. Some officials stated that the efficiencies gained in their internal operations as a result of card acceptance allowed them to reallocate staff to different and more productive uses. For example, officials at the Department of the Interior explained that card acceptance at automated kiosks allowed them to reallocate some staff who used to collect entrance fees to more productive tasks. Amtrak officials also stated that customers’ ability to purchase tickets using cards, especially through the Amtrak Web site, has reduced their labor costs. Because the federal entities that utilize FMS’s collection services are not responsible for the associated card-processing costs, we could not determine how officials at these agencies would regard card acceptance if they had to pay these costs. However, an official at one federal entity that accepts cards and pays the associated costs noted that it is difficult to assess whether the savings from receiving less revenue in the form of cash or checks (and more from cards) sufficiently offset the entity’s card-related processing costs, including the interchange fees. He also stated that it is uncertain whether the entity receives higher revenues from accepting cards, as some customers would likely spend the same amount with them regardless of the type of payment used.
However, he believed that because customers demand convenient payment alternatives, and because private sector entities provide similar services for some of the entity’s products, the ability to accept cards allows the entity to stay competitive. The federal entities we contacted were not able to provide comprehensive data on any cost savings from accepting cards. We identified various government, academic, and industry studies that compared the cost of processing for different forms of payment; however, many of these studies found that precise estimates were difficult to calculate. Additionally, while most of the studies we reviewed found cash to be the least expensive payment form to process, the methodologies used in the studies were not consistent and the data contained in many of them were outdated. The volume of revenues accepted through credit and debit card payments was growing for the group of federal entities we reviewed. Data on revenues collected by FMS, which processes the card transactions for a large number of federal executive, legislative, and judicial branch agencies and other federal entities, show that while credit and debit card transactions accounted for only 0.23 percent of the total federal government revenues FMS collected in fiscal year 2007, its card collections have grown by almost 28 percent in just 2 years—from approximately $5.5 billion in fiscal year 2005 to almost $7.1 billion in fiscal year 2007 (in current dollars). As shown in table 1, the other federal entities from which we collected data also experienced an increase in card payments over the 3-year period, with the total reaching approximately $27 billion in credit and debit transactions for fiscal year 2007. (App. I contains a detailed discussion of our data sources and analysis of the data reported to us from the federal entities.)
As revenues from card payments have increased, so has the total amount of merchant discount fees paid by the federal entities from which we collected data. These federal entities reported paying a total of almost $433 million in merchant discount fees in fiscal year 2007 (see table 1). This figure represents an almost 12 percent increase over the amount paid in fiscal year 2006 and an almost 27 percent increase over the amount paid in fiscal year 2005. The average merchant discount rate increased about 4 percent from fiscal year 2005 to fiscal year 2007. Among the entities included in our review, Amtrak, FMS, and the Postal Service provided data specifically showing the amount of interchange fees associated with their Visa and MasterCard transactions (their acquiring banks provide them with these data). These three entities paid a total of approximately $205 million in interchange fees during fiscal year 2007, out of a total $218 million in merchant discount fees specifically for MasterCard and Visa transactions. These interchange fees accounted for the majority of total merchant discount fees these entities paid for accepting all card types. As card revenues and merchant discount fees increased for these three entities, so did the interchange fees they paid. Interchange fees increased by almost 36 percent, from almost $151 million in fiscal year 2005 to $205 million in fiscal year 2007 (in fiscal year 2006, they were $179 million). For a variety of reasons, some of the Department of Defense and Department of Homeland Security NAFIs were not able to separate interchange fees from the total merchant discount fees they paid. (For example, according to an official from one entity, its contract with its acquiring bank specified that all credit card transactions would be charged a fixed percentage fee, regardless of the interchange fees associated with a particular transaction; therefore, the entity did not have specific information on interchange fees.) 
The data provided by these entities showed that both card revenues and the associated merchant discount fees increased over the 2005 to 2007 period. Revenues from sales made on cards were about $7.5 billion in fiscal year 2005 and over $8.5 billion in fiscal year 2007, an approximately 14 percent increase. The merchant discount fees for card payments at these entities also increased from approximately $128 million in fiscal year 2005 to almost $150 million in fiscal year 2007, an increase of almost 17 percent. For some payments made using cards, the government does not bear merchant discount costs. For example, consumers can pay their income and business taxes to the Internal Revenue Service (IRS) using cards. To accept these payments, IRS has agreements with two private third-party entities that process payments for individuals or businesses that choose to use a credit or debit card to make a tax payment. The two private entities charge a convenience fee of 2.49 percent of the total tax payment for taxpayers who use their services, a portion of which covers the merchant discount fees paid by the third-party entity to its acquiring bank. In fiscal year 2007, these merchant discount fees totaled about $47.5 million for approximately $2.4 billion in tax payments, an 85 percent increase in tax payments made with credit and debit cards from fiscal year 2005. In addition to the interchange and processing fees that make up the merchant discount fee, federal entities face other costs associated with the acceptance of credit and debit cards. For example, entities must pay for equipment and software for card transactions, such as point-of-sale terminals, keypads for PIN debit card transactions, computers, modems, and printers, and pay for their installation and maintenance. 
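The convenience-fee arrangement described for IRS card payments reduces to simple percentage arithmetic. The sketch below is illustrative only (the function and variable names are ours, not the processors' actual billing logic); it also checks the effective merchant discount rate implied by the aggregate figures in the text.

```python
CONVENIENCE_FEE_RATE = 0.0249  # 2.49 percent charged by the third-party processors

def convenience_fee(tax_payment):
    """Fee the taxpayer pays the third-party processor; a portion of this
    covers the processor's merchant discount fees to its acquiring bank."""
    return round(tax_payment * CONVENIENCE_FEE_RATE, 2)

print(convenience_fee(5000.00))  # 124.5 on a $5,000 tax payment

# Aggregate figures from the text: roughly $47.5 million in merchant
# discount fees on about $2.4 billion in card tax payments implies an
# effective merchant discount rate near 2 percent.
print(round(47.5e6 / 2.4e9 * 100, 2))  # 1.98
```

Note that the taxpayer's 2.49 percent convenience fee exceeds the roughly 2 percent effective merchant discount rate, which is consistent with the fee covering the processors' costs and not being borne by the government.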
While FMS pays the merchant discount fees associated with card transactions for entities for which it settles transactions, it does not pay for the costs associated with equipment and software; these costs are the responsibility of the entities. Other costs of accepting cards include complying with industry security standards, known as the Payment Card Industry Data Security Standard, training employees to process and reconcile card transactions, and experiencing losses associated with fraudulent use of cards. However, information provided by some entities indicated that these additional costs were not significant compared to merchant discount fees. As card acceptance has grown, federal entities have used several methods to manage their costs and reduce the fees associated with card transactions. One method is to ensure that their Visa and MasterCard transactions are processed so as to qualify for the lowest applicable interchange rate. Both Visa and MasterCard have a merchant category for federal entities, and the interchange rates for the transactions of merchants in these categories are lower than those for many other merchant categories. As long as federal entities’ transactions meet all applicable processing requirements—for example, they must be submitted for final settlement in a timely manner—the entities are charged the interchange rate applicable to those merchant categories. For example, as of April 2008, if transactions met all applicable processing requirements, government entities accepting a MasterCard consumer credit card as payment would pay an interchange fee of 1.55 percent of the transaction amount plus $0.10, and if accepting a Visa consumer credit card, an interchange fee of 1.43 percent of the transaction amount plus $0.05. (In comparison, the interchange rate applicable to a MasterCard general purpose consumer credit card transaction at some fast food stores is 1.90 percent.) 
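The rate structure just described, a percentage of the transaction amount plus a fixed per-item fee, can be illustrated with a short sketch. The April 2008 government merchant-category rates below come from the text; the function itself is our own illustration, not any network's actual fee software.

```python
def interchange_fee(amount, pct_rate, fixed_fee):
    """Interchange fee = percentage of the transaction amount plus a fixed per-item fee."""
    return round(amount * pct_rate + fixed_fee, 2)

# April 2008 rates for the government merchant category, per the text:
# MasterCard consumer credit: 1.55% + $0.10; Visa consumer credit: 1.43% + $0.05
print(interchange_fee(100.00, 0.0155, 0.10))  # 1.65 on a $100 transaction
print(interchange_fee(100.00, 0.0143, 0.05))  # 1.48 on a $100 transaction
```

By comparison, the 1.90 percent rate cited for some fast food stores would make the percentage component of the same $100 transaction $1.90, which is why qualifying for the government merchant category matters.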
In some cases, card transactions at federal entities can be assessed a lower rate. For example, FMS officials told us that DeCA's transactions qualify to be processed using the interchange rate applicable to the supermarket merchant category, which can range from 1.27 percent to 1.48 percent plus $0.05 for MasterCard general purpose consumer credit card transactions, depending on the volume of card transactions processed. Given that the method by which the card is accepted, the transaction volume, and other factors can affect interchange rates, many federal entities have taken steps to ensure that the acceptance and processing procedures they follow result in the most advantageous interchange rates applying to their transactions. For example, Amtrak officials explained that by replacing card machines (which embossed paper receipts) with wireless card terminals on trains, they were able to significantly reduce the interchange rates that applied to transactions made aboard their trains, because the electronic transactions qualified for a lower interchange rate than the paper transactions. Moreover, FMS officials explained that before the agency signed the current agreement with its acquiring bank in August 2006, they carefully reviewed the bank's interchange management capabilities and incorporated provisions to ensure that the bank employs them. For example, the bank is responsible for monitoring how card transactions are being processed and the interchange rates they are being assessed. In addition, the bank provides FMS with daily and monthly reports that provide various levels of detail on the interchange fees paid. Both the bank and FMS officials review these reports to identify instances in which transactions may have been charged a higher interchange rate—known as a downgrade—because they were not processed under the requirements necessary to qualify for a lower rate.
An FMS official stated that FMS then works with the acquiring bank and individual federal entity that processed the transaction to identify the reasons and to resolve the problem in order to avoid future downgrades. For example, an FMS official explained that in one instance a DeCA store had a broken card terminal in a checkout aisle that prevented employees from swiping cards. Instead, employees keyed in card information, which resulted in a number of transactions being downgraded and assessed a higher interchange rate. With the assistance of FMS’s acquiring bank, the problem was identified and DeCA employees were told that should the problem reoccur, they are to use other terminals to process card transactions, which would ensure they would not be assessed a higher rate. An FMS official stated that under the current agreement with its acquiring bank, very few transactions have been downgraded; however, FMS still works to resolve these instances when they occur so that the total cost associated with government transactions can be reduced. Officials of two other federal government entities told us that they similarly review data provided by their acquiring banks to identify opportunities to reduce fees. Another way that several federal entities have attempted to control fees associated with card acceptance is by expanding their ability to accept PIN debit card payments. For example, PIN debit transactions generally are assessed lower interchange rates than “signature” debits, and therefore some federal entities are beginning to implement the technology necessary to accept these transactions. While federal entities must make an investment in the equipment needed to process PIN debit transactions (for example, PIN pads), one entity told us that the much lower interchange rates associated with PIN debit transactions justified the investment. 
An FMS official stated that the only entity for which it processes card transactions that currently has the ability to accept PIN debit cards is DeCA; however, as entities undergo equipment upgrades, FMS works with them to identify equipment that may lower overall collection costs. For example, one federal entity is in the process of developing a new terminal system for card collections, and as part of this process, FMS is encouraging the entity to implement a system that has the capability to process PIN debit transactions. Additionally, some of the military NAFIs with which we spoke adopted technologies necessary to accept PIN debit cards, stating that they too recognized the cost savings associated with these transactions. Federal entities also can reduce card acceptance fees by changing the way in which they or their acquiring banks connect to various card networks. For example, Postal Service officials explained that they were in the process of converting to a new method of processing transactions called a payment switch, which will funnel all of the information from the Postal Service’s 70,000 terminals into one settlement file at the end of the day. The file then is sent to a third-party card processor. The officials explained that the payment switch will reduce substantially the processing fee component of card payment costs, because the technology in the payment switch allows for routing each transaction to the lowest cost processor. Additionally, the payment switch will enable the Postal Service to send some card transactions directly to a card company rather than through the third-party processor, reducing the cost of accepting those transactions. FMS’s current acquiring bank has also implemented changes in the method by which it processes PIN debit card transactions. 
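The least-cost routing described for the payment switch and for PIN debit processing can be sketched as a per-transaction cost comparison. Everything in this example is hypothetical (the network names and fee figures are invented); it only illustrates how routing each transaction to the cheapest network reduces total fees.

```python
# Hypothetical per-network fee schedules: (percentage rate, fixed fee per item).
NETWORK_FEES = {
    "NetworkA": (0.0080, 0.15),
    "NetworkB": (0.0055, 0.20),
    "NetworkC": (0.0100, 0.05),
}

def route_transaction(amount):
    """Return the network with the lowest total cost for this transaction."""
    def cost(network):
        rate, fixed = NETWORK_FEES[network]
        return amount * rate + fixed
    best = min(NETWORK_FEES, key=cost)
    return best, round(cost(best), 2)

print(route_transaction(10.00))   # small tickets favor the low fixed fee
print(route_transaction(100.00))  # larger tickets favor the low percentage rate
```

The crossover between fixed and percentage fee components is why a switch that evaluates each transaction individually, rather than sending all volume to one processor, can lower aggregate costs.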
FMS officials explained that the bank identified a method for routing PIN debit card transactions to different networks so that the costs for processing the transaction are minimized, resulting in annual savings of almost $300,000 for FMS. Another way in which federal entities have acted to reduce card acceptance costs is by negotiating with their acquiring banks for lower merchant discount rates or with card networks for lower interchange rates. Some of the federal entities we reviewed have realized card acceptance savings by negotiating new acquiring bank services contracts. These entities were able to negotiate lower rates for the processing component of the merchant discount rate applied to their transactions. For example, by signing a new acquiring bank agreement, one federal entity received a substantial reduction in the processing fee component of its merchant discount rate. Also, to obtain a more favorable merchant discount rate for their transactions, officials from some of the military service NAFIs have been working together to try to negotiate a lower merchant discount rate with American Express on the basis of the volume of transactions they provide to that company. Officials at some of the entities with whom we spoke stated that they did not believe they could negotiate effectively with the card networks (MasterCard and Visa) for lower interchange rates for their transactions. However, some federal entities stated that they have attempted to negotiate and have had varying levels of success: FMS officials told us that they tried to negotiate lower interchange rates with both Visa and MasterCard by stating that some factors included in determining interchange rates do not necessarily apply to federal government transactions. For example, FMS officials argued that the federal entities that participate in the Card Acquiring Service pose less risk than other merchant types and that there is no risk of delinquency on the part of the Treasury.
FMS officials stated that their negotiations were not successful and that they were not able to negotiate lower interchange rates. Officials from the Postal Service also explained their attempts to negotiate with the card networks. They stated that they believe lower interchange rates should be applied to their transactions for a variety of reasons. First, the Postal Service estimates that it is one of the top U.S. merchants in terms of card transaction volume. Second, its transactions carry less risk of fraud than those of some other merchants because most are conducted face to face. Third, the Postal Service operates a large retail network with 35,000 offices, self-service terminals, and mail and phone orders, as well as a Web site that receives approximately 30 million hits per month, providing considerable visibility for the networks. Fourth, the Postal Service has its own law enforcement agency that investigates instances of fraud, including fraudulent use of cards where merchandise travels through the mail. These investigations result in the recovery of merchandise as well as stolen card data and, in some cases, the arrest of international criminals, to the benefit of the credit card industry. They noted that the benefit of such a service to the card networks was not reflected in the interchange rates applicable to Postal Service transactions. The officials did state that they have had some limited success in negotiations with the card networks, resulting in some small cost savings. Officials from another federal entity told us that they have had some success in receiving funds from one of the networks as a result of a joint marketing program. The funds could be used to reduce interchange costs and/or for additional marketing efforts; however, the details of the negotiations are bound by confidentiality agreements and are considered proprietary information.
The officials explained that negotiations of this type are not typical of federal entities because of the limited marketing opportunities available to most government entities. Although some federal entities have had some success in negotiating lower interchange rates for their transactions, whether additional opportunities exist for further reductions in interchange rates is unclear. According to officials of MasterCard and Visa, among the factors that are considered when setting interchange rates is whether the industry or sector represents a new market for credit and debit cards. According to these officials, they see government payments as a market in which they hope to increase card acceptance and transaction volumes; thus, the interchange rates that Visa and MasterCard set for government transactions are lower than those of many other merchant categories. Additionally, officials at both MasterCard and Visa told us that opportunities exist for merchants, including federal entities, to negotiate for lower interchange rates assessed on their transactions. For example, the MasterCard officials explained an instance in which, in response to rapidly rising gasoline prices, they worked with gasoline merchants to develop a cap on the interchange fees that can be charged on petroleum purchases. Officials from both networks explained that they have individuals dedicated to developing customized arrangements with merchants and that these negotiations involve identifying mutually beneficial arrangements for both the merchant and the network. Also, we found it difficult to assess whether federal entities could negotiate rate reductions based on their relative transaction volume or aggregate card revenues, because we could not identify any publicly available data we could use to determine how the federal government’s total transaction volume or aggregate card revenues compare with those of other large merchants. 
In addition to looking for opportunities to reduce card acceptance costs, FMS has initiated a program to review the overall cash management practices of federal entities. In its role as the federal government's central collection services provider, FMS provides federal entities with a number of alternative revenue collection mechanisms to meet their needs. It is also responsible for ensuring that the federal government's collection activities are efficient and that costs are minimized. Additionally, according to FMS, the Deficit Reduction Act of 1984 authorizes FMS to conduct periodic cash management reviews of federal entities' financial operations. In the past, FMS allowed federal entities for which it collected revenues to pick from the variety of collection mechanisms that FMS offered, without examining which mechanisms would collect the revenue most cost-efficiently. However, the Office of Management and Budget's (OMB) 2004 assessment of FMS's collections program identified the need for FMS to develop additional techniques to convince federal entities to reduce paper-based collections. In 2007, FMS piloted a program to review the revenue collection mechanisms used by the federal entities for which it collects revenues, and how and from whom payments to these entities typically are made. The reviews are designed to identify inefficiencies in current collection mechanisms and to help FMS attain one of its strategic goals of providing timely collection of federal government revenues, at the lowest cost, through electronic means. According to FMS officials, the program is not focused on card transactions, but rather on overall payment management improvements. The reviews will allow FMS to work with federal entities to take advantage of advances in lower-cost technology that may have occurred since the entities began using their existing mechanisms.
Among other things, FMS is examining whether entities are using paper collection mechanisms when they could instead be using electronic mechanisms, or, if electronic mechanisms are already being used, opportunities to reduce any associated fees by substituting cheaper electronic mechanisms. For example, if an entity accepts credit cards, FMS may also suggest cheaper collection alternatives, such as PIN debit cards or automated clearinghouse transactions. Once it has reviewed an entity's collections and processes and identified improvements, FMS develops an agreement that details the changes to be made and the timeline for implementing them. FMS officials explained that while entities are not mandated to implement changes in their collection mechanisms, the agreements will provide for an "inefficiency charge" that will assess penalties to the entity if the agreed-upon recommendations are not implemented by the dates stipulated in the agreement. Such charges will be calculated on a per transaction basis and require that the entity transfer funds to the Treasury to cover the amount. In determining which entities to review for the pilot phase, FMS officials said that their focus for the program was first on the 24 Chief Financial Officer (CFO) agencies identified in the Chief Financial Officers Act of 1990. FMS officials said that they also focused on entities that showed the most potential for savings from revising their collection mechanisms. Criteria used for selecting agencies to participate in the pilot program included (1) the dollar volume of the entity's collections, (2) the amount of revenue not collected in electronic form (that is, cash and checks), and (3) the entity's record of prior cooperation with FMS in converting paper processes to electronic mechanisms. As of March 2008, FMS had reviewed collection cash flows at eight federal entities and had drafted agreements to implement revised collection procedures with each.
The results confirm that opportunities for improvement exist, although only two of the eight agreements have been signed (the agency's goal for the program for fiscal year 2008 is to have at least six of the eight agreements signed). Through the eight agreements that have been developed, FMS has identified various potential process improvements and changes that would result in recurring cost savings. For example, FMS staff determined that replacing the check-processing method DeCA used with a more advanced method that converts paper checks to electronic images at the point of sale would produce savings each time a check is presented at a DeCA location. FMS officials told us that before beginning the program they had developed a general estimate of the cost savings that could be achieved by converting from paper collection mechanisms to electronic ones; however, they have not developed estimates of the savings that would be achieved by implementing the specific actions they have recommended at each of the entities they have reviewed thus far. At our request, FMS officials developed an estimate of the cost savings associated with a recommendation contained in one of the draft agreements they have prepared. FMS estimated that if IRS converted 67 million payments currently being received in paper form to transactions processed by an electronic system, savings of approximately $40 million annually would result. FMS officials stated that they have begun to prioritize the order in which they will conduct reviews for the remainder of the federal entities. They estimate that they will conduct reviews, and draft agreements, with as many as 85 entities within the 24 CFO agencies. An FMS official estimated that the reviews they plan to complete should each take approximately 6 to 9 months; however, each of the reviews conducted as part of the pilot has taken longer.
FMS officials attributed the extra time needed to conduct reviews during the pilot phase to the fact that the program is new, and they have spent time developing a standard review process and templates for the agreements. Additionally, the officials explained that much of a review's success, and the length of time it takes, depends on the willingness of the entities to work with FMS and to incorporate the recommended changes into their existing mission and goals. After reviews of the CFO agencies are completed, FMS officials anticipate that an additional 29 reviews will be conducted for the non-CFO agencies for which FMS provides collection services. The FMS staff responsible for conducting these reviews consists of five full-time staff members who constitute a new customer relationship management group formed in the last few years, and performing the reviews currently consumes the majority of these staff members' time. In addition to these five staff members, FMS has a director who oversees the program, as well as staff in various program areas within FMS who assist in different stages of the reviews. Because FMS began this program as a pilot, it has not developed a full implementation strategy that could help ensure an appropriate resource commitment and timely attainment of its goals. For example, FMS officials told us they have not developed a timeline for completing the reviews for all agencies because they are focused on the 24 CFO agencies. However, because this program will help FMS achieve its strategic goal of increasing the percentage of federal government revenues collected electronically—a percentage that has remained constant for the last 3 fiscal years—establishing a targeted timeline for completing the remaining reviews could help FMS ensure that it makes progress toward this goal. In addition, in its 2004 review, OMB noted that FMS lacked policies and techniques for convincing federal entities to eliminate paper-based collections.
Including in its reviews estimates of the cost savings to be achieved by implementing the recommended changes could help FMS emphasize to the entities the importance of acting on the recommendations that it identifies. Finally, FMS has already found that reviews are taking more time to complete than it initially anticipated. The cost savings associated with implementing the efficiencies identified in the reviews are both immediate and recurring. Accordingly, as the pilot program is fully implemented, ensuring that it has adequate resources for completing the reviews expeditiously would help achieve the program’s goals. Authorities in as many as 26 countries have taken or considered actions intended to either limit interchange fees or improve card payment systems. In the 3 countries we examined in more detail—Australia, Israel, and Mexico—reforms designed to effect reductions in interchange rates were undertaken as part of broader efforts to change payment systems or card markets; thus, isolating the effects of the interchange interventions is difficult. Further, differences regarding the regulatory and market structures between these countries and those of the United States make it difficult to estimate the effects of any similar actions in the United States. According to information from regulators, card networks, and others, actions regarding card fees, issuer practices, or payment system functioning in general have been taken or considered in as many as 26 countries as well as the European Union in the last 18 years. These actions were described as, among other things, agreements between card networks or issuing banks and governmental authorities, as well as decisions by antitrust tribunals and commissions. For example, in December 2007 the European Commission issued a decision finding that MasterCard’s interchange fees for cross-border transactions in the European Economic Area violate European Community Treaty rules on restrictive business practices. 
In addition, the commission recently announced that it would conduct an inquiry into whether Visa's interchange fees similarly violate the treaty rules. In some cases, the actions taken are under appeal in these jurisdictions. In reviewing information available from U.S. and foreign regulators, card networks, and other sources, we determined that Australia, Israel, and Mexico had taken actions affecting various parts of their card and payment system markets in recent years, including actions specifically addressing merchant discount or interchange fees. However, data on the impact of the actions taken in these three countries are limited. The following sections summarize the actions in the three countries. A 1998 amendment to Australia's Reserve Bank Act created the Payment Systems Board within Australia's central bank, the Reserve Bank of Australia (RBA), and tasked the board with ensuring the efficiency, competition, and stability of that country's payment system. In 2000, RBA published the results of a study that it conducted with the Australian Competition and Consumer Commission, which concluded that prices to cardholders for various forms of card payments did not generally reflect the relative costs of those forms of payments. The authors of the 2000 study noted that merchant discount rates for credit card transactions averaged 1.78 percent, which included average interchange rates of 0.95 percent. RBA officials explained to us that because card users do not directly pay some of the costs of using cards, including interchange fees, consumers' use of credit cards at the expense of other lower-cost payment methods, such as debit cards, was inefficient for the economy as a whole. To help remedy this perceived inefficiency, RBA first attempted to encourage voluntary action on the part of the credit card industry. When these attempts were unsuccessful, RBA set a ceiling applicable to average credit card interchange rates, which took effect in 2003.
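Because the Australian ceiling applies to an average rate rather than to each individual rate, compliance amounts to checking a volume-weighted average across rate categories. The sketch below uses hypothetical rate categories and volumes against an illustrative 0.50 percent cap; none of these figures come from RBA's actual schedules.

```python
BENCHMARK = 0.0050  # illustrative cap of 0.50 percent on the weighted average rate

# Hypothetical rate categories: (interchange rate, transaction dollar volume)
RATE_SCHEDULE = [
    (0.0040, 60e9),  # e.g., electronic transactions
    (0.0065, 40e9),  # e.g., standard transactions
]

def weighted_average_rate(schedule):
    """Volume-weighted average interchange rate across all rate categories."""
    total_volume = sum(volume for _, volume in schedule)
    return sum(rate * volume for rate, volume in schedule) / total_volume

avg = weighted_average_rate(RATE_SCHEDULE)
print(round(avg * 100, 2))        # weighted average expressed in percent
print(avg <= BENCHMARK + 1e-12)   # within the cap, allowing float tolerance
```

A cap of this form lets a network keep some category rates above the benchmark as long as lower rates on high-volume categories pull the average down.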
RBA officials explained that to determine appropriate interchange rate levels, they worked with card networks to identify the range of costs incorporated in the calculation of interchange rates. After considering these costs, RBA officials decided that costs associated with transaction processing, fraud and fraud prevention, authorizing transactions, and financing the period between the time the merchant is paid and the time that the issuer receives payment should be covered by the interchange fees, while costs associated with credit losses should not be. To lower interchange rates from their then-current levels, the central bank set a benchmark rate that excluded the disallowed costs and required that the weighted average of the rates set by each four-party credit card system (which at that time included Visa, MasterCard, and a domestic card brand called Bankcard) not exceed that benchmark. RBA officials stated that they chose to use a cost-based method because it appeared to be a transparent and objective way to lower interchange rates. As a result of the reforms, the average interchange rate in the Visa and MasterCard networks declined from 0.95 percent to around 0.50 percent. In addition to the actions taken to limit credit card interchange fees, the central bank also took several other actions designed to promote efficiency and competition in the payment systems during the same period. In the late 1990s, officials at the Israel Antitrust Authority (IAA) considered actions to address a lack of competition in their country's credit card market. The market was dominated by two companies, each of which issued and acquired its own major card brand. The rates of merchant discount fees charged by these companies differed according to merchant type, and estimates of the average merchant discount rate at that time varied. Some estimated averages reported in 1997 and 1998 ranged from 1.9 percent to 2.46 percent.
In 1998, a second company began issuing Visa cards and acquiring Visa transactions in Israel. According to IAA officials, the two Visa issuers executed an agreement between them that included provisions setting the interchange rates applicable to transactions involving their cards. IAA declared the agreement between the companies to be a restraint of trade under Israeli antitrust law, but granted the agreement several exemptions in return for a gradual reduction in the interchange fees, under the condition that Visa conduct an issuer cost study that would provide the IAA with data to establish a suitable and acceptable interchange fee. After these exemptions expired and the IAA found the data provided by the Visa companies to be incomplete, the law required that banks obtain approval of their agreement from the Israeli Antitrust Tribunal—a court with exclusive jurisdiction over noncriminal governmental antitrust proceedings. After years of discussions on the appropriate costs to be covered and different methodologies for setting interchange rates, the Israeli Antitrust Tribunal issued a decision in 2006 that the costs that could be considered in calculating interchange rates included those relating to financing the period between when the merchant is paid and when the issuer receives payment, and payment guarantee (including both costs involving losses due to cardholder fraud and costs related to prevention of such fraud). At the same time that this decision was reached, the two Visa issuers, along with Israel’s single MasterCard issuer, agreed with IAA to contract with merchants to accept both Visa and MasterCard transactions and to gradually reduce interchange rates. Under this agreement, interchange rates are to gradually drop from their October 2006 level of 1.25 percent to 0.875 percent by 2012. As of January 2007, interchange rates fell to 1.2 percent in keeping with the agreement. 
In addition, in accordance with the tribunal's decision, many of the categories based on merchant type will be eliminated. However, the transactions of government entities that accept cards in Israel will continue to be eligible for a lower interchange rate, also in accordance with the tribunal's decision, under the theory that government entities do not benefit from the payment guarantee because they have other ways of guaranteeing payment (for example, confiscating assets), and so the interchange fee charged on their acceptance transactions should not include that cost. Although the Antitrust Tribunal has temporarily approved this agreement, it has stated that final approval cannot occur until an independent expert appointed by IAA determines that the agreement is consistent with the tribunal's approved methodology for setting fees.

Given responsibility for ensuring the proper functioning of payment systems, the Banco de Mexico (the Mexican central bank) has been encouraging the use of more efficient means of payment. In 2004, the Banco de Mexico was granted specific authority to regulate interchange fees in response to concerns by legislators in that country regarding the amount that banks were charging for services as well as the lack of sufficient information for cardholders and merchants. Shortly after the 2004 law was passed, the Association of Mexican Banks, which establishes interchange rates in Mexico, undertook a review of interchange rates and, under the supervision of the Mexican central bank, began to develop a method to set them. In addition, the association and the central bank reviewed the way in which interchange rates applied to merchants. For example, five different interchange rates could be applied to transactions, depending on the merchant's expected annual sales volume, with merchants with higher sales volumes receiving lower rates.
Mexican central bank officials explained to us that they believed this led to discrimination against small merchants, and as part of the reforms, the bank association introduced new categories that were based on merchant type rather than size. To address interchange rates, the bank association, under the supervision of Banco de Mexico, established a method to set a "reference" interchange rate. In contrast to the cost-based approaches used by Australia and Israel, the bank association used a model that balances issuing and acquiring banks' profits (net of interchange) through the interchange fee. Prior to these developments, the interchange rates for credit cards averaged about 2.73 percent. Since that time, rates have declined. In February 2005, the association reduced the credit card interchange rate by an average of 43 basis points and also eliminated the highest bracket of rates for credit cards. Because some of the disadvantages of the previous system persisted despite this intervention, in October 2005 the association proposed a new mechanism for setting a reference interchange rate, which accounts for issuer and acquirer revenues and expected network growth in addition to issuer and acquirer costs. The association then adjusted the single reference rate to account for differences in merchant type, resulting in 22 different merchant categories, most of them with different applicable interchange rates. The association and the central bank continue to work together to refine this method. As of January 2008, the effective reference interchange rate for credit cards was lowered to 1.61 percent.

In the three countries we examined, incomplete information is available on the impact of actions to reduce interchange rates, but available data indicate that merchants appear to have benefited, while the impact on consumers has been mixed.
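The profit-balancing idea attributed above to the Mexican bank association can be illustrated with a deliberately simplified model: choose the fee that equalizes issuers' and acquirers' profits net of interchange. This sketch is our own stylization for exposition only (the association's actual method also incorporates revenues, costs, and expected network growth), and all figures are hypothetical.

```python
def balancing_fee(issuer_net, acquirer_net, volume):
    """Per-unit-of-volume fee that equalizes the two sides' profits.

    issuer_net / acquirer_net: each side's profit before interchange.
    The fee transfers fee * volume from acquirers to issuers, so solving
    issuer_net + f * volume == acquirer_net - f * volume for f gives
    the expression below.
    """
    return (acquirer_net - issuer_net) / (2 * volume)


# Hypothetical figures: acquirers earn 30, issuers earn 10, on a
# transaction volume of 1,000.
fee = balancing_fee(10.0, 30.0, 1_000.0)
print(fee)  # 0.01, i.e., a 1 percent reference rate
```

The sign of the result reflects which side is ahead: if issuers already earn more than acquirers, the balancing fee comes out negative, meaning a transfer in the opposite direction would be needed to equalize profits.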
Because the actions relating to interchange rates in these countries generally coincided with various other changes in credit and debit card markets, researchers' ability to isolate and measure the specific effects of interchange rate intervention has been limited. However, merchants in these countries generally appear to have received benefits in the form of lower merchant discount rates. Data on merchant discount rates for credit cards in Australia show a significant decline in these rates since the reforms were instituted and suggest that changes in interchange rates have been reflected in merchant discount rates. The Australian central bank reported that the average merchant discount rate for Bankcard, MasterCard, and Visa had fallen by around 62 basis points to 0.79 percent between the September quarter of 2003, just prior to the reforms, and the December quarter of 2007, which was greater than the decline in interchange rates over that period. Merchant discount rates for American Express and Diners Club cards, although not regulated by the central bank, also fell by 0.29 and 0.18 percentage points, respectively, between September 2003 and December 2007. In September 2007, the central bank estimated that, in the aggregate, merchants' costs for card acceptance over the previous financial year were about $920 million lower than they would have been absent the reforms. Similar reductions also have occurred in Mexico, where credit card merchant discount rates across all businesses declined an average of 8 percent from 2005 through 2006. According to information provided by IAA, average merchant discount rates have declined in Israel since 1998, especially for Visa cards; however, other factors may have contributed to the overall decline in merchant discount rates in Israel. For example, other regulatory actions relating to limiting merchant discount rates also were being taken during this period.
In addition, officials from the antitrust authority expressed the belief that the increased competition in the Visa issuing market since 1998 has contributed to the lower merchant discount rates.

Evidence relating to impacts on consumers since the interchange rate intervention in these countries is limited. In Australia, where the reforms have been in effect long enough to allow for some study, cardholders have experienced a decline in the value of credit card reward points for most cards and an increase in annual and other consumer credit card fees. For example, RBA estimated that average annual revenue from fees, such as cash advance and late payment fees, on bank-issued personal credit cards doubled from around $40 per account in 2002 to around $80 in 2006, although it did not estimate the total amount paid by all cardholders. RBA officials attributed these changes to their reforms of the credit card system. Although card users may receive fewer rewards and experience higher fees when using their cards, consumers in Australia who want to use cards to finance purchases may benefit from the lower-interest cards that issuers began increasingly offering after the reforms were implemented. Regulators indicated that banks altered their business models when interchange fees were reduced to focus more on attracting cardholders who carry a balance. This may have been due, in part, to decreased revenue from interchange fees. In addition, Australia's central bank has not been able to discern whether merchants have passed along their reduction in the costs of accepting cards—resulting from the reforms—in the prices charged for retail goods and services. An RBA official told us, however, that while such an effect would not likely be measurable, he believed competition among merchants would lead merchants to pass some portion of a reduction in their costs along to consumers.
RBA's assessment of the reforms' effects on overall welfare is positive, and it estimates that the welfare gains are likely substantial.

In addition to the impact on merchants and consumers in the three countries we examined, other developments in these countries' payment system markets have occurred since interchange rates were lowered. For example, in Australia, the central bank found that over the past few years, the number and value of debit card payments grew more quickly than those of credit card payments. The central bank stated that this difference reflects slowing growth in the number of credit card transactions—in part resulting from cutbacks in credit card rewards and the introduction of surcharges—as well as increasing growth in the number of debit card transactions due in part to new types of deposit accounts offered by banks that make debit card transactions more attractive. Additionally, the combined market share of MasterCard, Visa, and Bankcard decreased, and the combined market share of American Express and Diners Club correspondingly increased by about 1 percentage point to around 16 percent of the value of credit card transactions. The Mexican central bank reports that the number of credit and debit card payments increased significantly in the last few years. In addition, several new banks have entered the issuing and acquiring markets and concentration in these markets has decreased, although both markets still continue to be relatively concentrated compared to those of the United States. In Israel, IAA officials told us that too little time has passed to evaluate the effects of their reforms; however, they expect that the creation of a single interchange system will yield efficiency gains and promote competition for the benefit of consumers.

The extent to which similar actions to lower interchange rates in the United States might reduce costs to merchants and consumers is unclear.
While actions in the three countries examined appear to have reduced the costs to merchants for accepting cards, less information was available on the impact on consumers. In Australia, for example, costs for card users appear to have increased, but having these individuals experience higher costs could be considered more efficient and appropriate than merchants passing their card acceptance costs along to all consumers through higher prices for goods and services, as RBA concluded was occurring before the reforms. However, whether consumers choosing to make purchases with other forms of payment have experienced any benefits was not clear. In addition, variations in payment systems across the countries we studied suggest that interchange levels may not be the only relevant factor to consider when examining card costs in the United States compared with those of other countries. For example, although average interchange rates for credit cards in the United States are higher than the rates that have been set in the countries we reviewed, one industry group found in 2005 that the amount of the processing fee component included in the total merchant discount rate applied to credit card acceptance transactions in many other developed countries around the world is actually greater than in the United States. Therefore, comparing only interchange rates may not give an accurate picture of the relative costs of card acceptance to merchants. Further, because interchange rates are reportedly intended to balance costs across consumers, merchants, and issuing and acquiring banks, differences in interchange levels between the United States and other countries could be the result of different cost structures for the banks in these markets. For example, Israel has fewer than 10 card issuers, and officials at the Federal Reserve Bank of Kansas City estimated in 2006 that the four largest banks in Australia issued 55 percent of cards. 
In contrast, we reported in 2006 that the United States has more than 6,000 depository institutions that issue credit cards, and therefore the costs of issuing credit cards in this country could be different than in countries with many fewer issuing banks. Finally, the regulatory and legal structure in the United States differs from those of other countries. For example, unlike in Australia and other countries we reviewed, in the United States there is no entity specifically tasked with regulating or overseeing the competitive aspects of the interchange fee structure or the fees’ effects on consumers. To the extent that the imposition of interchange fees would constitute an anticompetitive or unfair business practice prohibited by the antitrust laws or the Federal Trade Commission Act, the Department of Justice (DOJ) and FTC, respectively, could take measures to ensure compliance with those laws. In 1998, DOJ sued Visa and MasterCard for alleged antitrust violations relating to the networks’ “exclusivity rules,” which prohibited member banks from issuing Discover or American Express cards. The court found that the exclusivity rules were a substantial restraint on competition in violation of the Sherman Act. Although the imposition of interchange fees was not found to violate the law, the trial court noted that the defendants’ ability to impose and change the fees was evidence of market power, which was an element in proving the anticompetitive nature of the exclusivity rules. Further, DOJ officials told us that under its authority to enforce the antitrust laws, DOJ is again looking into issues concerning the payment systems industry. (Also, as previously noted, interchange fees have been a factor in lawsuits alleging violations of the federal antitrust laws by credit card networks and related parties. In addition, private parties are pursuing civil actions that address interchange fees under these same laws.) 
FTC officials expressed to us the view that the FTC does not have authority to regulate interchange fees. Also, officials of the Board of Governors of the Federal Reserve noted that the Federal Reserve does not have a specific mandate to regulate interchange fees in the United States.

Many federal entities use cards to make purchases of goods and services needed for their operations, spending more than $27 billion on purchase, travel, and fleet cards in fiscal year 2007. Officials we interviewed from five federal entities that were high-volume users of cards for goods, travel, and automotive expenses told us that using cards reduces their administrative expenses, provides income from the rebates they receive from the issuing banks, and provides other benefits. Although generally citing few drawbacks to the use of charge cards, federal entity officials acknowledged challenges in controlling use of cards, but also noted that the data available on card use and tools provided by the issuing banks help them address these challenges.

More than 350 federal entities participate in GSA's SmartPay program—which provides purchase, travel, and fleet cards for these entities to use. Federal entities pay no direct costs for the general use of cards. According to card network officials, the banks that issue cards to federal entities are compensated in part by the interchange fees they receive when a government entity or employee uses a card to make a purchase. In fiscal year 2007, federal entities used cards to purchase more than $27 billion of goods and services. This represents an inflation-adjusted increase of 51 percent over fiscal year 1999 spending levels (see fig. 2). Most of this spending occurred using purchase cards, which account for nearly 70 percent of total federal entity card spending, while about one-quarter of card spending was done using travel cards and about 5 percent using fleet cards.
The number of transactions has also increased by 50 percent since 1999, from about 60 million transactions to over 90 million in 2007. However, the rate of growth of both spending and transactions has slowed in recent years. According to the Director of GSA's Office of Charge Card Management, the increases in spending and the number of transactions in the early years of the SmartPay program were due to entities adjusting their purchasing behaviors from previously used systems, such as purchase orders, and learning how to use their cards to make additional purchases. Although the number of transactions remained roughly constant between fiscal years 2002 and 2007, the average transaction value rose from about $240 to about $300, accounting for the growth in total spending during this time. According to the Director, the number of transactions has remained relatively stable in recent years because, for the most part, entities have transitioned from most of their previously used purchasing systems and are now making only small changes to their programs to improve efficiencies. The Director of GSA's Office of Charge Card Management also told us that card use by federal entities is expected to continue growing as the entities identify additional ways of using cards and implement new payment technologies. For example, officials from the Department of Veterans Affairs (VA) told us that they are working with the bank that issues the department's purchase cards to find new ways to increase card usage. They explained that in 2003 they developed a process for making payments through the card system to non-VA medical providers for services provided to veterans who are unable to visit a VA center for medical care, reducing the number of checks they must issue and increasing both the number of electronic payments made and their card use rebates.
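The average-transaction-value arithmetic cited above (total card spending divided by the number of transactions) can be checked directly; the inputs below are the approximate fiscal year 2007 totals from this report.

```python
def average_transaction_value(total_spending, num_transactions):
    """Average dollar value per card transaction."""
    return total_spending / num_transactions


# Roughly $27 billion of spending across just over 90 million
# transactions in fiscal year 2007, per the figures cited above.
print(round(average_transaction_value(27e9, 90e6)))  # 300
```

This is consistent with the report's observation that, with the transaction count roughly flat, growth in average transaction value accounted for the growth in total spending.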
Additionally, officials stated that VA is reviewing its purchase records to attempt to shift more purchasing to vendors that accept cards. Similarly, the U.S. Army has developed an automated payment system that uses purchase cards for most of the $400 million per year it pays schools and other institutions for soldiers’ tuition assistance. GSA officials also expect the new products and services that will be available under the SmartPay 2 program will lead to increases in overall card spending. Some of these products include prepaid cards, contactless cards, and cards in foreign currencies. According to federal entity officials we spoke with, one of the primary benefits associated with card usage is the administrative cost savings compared with procurement methods that card usage has partially replaced, such as purchase orders, imprest funds, and blanket purchase agreements. For example, obtaining goods or services under a purchase order system requires that a purchase request be filled out and approved, then sent to a procurement office, which issues it to a vendor. When government entities use a card, however, goods or services can be directly purchased by cardholders, who then review their statements at the end of the billing cycle and forward the statement to an approving official. Officials from the Department of Agriculture said that if cards were not used, staff would need to complete purchase orders for each of the 1.5 million transactions per year that currently are made using purchase cards. Officials from the Department of Homeland Security estimated that the department would require four to five times the current number of staff who operate its travel card program if it paid for travel expenses without cards. In addition, officials at the Department of Agriculture stated that new tools, such as an automated process to reset charge card passwords, may further reduce the costs of administering their program. 
Estimates of per-transaction administrative cost savings from card usage vary, making it difficult to estimate total administrative cost savings. GSA estimated total administrative cost savings from card use in fiscal year 2006 to be $1.7 billion. An official from GSA told us that this estimate was based on per-transaction savings estimates by the Purchase Card Council. In 1994, the council, an interagency group, asked 17 civilian government organizations to perform a detailed cost-benefit analysis comparing the use of purchase orders versus purchase cards for transactions of $2,500 and below. The per-transaction savings estimates for the 17 organizations ranged from $1.42 to more than $142, with an average of about $54. More recently, in a 2006 research study, the Association of Government Accountants surveyed four civilian agencies with an approach similar to that of the Purchase Card Council and reported savings estimates of $60 to $166 per transaction, with a weighted average of about $87. In comparison, a 2005 survey of almost 1,300 purchase card program administrators from corporations, nonprofits, and government entities found, for state and federal government entities, a $53 administrative cost savings per transaction compared to purchase orders. Finally, a 1997 analysis by the U.S. Army Audit Agency showed that the average cost to the U.S. Army of processing a purchase order was about $155 compared to about $62 for a card, a savings of about $93 per transaction.

Another benefit of card use for federal entities is the receipt of rebates from the banks that issue their cards. Rebate amounts, which, after adjusting for inflation, have almost doubled since fiscal year 2002 to $175 million in fiscal year 2007 (see fig. 3), are based on a number of factors, mainly the volume of net spending on cards and how quickly balances on the cards are paid.
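To show how per-transaction estimates like those above scale into an aggregate figure such as GSA's $1.7 billion, the sketch below multiplies each study's per-transaction saving by an assumed annual transaction count. The 30 million count is hypothetical, chosen only for illustration.

```python
def total_savings(per_transaction_saving, num_transactions):
    """Aggregate administrative cost savings from card use, in dollars."""
    return per_transaction_saving * num_transactions


# Per-transaction savings estimates cited above, in dollars.
estimates = {
    "Purchase Card Council (1994 average)": 54,
    "Association of Government Accountants (2006 weighted avg.)": 87,
    "Purchase card administrator survey (2005)": 53,
}
assumed_transactions = 30_000_000  # hypothetical annual count
for source, saving in estimates.items():
    total = total_savings(saving, assumed_transactions)
    print(f"{source}: ${total / 1e9:.2f} billion")
```

At this assumed volume, the council's $54 average implies roughly $1.6 billion, illustrating how sensitive any aggregate estimate is to both the per-transaction figure and the transaction count used.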
GSA establishes a minimum rebate rate that federal entities should receive, but entities can choose to negotiate with their issuing banks for additional amounts. Between 1998 and 2007, the minimum rate was 6 basis points of the net volume of spending on the cards, while under the SmartPay 2 program, the minimum rebate rate will increase to 8 basis points. A GSA official stated that typically in federal entities’ negotiations with issuing banks, the rebate rate is increased as an incentive for an entity to choose a particular bank to issue its cards. According to the GSA official, however, some entities negotiate for specialized services rather than increased rebate amounts, and GSA encourages agencies to examine their programs holistically when negotiating terms. Federal entities differ in how they use their rebates. Two of the federal entities we spoke with return the rebates directly to the location that originated the relevant transaction, one adds the rebates into general income for the entity, and one other allocates rebates to a working capital fund for initiatives of general benefit to the entity. Officials from federal entities also cited several other benefits associated with using cards to make purchases. For example, officials from several entities told us that the increased data on purchases that is available to them by using charge cards allows for better management and/or tracking of spending. According to officials at the Department of Agriculture, purchase card data allowed them to examine their purchasing patterns and identify opportunities for savings. They explained that by using purchase cards to buy office supplies, they received data on the transactions, which they used to negotiate a contract with a vendor to buy supplies in bulk that resulted in millions of dollars in savings per year. 
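The rebate arithmetic described above is a basis-point rate (hundredths of a percent) applied to net card spending; the sketch below compares the SmartPay minimum of 6 basis points with the SmartPay 2 floor of 8. The $1 billion spending figure is hypothetical.

```python
def rebate_amount(net_spending, rate_basis_points):
    """Rebate owed at the given basis-point rate (1 bp = 0.01 percent)."""
    return net_spending * rate_basis_points / 10_000


# Hypothetical $1 billion in net card spending.
spend = 1_000_000_000
print(rebate_amount(spend, 6))  # 600000.0 under the SmartPay minimum
print(rebate_amount(spend, 8))  # 800000.0 under the SmartPay 2 minimum
```

An entity that negotiated its rate above the floor would apply the same calculation with its higher basis-point figure.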
Officials from several entities also told us that cards allow them to make purchases more quickly and/or more conveniently than previously used methods of purchasing. For example, officials from one entity told us that once the approval process is completed for a particular purchase, it can be made immediately, whereas previously used methods take a longer time to complete. According to officials from another entity, the ability to obtain cash advances on cards benefits them because it eliminates the need for imprest funds, which, according to officials from a different entity, are harder to monitor for fraud. Other benefits cited by officials from one entity included compensating vendors doing business with the government more quickly and greater ability to resolve disputes with vendors because charges can be reversed until the dispute is resolved. Officials at the federal entities with whom we met cited only a few drawbacks associated with the use of cards, though officials from some entities mentioned the risk of fraud and misuse. However, these officials told us that the risk of these occurrences is less than or equal to that under previously used procurement systems. Although the instances of fraud and misuse on cards may be infrequent, we and several inspectors general have reported internal control weaknesses in charge card programs at federal entities and instances of fraud and abuse. For example, in 2001 and 2002 we issued reports on control weaknesses in purchase card programs at the Air Force, Army, and Navy. The reports contained over 100 recommendations targeted at improving the design and implementation of controls over card use and establishing guidelines for disciplining those who misuse their government purchase cards. In 2003, we reported that the military services had begun or implemented nearly all of those recommendations, some of which were included in legislative requirements for the Department of Defense. 
In addition, earlier this year we reported on breakdowns in internal controls in various federal entity purchase card programs, which in some instances resulted in fraudulent, improper, and abusive use of purchase cards. For the most part, fraud and misuse can be limited through strong internal controls in card programs of federal entities. GSA and OMB have issued guidance on internal controls intended to reduce the risk of misuse of cards. For example, GSA develops guidance through training courses for federal entities and publishes guidelines for oversight and information on detecting misuse and fraud. Additionally, OMB has issued several memorandums related to oversight of card programs. For example, a 2002 OMB memorandum provided that each federal entity review the adequacy of its internal controls for purchase and travel card expenditures, and required entities to submit action plans detailing any risks associated with these programs and identifying the internal controls that will be used to manage these risks. In 2005, OMB also issued an appendix to its 1995 circular on management accountability and control, which consolidated and updated governmentwide card program requirements and included minimum requirements and best practices on several aspects of card programs. Some of the best practices to limit fraud and misuse identified in these guidance documents included implementing appropriate training for cardholders, approving officials, and other staff; deactivating cards that are not used; requiring charge card transaction or statement reconciliation on the part of the cardholder in a timely manner; ensuring managerial review of charge card purchases; and implementing policies outlining appropriate administrative and/or disciplinary actions for charge card misuse. 
Finally, officials from some of the federal entities we interviewed told us that the tools and data provided by their card-issuing banks helped them to limit the risk of misuse of cards by enabling them to track and limit the types of purchases made on the cards. For example, some entities block the use of cards at certain merchant types, to help ensure that the cards are used only for approved goods and services, or limit transaction amounts, cash withdrawals, and other activities. Officials from several entities noted that the data on card transactions they receive from their issuing bank allow them to monitor for potentially fraudulent or inappropriate transactions. For example, an official from one entity told us that the data allowed it to identify suspicious transactions based on specified dollar amounts, charges to certain vendors, and other types of transactions that could involve misuse. Officials from another entity noted that security features on cards help identify suspect charges by generating alerts for questionable transactions and by sending an e-mail to the cardholder every time a transaction occurs on his or her account in order to verify whether the transaction was approved by the cardholder. Federal entities’ acceptance of credit and debit cards provides a number of benefits, including client and customer convenience, but also entails costs. In collecting over $27 billion in revenue via cards in 2007, the transactions of federal entities included within the scope of this report resulted in more than $430 million in merchant discount fees, including at least $205 million in interchange fees (paid by entities that provided us with data specifically on interchange fees). Federal entities have undertaken a number of worthwhile actions to ensure that card acceptance costs are minimized. 
Further, FMS's program to comprehensively examine the revenue sources and collection mechanisms used by the many entities for which it performs collections shows great promise for achieving savings and identifying improvements for revenue collection, whether through cards or other mechanisms. Since its initiation on a pilot basis in 2007, this program has already identified potential cost savings or efficiency improvements at the eight entities FMS has examined to date. Because such savings would be recurring—in that they are applicable to future transactions—this program appears to be a valuable effort for FMS to complete in a timely manner. Ensuring that FMS's program implementation strategy has additional elements, such as a timeline for completing the reviews, cost savings estimates, and an assessment of the adequacy of the resources committed, will increase the likelihood of FMS achieving its goals as expeditiously as possible. Establishing a timeline for completion would allow FMS management to determine whether the program is being implemented expeditiously, including taking action if interim milestones are not being met. Generating cost savings estimates would appear to provide FMS with an additional tool for prompting entities to implement the improvements that are identified. Further, establishing a timeline for monitoring progress and estimating the cost savings to be realized could also allow FMS to better assess whether the level of resources committed to the program is appropriate. Perhaps most important, developing a full implementation strategy would allow FMS to identify potential cost savings for its collection activities—and federal entities to begin realizing them—more quickly, resulting in larger overall financial benefits to the government.
Other countries have examined the significance of interchange fees as part of credit and debit card payments, and several have taken or are considering actions to improve efficiencies and reduce costs involving their card payment systems. In one of the three countries we examined that has acted to limit interchange fees, available evidence suggests that the costs to merchants of accepting cards have declined, but the direct costs for consumers using cards may have increased. However, a number of factors may be influencing costs, and additional data and study would be needed to more definitively assess the effects of these actions. Further adding to the difficulty of estimating the potential effects of such actions in the United States are differences in the structure and regulation of the U.S. card payment market from those of the other countries we examined.

Federal entities have realized benefits from using cards to make purchases of needed goods and services, including supplies, travel expenses, and vehicle operating costs, and have taken actions to address the challenge of ensuring that cards are used only for intended purposes. In addition to increased efficiency in administrative processes and cost savings, in fiscal year 2007 card use also produced about $175 million in additional operating funds through the rebates provided by the banks that issue government cards. Agencies have acknowledged the continuing need to ensure adequate monitoring and to have controls in place to minimize fraudulent and abusive use of their cards. The ability to analyze data on card activities—a capability that the issuing banks are providing to agencies—appears to be a valuable tool, in that it helps federal entities manage their card activities and potentially reduces costs for the government.
In order to help expeditiously achieve savings to the government, including those associated with accepting cards, we recommend that the Secretary of the Treasury take steps to establish a full implementation strategy for FMS’s revenue collection review program. Such a strategy should include a timeline for completing the reviews, cost savings estimates associated with individual reviews, and an assessment of the adequacy of the resources committed to the program. We requested comments on a draft of this report from the Treasury and GSA. In an e-mail providing the Treasury’s comments, the manager of FMS’s Internal Control Branch noted that our report acknowledges that the acceptance of credit and debit cards has provided significant benefits to the agencies and the public, and that as agencies implement more e-commerce initiatives and interact more with the public through the Internet, credit and debit card acceptance is likely to continue to increase. While FMS did not directly address our recommendation, the manager agreed that FMS’s revenue collection review program, in which the acceptance of credit and debit cards is only one of many processes that will be evaluated, will help improve overall financial management at federal agencies. FMS also provided technical comments, which we have incorporated where appropriate. In addition, GSA reviewed a draft of this report and, in an e-mail from the Director, Internal Control and Audit Division, Office of the Controller, indicated agreement with the report’s contents regarding the SmartPay program. We are sending copies of this report to various other interested congressional committees and members and to the Secretary of the Treasury; the Administrator, General Services Administration; and other interested parties. We will also provide copies to others on request. This report will also be available at no charge on GAO’s Web site, http://www.gao.gov.
Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to examine (1) the benefits and costs, including interchange fees, associated with federal entities’ acceptance of cards as payment for the sale of goods, services, and revenue collection; (2) actions taken in countries that have regulated or otherwise limited interchange fees and their impact; and (3) the impact on federal entities of using cards to make purchases. To determine the benefits received by federal entities from the acceptance of credit and debit cards, we conducted semistructured interviews with five judgmentally selected federal entities that participate in Financial Management Service’s (FMS) Credit and Debit Card Acquiring Service, which is a governmentwide service that allows federal entities to accept payment by Visa, MasterCard, American Express, and Discover cards, as well as some types of debit cards. FMS provides this service to any executive, judicial, and legislative branch agency; government corporation; commission; board; or other federal entity that determines that the acceptance of cards is needed for revenue collection. Three of the five entities we contacted were among those that conducted the highest volume of card transactions, and two entities were among those that conducted the lowest volume of card transactions. We also reviewed and summarized studies and reports on the costs associated with processing different forms of payment to identify how these costs compared with the costs associated with card acceptance. To estimate the costs associated with federal entities’ acceptance of cards as payment, we collected data from as broad a range of entities associated with the federal government as possible. 
To determine the federal entities from which to collect data, we met with FMS, which provided us with data on all federal entities that participate in its Credit and Debit Card Acquiring Service. FMS provided us data on revenues collected through card transactions and the merchant discount, interchange, and processing fees it paid for these entities’ acceptance of cards for fiscal years 2005 through 2007. Additionally, FMS officials provided us with a list of Department of Defense and Department of Homeland Security nonappropriated fund instrumentalities that have independent authority to collect revenue and thus handle their own card collections. We reviewed data for these entities as well. These entities included Air Force Services Agency, U.S. Army and MWR Command, Army and Air Force Exchange Service, Marine Corps Community Services, Navy Exchange Service Command, Navy Morale, Welfare and Recreation, Coast Guard Exchange System, and Coast Guard Morale, Well-being, and Recreation. The U.S. Postal Service, Amtrak, and Smithsonian Institution operate their own card collection programs as well and do not use FMS’s services; thus, we collected data directly from those entities for fiscal years 2005 through 2007. The Smithsonian Institution and the Coast Guard Morale, Well-being, and Recreation were unable to provide us data on their card collection programs for this period because they do not maintain centralized program data on card revenues and fees. Instead, their card operations are decentralized among the various locations in which they operate. We also collected data from two private entities that accept tax payments made by credit and debit cards on behalf of the Internal Revenue Service (IRS). These two entities—Official Payments Corporation and LINK2GOV—provide this service at no cost to IRS and instead charge taxpayers who choose to use their services a convenience fee for doing so.
While we report the card acceptance fees associated with federal tax payments for these two entities, we do not include them in the total amount of card acceptance fees paid by federal entities. We did not attempt to identify additional federal entities beyond those listed here that may operate their own card collection programs and therefore pay fees related to card acceptance. From each of the entities from which we collected data, we requested three pieces of information for fiscal years 2005 through 2007: the total amount of revenue collected through credit and debit cards, the total amount of interchange fees assessed on card transactions, and the total amount of merchant discount fees (which encompass processing fees as well as interchange fees) assessed on card transactions. Only three entities—Amtrak, FMS, and the Postal Service—were able to separately identify the amounts they paid in interchange fees. For the other entities, we obtained the total amounts paid in merchant discount fees. The data we collected on the costs associated with card acceptance from the federal entities were the best data available; however, because of limitations in and differences among the record keeping of the entities, the data may not be complete for all years, may treat some costs inconsistently, and in one case contain estimated, rather than actual, values. For example, not all entities could provide us with complete data for all 3 fiscal years, and some entities treated certain costs inconsistently, such as including cost information for chargeback fees in their merchant discount fee data. In another case, a federal entity used data from other time periods to estimate some of the pieces of information we requested. We reviewed these data for completeness and accuracy and determined that none of the limitations materially affect the findings we report. However, due to these limitations, the figures presented are best viewed as approximations, or in some cases estimates, rather than precise values.
The dollar values for this objective are reported as current dollars. In addition to analyzing data from federal entities on the revenues and costs associated with card acceptance, we also reviewed some federal entities’ contracts or agreements with acquiring banks. To determine the interchange fees applicable to the federal entities’ card transactions, as well as the factors that cause interchange fees to vary, we reviewed MasterCard and Visa interchange rate schedules effective beginning October 2007 and April 2008. We also reviewed historical interchange rate schedules, provided by an acquiring bank, for rates that were effective August 2003 through April 2007. Additionally, we interviewed government officials responsible for settling card transactions, and officials from American Express Company, Discover Financial Services, MasterCard Incorporated, Visa Inc., and Fifth Third Bancorp—FMS’s current acquiring bank—to gather information on how government entities’ card acceptance fees are assessed and steps being taken to manage the fees. To examine actions taken in countries that have limited interchange fees, we reviewed available literature, contacted our counterparts (other audit institutions) in several countries, and interviewed Federal Reserve and industry officials to identify various countries where regulators or others had taken such actions. We judgmentally selected countries for further examination from among those identified based on three criteria: (1) actions had been taken that required actually determining interchange rates, (2) information on the methods used to determine the rates had been made available, and (3) efforts had been under way for sufficient time to allow for study. To allow for illustration of diverse approaches to limiting interchange fees, we sought to include countries that had taken different types of actions.
In addition, in order to study the impacts of these actions, we sought to include countries where the effects of the intervention had been the subject of empirical study. On the basis of these criteria, we selected three countries—Australia, Israel, and Mexico—for more detailed study. We conducted further literature reviews on these countries and conducted interviews with officials involved in the efforts to limit rates in each of these countries to learn about the measures taken, other measures that were considered, and any empirical data on the effects of the interchange limitation. Additionally, we met with officials from the Board of Governors of the Federal Reserve System, Department of Justice, and the Federal Trade Commission to learn how the regulatory and legal structure in the United States addresses interchange fees. To determine the impact on federal entities of using cards to make purchases, we obtained and analyzed fiscal years 1999 through 2007 General Services Administration (GSA) SmartPay program data on spending, transactions, and rebates received. On the basis of our review and testing of GSA’s data for a separate engagement, we determined that these data were sufficiently reliable for the purposes of this engagement. Dollar values have been adjusted for this objective to fiscal year 2007 constant dollars using the gross domestic product (GDP) price index. Additionally, we reviewed policies and procedures related to card usage from GSA and other government entities, as well as our prior reports, and academic and government reports. To obtain their views on the benefits and drawbacks of card usage, we interviewed officials from GSA, 5 federal entities that were among the 10 entities with the highest spending and most transactions on cards in fiscal year 2006, the bank that issued cards which accounted for the highest government card spending in fiscal year 2006, and one academic researcher with extensive work on government use of cards. 
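To make spending comparable across years, the report restates card purchase amounts in constant fiscal year 2007 dollars using the gross domestic product (GDP) price index, as described above. A minimal sketch of that conversion follows; the index levels used here are hypothetical placeholders, not actual GDP price index data.

```python
# Sketch of a constant-dollar conversion using a GDP price index.
# NOTE: these index levels are hypothetical placeholders, not real data.
gdp_price_index = {1999: 86.8, 2007: 106.2}

def to_fy2007_dollars(amount, fiscal_year):
    """Restate a current-dollar amount in constant fiscal year 2007 dollars."""
    return amount * gdp_price_index[2007] / gdp_price_index[fiscal_year]

# e.g., $100 million of fiscal year 1999 spending, restated
restated = to_fy2007_dollars(100.0, 1999)
print(f"${restated:.1f} million (FY2007 dollars)")
```

Scaling by the ratio of the base-year index to the spending-year index removes the effect of economy-wide price changes, so growth figures such as the 51 percent increase in card purchases reflect real changes rather than inflation.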
We conducted this performance audit from June 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following identifies key cases concerning interchange fees. In a 1980s case, NaBanco claimed that the setting of credit card interchange fees by Visa member banks constituted unlawful price fixing. NaBanco was a third-party enterprise that processed credit card transactions for its client acquiring banks, who were members of the Visa network. NaBanco alleged that the imposition of an interchange fee affected the amount it could collect for its service, and that under Visa’s rules the fee had an anticompetitive effect. The court ruled that NaBanco did not satisfy its burden of proof under a “rule of reason” analysis to show that interchange fees were a restraint of trade. In 1998, the Department of Justice (DOJ) sued Visa and MasterCard for alleged antitrust violations. In that proceeding, the government focused on two points. First, the department claimed that because the boards of Visa and MasterCard were dominated by many of the same banks, intersystem competition was reduced. Second, DOJ challenged the networks’ “exclusivity rules,” which prohibited member banks from issuing Discover or American Express cards. The court ruled against the government on the first claim (DOJ did not appeal) but found that the exclusivity rules were a substantial restraint on competition in violation of the Sherman Act.
The district court invalidated the exclusivity rules, enjoined the defendants from restricting banks from issuing other cards, and permitted Visa and MasterCard issuers to terminate any contractual obligations to abide by the exclusivity rules. Although the imposition of interchange fees was not found to violate the law, the court noted that the defendants’ ability to impose and change the fees was evidence of market power, which was an element in proving the anticompetitive nature of the exclusivity rules. In a class action pending in the U.S. District Court for the Eastern District of New York, merchants claim that interchange fees have an anticompetitive effect in violation of the federal antitrust laws. This case is a consolidation of numerous separate actions. As of October 2005, merchants had instituted 14 class action lawsuits in four separate districts against Visa and MasterCard and their member banks. According to the Magistrate Judge assigned to the consolidated case, as of February 2006 “some forty class action lawsuits” had been brought “on behalf of a class of merchants against the defendant credit card networks and certain of their member banks.” In March 2008, the U.S. Court of Appeals for the Ninth Circuit upheld the district court’s dismissal of a claim in which merchants alleged that the merchant discount fees set by Visa, MasterCard, Bank of America, Wells Fargo Bank, and U.S. Bank violated Section 1 of the Sherman Act, 15 U.S.C. § 1, and Section 16 of the Clayton Act, 15 U.S.C. § 26. The court ruled that the plaintiffs failed to plead evidentiary facts necessary to support such a claim. Specifically, the court found that the merchants failed to allege facts necessary to support their theory that the banks conspired or agreed with each other or with Visa and MasterCard to restrain trade.
With respect to the allegations against the banks, the court observed that “merely charging, adopting or following the fees set by a Consortium is insufficient as a matter of law to constitute a violation of Section 1 of the Sherman Act.” Further, the court concluded that the interchange fee set by Visa and MasterCard was not imposed directly upon the merchants as an anticompetitive measure but instead constituted a cost imposed on the banks, which the banks passed on to the merchants as a rational business decision. In addition to the individual named above, Dave Wood, Director; Cody Goebel, Assistant Director; Rudy Chatlos; Isidro Gomez; Christine Houle; Christopher Krzeminski; Marc Molino; Paul Thompson; Ann Marie Udale; and Ethan Wozniak made key contributions to this report.

Federal entities--agencies, corporations, and others--are growing users of credit and debit cards, as both "merchants" (receiving payments) and purchasers. Merchants accepting cards incur fees--called merchant discount fees--paid to banks to process the transactions. For Visa and MasterCard transactions, a large portion of these fees--referred to as interchange--goes to the card-issuing banks. Some countries have acted to limit these fees. GAO was asked to examine (1) the benefits and costs associated with federal entities' acceptance of cards, (2) the effects of other countries' actions to limit interchange fees, and (3) the impact on federal entities of using cards to make purchases. Among other things, GAO analyzed fee data and information on the impact of accepting and using cards from the Department of the Treasury (Treasury) and the General Services Administration, reviewed literature, and interviewed officials of major card companies and three foreign governments. By accepting cards, federal entities realize benefits, including more satisfied customers, fewer bad checks and cash thefts, and improved operational efficiency.
In fiscal year 2007, federal entities accepted cards for over $27 billion in revenues and paid at least $433 million in associated merchant discount fees. For those able to separately identify interchange costs, these entities collected $18.6 billion in card revenues and paid $205 million in interchange fees. Federal entities are taking steps to control card acceptance costs, including reviewing transactions to ensure that the lowest interchange rates--which can vary by merchant category, type of card used, and other factors--are assessed. While the Visa and MasterCard card networks have established lower interchange rates for many government transactions, some federal entities have attempted to negotiate lower ones, with mixed success. To identify savings from cards and other collection mechanisms, Treasury's Financial Management Service (FMS)--which handles revenues and pays merchant discount fees for many federal entities--initiated a program in 2007 to review each entity's overall revenue collections. FMS has identified potential efficiency and cost saving improvements at the eight entities it has reviewed thus far, but has yet to develop a full implementation strategy-- including a timeline for completing all reviews, cost savings estimates, and resource assessment--that could help expeditiously achieve program goals. Several countries have taken steps to lower interchange rates, but information on their effects is limited. Among the three countries GAO examined, regulators in Australia and Israel intervened directly to establish limits on interchange rates, while Mexico's banking association voluntarily lowered some rates. Since Australia's regulators acted in 2003, total merchant discount fees paid by merchants have declined, but no conclusive evidence exists that lower interchange fees led merchants to reduce retail prices for goods; further, some costs for card users, such as annual and other fees, have increased. 
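The fiscal year 2007 figures above imply rough effective fee rates, which can be checked with simple arithmetic. This is a back-of-the-envelope calculation based on the approximate amounts reported ("over $27 billion," "at least $433 million"), not a figure stated in the report.

```python
# Rough effective fee rates implied by the fiscal year 2007 figures above
# (revenues in $ billions, fees in $ millions; all amounts are approximate).
card_revenue_bn = 27.0             # "over $27 billion" collected by card
merchant_discount_fees_mn = 433.0  # "at least $433 million" in fees

interchange_revenue_bn = 18.6      # entities able to break out interchange
interchange_fees_mn = 205.0        # interchange fees for those entities

discount_rate = merchant_discount_fees_mn / (card_revenue_bn * 1000) * 100
interchange_rate = interchange_fees_mn / (interchange_revenue_bn * 1000) * 100
print(f"effective merchant discount rate: about {discount_rate:.1f} percent")
print(f"effective interchange rate: about {interchange_rate:.1f} percent")
```

The implied rates, roughly 1.6 percent for the overall merchant discount and 1.1 percent for interchange, are lower bounds given that the fee totals are minimums.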
Few data exist on the impact of the actions taken in Mexico (beginning in 2004) and Israel (beginning in the late 1990s). Because of the limited data on effects, and because the structure and regulation of credit and debit card markets in these countries differ from those in the United States, estimating the impact of taking similar actions in the United States is difficult. Federal officials cited various benefits from card use--which totaled more than $27 billion in fiscal year 2007, a 51 percent increase since fiscal year 1999 after adjusting for inflation--including the ability to make purchases more quickly and with lower administrative costs than with previously used purchasing methods. The banks that issue cards to federal entities also rebate a small percentage of their card purchase amounts; these rebates totaled $175 million in fiscal year 2007. Preventing inappropriate card use poses challenges, and GAO and others have identified inadequate controls over various agencies' card programs. However, tools and data provided by the issuing banks now allow entities to review transactions more quickly, increasing their ability to detect suspicious transactions.
The fiscal year 1997 DOD procurement appropriation is $43.8 billion, a reduction of over 67 percent from the $134.3 billion (in constant fiscal year 1997 dollars) appropriated in 1985. Many weapon acquisitions have been affected by this decline in the procurement budget. DOD’s primary response to the reduced budget has been to reduce annual procurement quantities of weapons in full-rate production and extend their production schedules. DOD buys new weapons in two phases: low-rate initial production (LRIP) and full-rate production. When in LRIP, according to 10 U.S.C. 2400, DOD is to buy minimum quantities of a new weapon. This legislation resulted from concern in the Congress about the large quantities of weapons bought before adequate testing. The purpose of LRIP is to (1) provide weapons for operational test and evaluation, (2) establish an initial production base for the weapon, and (3) permit an orderly increase in production before full-rate production begins. Operational test and evaluation is key to ensuring that a weapon’s capabilities operate as designed before full-rate production begins. At this time, field tests are done to demonstrate the weapon’s effectiveness and suitability for military use. After the weapon’s design has stabilized and the weapon’s capabilities are proven, the services enter full-rate production to begin buying proven weapons in economic quantities. In practice, DOD views low-rate production as any production prior to completion of initial operational tests and full-rate production as the production that follows these tests, with the terms low rate and full rate having little or no relevance to the annual quantity bought. We reviewed 6 weapons in LRIP and 22 weapons in full-rate production. (See app. I for a list of the weapons.) The 22 weapons in full-rate production represent those that in fiscal year 1996 had substantial ongoing production lines.
The six low-rate production weapons were ones in production in fiscal year 1996 with substantial planned follow-on full-rate production quantities. For the six weapons in low-rate production, we looked for increases in production rates before operational tests were completed and decreases in the planned future full production rates. For the 22 weapons in full-rate production, we compared DOD’s planned optimal production rates, costs, and schedules to those of actual full-rate production through fiscal year 1996 (see app. II). It is not uncommon for DOD to reduce the annual production quantities of proven weapons, stretching out full-rate production schedules for years. For 17 of the 22 proven weapons we reviewed, the actual production rates averaged 57 percent lower than originally planned. The decreases varied from 10 percent for the E-2C Hawkeye to 88 percent for the Standard missile system. For 12 of these weapons with reduced rates during full-rate production, program officials cited insufficient funding as a contributing reason for the lower rates and, therefore, the stretched-out production. As a result of reduced rates, production of the 17 weapons will take an average of over 8 years, or 170 percent, longer to complete than originally planned. The number of years the 17 weapons’ production schedules have been stretched out ranges from 1 year for the Avenger to 43 years for the Black Hawk helicopter based on current production rates. (See app. III for the reduced production rates on each of these weapons.) Examples of proven weapons with reduced annual production rates follow: At the extreme for slowed production is the Army’s Black Hawk helicopter. If the Army continues to buy the Black Hawk at the current rate, full-rate production will take almost 54 years to complete, about 43 years longer than originally planned.
The Navy’s production of the Tomahawk missile was to be completed in 9 years or by 1992, but instead it will take 15 years or until 1998, a 67-percent schedule increase. Originally, the Navy’s planned procurement rate was 600 Tomahawks annually; instead, it has averaged 276 missiles a year, a decrease of over 50 percent from the planned production rate. Because of their reduced annual production rates and stretched-out schedules, the acquisition of the 17 weapons we reviewed in full-rate production has cost nearly $10 billion more, through fiscal year 1996, than the program offices estimated based on their original planned production rates. Since 14 of the 17 weapons will still be in production beyond fiscal year 1996, the total increased cost at completion of these weapons could be significantly more than $10 billion. When the annual production quantity of a weapon is reduced, its unit cost generally increases because fixed costs are spread over a smaller quantity. This was the case for 14 of the 17 weapons we reviewed that had reduced production rates (see app. II). For example, the Navy planned to produce 48 T45 training aircraft annually at a unit cost of $8.7 million. Instead, an average of 12 T45s has been produced annually since full-rate production began in 1994, at a unit cost of $18.2 million. For the quantity produced in full-rate production through fiscal year 1996, T45 costs have increased from the original estimate by $345 million. When weapon systems are funded at their planned full production rates or higher, the unit cost of the weapon generally decreases, as illustrated in the following examples: The Army’s program office increased the quantities of its Global Positioning System (with an original planned annual rate of 14,000) from 11,000 to 18,500 during 4 years of full-rate production. As a result, the unit cost of the system decreased from $1,400 to $1,076.
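The inverse relationship between annual quantity and unit cost described above can be illustrated with a simple model in which unit cost equals a variable cost per aircraft plus an annual fixed cost spread over that year’s production. The split below is backed out from the report’s two T45 data points; it is an illustrative simplification, not GAO’s or the program office’s actual cost model.

```python
# Illustrative model: unit_cost(q) = variable + fixed / q, fit to the two
# T45 data points cited in the report. A simplification for illustration
# only, not the program office's actual cost structure.
q_planned, cost_planned = 48, 8.7  # planned: 48 aircraft/year at $8.7M each
q_actual, cost_actual = 12, 18.2   # actual: 12 aircraft/year at $18.2M each

fixed = (cost_actual - cost_planned) / (1 / q_actual - 1 / q_planned)
variable = cost_planned - fixed / q_planned

print(f"implied annual fixed cost: ${fixed:.0f} million")
print(f"implied variable cost per aircraft: ${variable:.2f} million")
```

Under this two-point fit, an implied $152 million per year of fixed costs spread over a quarter as many aircraft accounts for the unit cost roughly doubling from $8.7 million to $18.2 million.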
If annual production were increased, the Army could save up to an estimated $491 million on the remaining 109 Kiowa Warrior helicopters it needs to finish full-rate production. For each of the last 3 years, the program office has procured an average of 16 units a year at a unit cost of $10.22 million. According to Kiowa program officials, the most efficient annual production rate of 72 helicopters would reduce unit cost to $5.72 million. The practice of allocating funds during low-rate production to increase annual production quantities before successful completion of initial operational test and evaluation has frequently been wasteful. As we reported in November 1994, the consequences of buying large quantities of untested weapons are increased acquisition costs, the accumulation of unsatisfactory weapons that require costly modifications to meet performance requirements and, in some cases, the deployment of substandard weapons to combat forces. That report contained 12 illustrative examples describing the problems experienced when the weapons were tested, the major fixes required after significant quantities were bought and, in many cases, the deployment of substandard weapons to combat forces. (Those 12 examples are included in appendix IV of this report.) In one case, before the Army did any operational test and evaluation, a multiyear production contract was awarded for up to 10,843 trucks. Operational testing was suspended 2 months after it began because the trucks were found to be unreliable and therefore not operationally effective. Production continued while the contractor modified the truck design to correct deficiencies. By the time the trucks passed operational testing, over 2,000 trucks were produced, the majority of which required extensive remanufacturing to correct the deficiencies. Most program offices developed an acquisition strategy for both low-rate and full-rate production based on optimistic projections of available funding. 
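The Kiowa Warrior savings estimate cited above follows directly from the unit costs given: the difference between the current and the efficient-rate unit cost, multiplied by the remaining quantity.

```python
# Cross-check of the Kiowa Warrior savings estimate from the report's figures.
remaining_units = 109
unit_cost_current = 10.22    # $ millions each at the recent ~16/year rate
unit_cost_efficient = 5.72   # $ millions each at 72 helicopters a year

savings = (unit_cost_current - unit_cost_efficient) * remaining_units
print(f"potential savings: about ${savings:.1f} million")
```

The $4.5 million difference per helicopter across the 109 remaining units yields roughly $490.5 million, consistent with the report’s “up to an estimated $491 million.”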
As a result, the offices tended to overprogram the number of weapons that can be bought with the dollars available in DOD’s spending plan. As we have previously reported, the use of optimistic planning assumptions has led to program instability, costly program stretch-outs, and program terminations. Current DOD acquisition guidelines permit increasingly higher quantities of weapons in low-rate production to provide for the orderly transition to full-rate production. In addition, DOD’s acquisition culture encourages this practice to solidify organizational commitment to keep weapon acquisition programs moving and to protect them from interruption. In this regard, within DOD’s acquisition culture, a weapon acquisition manager’s success depends on getting results, and in acquisitions, results mean getting the weapon into production and into the field. The trend to reduce the full production rates from the original plans because of limited funds and to produce more quantities than are needed for testing during low-rate production increases procurement costs. For example, DOD increased the annual low-rate production quantities of the Army’s untested Longbow Hellfire Missile for fiscal years 1995, 1996, and 1997 from 0 to 352 and then 1,040, while the Navy’s full-rate production quantities of the Standard missile system for those fiscal years fell from 202 to 64 and then 127. Between fiscal years 1995 and 1997, low-rate production funding for the Longbow was increased from $41.2 million to $249.5 million, while full-rate funding for the Standard missile was reduced from $240.4 million to $197.5 million. The Navy originally planned to produce 2,160 Standard missiles a year during full-rate production over a period of 4 years. Instead, the Navy has averaged only 266 missiles a year, and at that rate it will take 21 years to complete production, 17 years longer than planned, and at a cost of $286 million more than estimated at the originally planned rate.
Many times, the services steadily increased the annual LRIP quantities, exceeding the number ultimately needed to complete operational tests and prove out the production line. The increase in annual quantities of weapons produced during low-rate production resulted in a substantial reduction of funds available for the production of proven weapons at planned rates. By minimizing the quantities of weapons procured during LRIP, DOD can reduce the risk associated with producing untested weapons and increase the funding available to produce other proven systems in full-rate production at planned rates, lowering their unit cost. For eight of the weapons we reviewed, the services’ procurement rates during LRIP were equal to or more than they were during full-rate production. For example, the program office for the advanced medium range air-to-air missile increased the quantities produced during low-rate production to 900 units annually. However, since 1992, when it completed operational tests and entered full-rate production, the missile has been produced at an annual rate of 900 or more only twice. In fact, from fiscal years 1997 to 2007, the program office plans to procure an average of only 338 units a year. Table 1 shows the remaining seven weapons with low-rate production quantities equal to or higher than full-rate quantities. DOD continues to generate optimistic full-rate production plans that are rarely achieved. One example where this situation could occur and where planned increases in low-rate production quantities may be unnecessary is the Navy’s F/A-18E/F system. The Navy plans to procure 72 F/A-18E/F aircraft over 3 years during LRIP—12 in 1997, 24 in 1998, and 36 in 1999 and then procure 72 each year during peak full-rate production years. However, the Congress has questioned the affordability of this full production rate and has directed DOD to calculate costs based on estimates of 18, 24, and 36 aircraft a year. 
In addition, the conferees on the Omnibus Consolidated Appropriations Act for Fiscal Year 1997 asked for calculations based on 48 aircraft a year. The increased quantities procured during low-rate production are not necessary to transition to full-rate production, especially if the number of aircraft procured during full-rate production drops significantly. Even if the Navy buys the aircraft at the rate originally planned, production rate increases to reach peak full rates could occur after the system has been operationally tested, rather than before. The same optimistic planning is reflected in the Air Force’s F-22 program. The Air Force plans to contract for F-22 aircraft under four low-rate buys of 4, 12, 24, and 36 aircraft for a total of 76 aircraft at an estimated cost of nearly $11 billion prior to completing initial operational test and evaluation and entering full-rate production at 48 aircraft a year. During LRIP, DOD is supposed to restrict the number of weapons produced to the minimum quantity necessary to conduct operational testing, establish the initial production base, and allow for an orderly increase into full-rate production. However, because DOD often budgets available funding for unnecessary increases in low-rate production quantities of unproven weapons, it is rarely able to buy proven weapons at originally planned full rates. When funding is insufficient to produce proven weapons in full-rate production at optimum levels and therefore to complete programs in a timely manner, it is not cost-effective to use limited funds to unnecessarily increase production of untested weapons whose designs are not yet stabilized. This wasteful practice could be minimized by shifting increases in annual production rates from the low-rate production phase to the beginning of full-rate production.
We recommend that the Secretary of Defense revise DOD’s weapon acquisition policies to require that (1) annual quantities of weapons bought during LRIP be limited to the minimum necessary to complete initial operational test and evaluation and prove the production line and (2) rates and quantities not be increased during low-rate production to ease the transition into full-rate production unless DOD clearly establishes that the increase is critical to achieving efficient, realistic, and affordable full production rates and can be accomplished without affecting the efficient production of proven systems. We also recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition and Technology and the Under Secretary of Defense (Comptroller and Chief Financial Officer) to submit future budgets that place priority on funding the efficient production of weapons in full-rate production. In commenting on a draft of this report, DOD agreed with the principle that premature commitment to LRIP is unwise and that LRIP should not be used to buy equipment that is known not to work. DOD believes the existing policy as set forth in the requirements of 10 U.S.C. 2400 (enacted in 1995) and DOD Directive 5000.2-R (issued in 1996) adequately provides an acquisition structure that allows DOD to focus on minimizing LRIP quantities, while providing the flexibility to maintain an adequate industrial base capability (e.g., ramp-up) to meet the interests of national security. DOD also stated that it makes every effort to fund full-rate production programs to the maximum extent possible within funding availability, changing priorities, and program realities.
Concerning our recommendations, DOD commented that (1) its current acquisition policies fully comply with the intent of the policy proposal to minimize the quantities produced under LRIP, (2) increasing production rates (ramping up) during LRIP allows the contractor to hire and train its production team and maintain a production workforce while operational testing is being conducted, and (3) it makes every effort to fund full-rate production programs, but fiscal realities driven by a fluid environment are a serious challenge that will continue to affect the stability of major defense acquisition program production rates and quantities. Although efforts have been made in the last year to reduce the quantities bought under LRIP, our review indicates that DOD is still buying more than the minimum quantities needed. Allowing the ramp-up of quantities under LRIP to hire, train, and maintain a workforce for a still unproven product diverts funding from contractors producing proven products and their workforces by reducing their production rates and quantities. DOD’s comments have not addressed (1) the negative effect of the current approach on the industrial base, (2) the cost implications, and (3) the delayed deployment of proven weapons. Cost implications include the added funding that will be needed to correct the problems in products produced before operational testing is completed and the increased costs from stretching out the production run of proven products. Stretched production schedules can also undermine national security interests by delaying deployment of needed proven systems to field units. If the LRIP rate “ramp-up” were delayed until after the completion of operational testing, initial quantities of unproven systems would be reduced and additional funding would become available to buy the proven systems at more efficient rates.
Although there are many reasons why weapon quantities and funding for full-rate production should be changed (such as changes in threats and technology), as long as the existing requirement remains valid, we believe priority should be given to funding the already tested, less risky full-rate systems at the most efficient rate possible. DOD’s comments are presented in their entirety in appendix V. To quantify the number of weapons being bought below their planned full production rates, we screened the line items contained in the February 1995 Procurement Programs document. We determined that 88 percent of the budget for fiscal year 1996 was concentrated into 300 line items. We then reviewed the 300 line items, primarily using budget back-up books’ documentation, to determine which of those items were being bought on an annual repetitive production basis, which is more conducive to increased rate production. We narrowed our universe to 83 line items, or 80 weapons, by excluding line items that were multiple procurement items such as spares, modification programs if the work was being done at a depot, advance procurements, commercial products, and items that did not have a repetitive annual production profile, such as a single one-time procurement. As we obtained additional program-specific data on the 80 weapons, we determined that an additional 52 weapons should be excluded based on the original criteria. Thus, our final universe was 22 weapons in full-rate production and 6 weapons in LRIP with a total cost of about $6.5 billion in fiscal year 1996 procurement funds. We collected cost and schedule data for all 28 weapons through interviews and documents from program officials for each weapon, service- and DOD-level acquisition officials, a DOD Comptroller office official, and a defense contractor. We did our review primarily at the individual program offices responsible for procuring the weapons. 
We performed our review from August 1995 through November 1996 in accordance with generally accepted government auditing standards. This report contains recommendations to you. The head of a federal agency is required under 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight no later than 60 days after the date of the report. A written statement must also be submitted to the Senate and House Committees on Appropriations with an agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to appropriate congressional committees and the Secretaries of the Army, the Navy, and the Air Force. We will also make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI.

[Appendix table of weapons, including the Standard missile, the Rolling Airframe Missile (RAM), and the RAM Guided Missile Launch System (GMLS)]

The C-17’s reliability is significantly less than expected, and the system cannot meet current payload/range specifications. Also, while known problems with the wings, flaps, and slats are being fixed, other problems continue to emerge. (GAO/T-NSIAD-94-166, Apr. 19, 1994). Despite the poor operational test and evaluation (OT&E) results, the Air Force continued full-rate production and had acquired about 750 systems at a cost of over $570 million, as discussed in a classified GAO report. All 65 systems were produced under LRIP at a cost of $256 million, before any OT&E was conducted. Because of performance problems, most of the jammers were placed in storage and only 24 were installed on aircraft. One year later, the 24 jammers were deactivated because of poor performance. (GAO/NSIAD-90-168, July 11, 1990). Through 1993, 331 of the 514 planned units were acquired under LRIP.
However, the system has encountered significant software problems, which have delayed completion of development testing by about 2 years. OT&E has not yet started. After the Air Force bought most of the total quantity of units under LRIP, tests found serious performance problems. As a result, the system was deployed with the receiver/processor inoperative due to a lack of software. Other deficiencies were also present. (GAO/NSIAD-90-168, July 11, 1990). Before the Air Force conducted OT&E, 72 test sets were procured under LRIP at a cost of $272 million. Later testing showed that the equipment would not meet requirements, and the units were put in storage. The Director, Operational Test and Evaluation (DOT&E) recommended that jammer production be stopped because of poor OT&E results. However, the system had already entered and remained in full-rate production. We later found that most of the 24 jammers deployed to a tactical fighter wing had been placed in storage. (GAO/NSIAD-90-168, July 11, 1990). OT&E showed that the F-14D was not sufficiently developed and lacked critical hardware and software capabilities. The program was terminated after 55 units were produced. (GAO/IMTEC-92-21, Apr. 2, 1992). One year into LRIP, OT&E found that the T-45A was not effective in a carrier environment and was not operationally suitable because of safety deficiencies. Subsequent major design changes have included a new engine, new wings, and a modified rudder. (GAO/NSIAD-91-46, Dec. 14, 1990). The Navy procured and deployed Pioneer as a nondevelopmental item and without testing it. Numerous problems ensued, including engine failures, landing difficulties, and a cumbersome recovery system. Many modifications were required to bring Pioneer up to a minimum essential level of performance. Before the Army did any OT&E, a multiyear production contract was awarded for up to 10,843 trucks.
Subsequent OT&E was suspended because the vehicles were found to be unreliable and not operationally effective. However, production continues. (GAO/NSIAD-93-232, Aug. 5, 1993). OT&E showed the system to be not operationally suitable. Despite the need for design modifications to correct reliability and maintainability problems, full-rate production was approved.

The following is GAO’s comment on the Department of Defense’s letter dated December 26, 1996. 1. Appendix IV provides examples that illustrate how buying large quantities of unproven systems during LRIP has been costly. All costs are reported in fiscal year 1996 constant dollars unless otherwise indicated. We have modified the report to recognize the fact that there may be a number of valid reasons for changing the quantities and funding for full-rate production, but if the existing requirement is still valid and everything else is equal, we believe priority should be given to buying the proven systems over the unproven.

Major contributors to this report: Arthur Cobb and Daniel Hauser.
GAO reviewed the Department of Defense's (DOD) weapons acquisition procedures, focusing on: (1) DOD's practice of reducing the annual production of weapons below planned optimum rates during full-rate production; (2) the reasons for this practice; and (3) the effect of this practice on the costs and availability of weapons. GAO found that: (1) DOD has inappropriately placed a high priority on buying large numbers of untested weapons during low-rate initial production to ensure commitment to new programs and thus has had to cut by more than half its planned full production rates for many weapons that have already been tested; (2) this practice is wasteful because DOD must often modify, at high cost, the large numbers of untested weapons it has bought before they are usable and must lower annual buys of tested, proven weapons, stretching out full-rate production for years due to a lack of funds; (3) GAO has repeatedly reported on DOD's practice of procuring substantial inventories of unsatisfactory weapons requiring costly modifications to achieve satisfactory performance and, in some cases, deployment of substandard weapons to combat forces; (4) GAO found the practice of reducing planned full production rates to be widespread; (5) primarily because of funding limitations, DOD has reduced the annual full-rate production for 17 of the 22 proven weapons reviewed, stretching out the completion of the weapons' production an average of 8 years longer than planned; (6) according to DOD's records, if these weapons were produced at their originally planned rates and respective cost estimates, the quantities produced as of the end of fiscal year 1996 would have cost nearly $10 billion less; (7) at the same time, DOD is funding increased annual quantities of weapons in low-rate production that often are in excess of what is needed to perform operational tests and establish the production base; (8) if DOD bought untested weapons during low-rate initial production at minimum
rates, more funds would be available to buy other proven weapons in full-rate production at more efficient rates and at lower costs; and (9) this would reduce costly modifications to fix substandard weapons bought in low-rate production and allow full-rate production of weapons with demonstrated performance to be completed and deployed to combat forces earlier.
Under VA’s Disability Compensation program, the agency can award total (100 percent) disability compensation to veterans who cannot work because of service-connected disabilities, even though their schedular rating is less than 100 percent. Specifically, VA will consider a veteran for IU benefits if the veteran has a single disability rated at least 60 percent or multiple disabilities rated at least 70 percent (with at least one disability rated at 40 percent or more) and there is some evidence that the veteran cannot work. In some instances, veterans with lower ratings may also be evaluated for and granted IU eligibility. As shown in table 1, veterans receiving an IU total disability compensation rate may receive substantially greater benefits than they would have received based on their schedular rating. IU benefits, like other VA disability compensation benefits, are exempt from federal taxation. VA created IU benefits in 1934. By statute, VA is required to adopt and apply a schedule of ratings to compensate veterans for reductions in average earning capacities resulting from service-connected medical conditions. This statute calls for compensation benefits to be tied to a schedule of ratings that is to be based, “as far as practicable,” upon the average impairments of earning capacity resulting from such injuries in civil occupations. The statute does not mention individual unemployability as a basis for granting benefits. However, VA regulations allow the agency to grant total (100 percent) disability compensation to a veteran who is unemployable due to his or her service-connected disabilities but who does not meet the requirements for a total disability using the rating schedule.
Veterans can receive IU benefits when their service-connected disabilities result in their inability to obtain or retain “substantially gainful employment,” which VA defines as employment that is “ordinarily followed by the nondisabled to earn their livelihood with earnings common to the particular occupation in the community where the veteran resides.” Staff at VA’s 57 regional offices make virtually all eligibility decisions for VA disability compensation benefits, including IU benefits. These regional offices employ non-medical rating specialists to evaluate veterans’ eligibility for these benefits. Upon receipt of an application for compensation benefits, the rating specialist would typically refer the veteran to a VA medical center or clinic for an examination. Based on the medical examination and other available information, the rating specialist must first determine which of the veteran’s conditions are (or are not) service-connected. For service-connected conditions, the rating specialist compares the diagnosis with the rating schedule to assign a disability rating. (App. III provides examples of selected impairments from VA’s disability rating schedule.) As figure 1 shows, the service-connected impairments of IU beneficiaries include a wide range of medical conditions. Multiple disabilities will result in a combined degree of disability, which is expressed as a percentage and represents the overall effect on a veteran of all his or her service-connected disabilities. (App. IV explains VA’s process for compiling combined ratings.) VA’s IU determinations are subject to appeal to the Board of Veterans’ Appeals and subsequently the U.S. Court of Appeals for Veterans Claims. VA rating specialists initiate IU evaluations when a veteran submits an application for IU benefits or his or her application for compensation benefits contains evidence of unemployability.
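The combined-rating computation mentioned above (and detailed in App. IV) follows the "whole person" method codified at 38 C.F.R. § 4.25. A simplified sketch, assuming each rating is applied, most severe first, to the remaining non-disabled portion, with the final value rounded to the nearest 10 percent (values ending in 5 rounded up):

```python
import math

def combined_rating(ratings):
    # Combine service-connected ratings per the whole-person method:
    # each rating disables a percentage of what remains after the
    # previously applied ratings.
    combined = 0.0
    for r in sorted(ratings, reverse=True):
        combined += (100 - combined) * r / 100.0
    # Final combined value rounds to the nearest multiple of 10,
    # with values ending in 5 adjusted upward.
    return int(math.floor(combined / 10.0 + 0.5) * 10)

print(combined_rating([60, 40]))  # 60 + 40% of the remaining 40 = 76 -> 80
```

This illustrates why two ratings of 60 and 40 percent combine to 80 percent rather than summing to 100; the actual regulation publishes the results as a lookup table.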
In all cases, before granting benefits, rating specialists must evaluate the impact that the veteran’s service-connected disability has had on his or her ability to perform substantially gainful employment, which for decision-making purposes is generally interpreted as employment that is more than “marginal employment.” VA generally defines marginal employment as employment for which the worker’s annual earned income is at or below the poverty threshold for one person established by the U.S. Census Bureau—$10,160 for 2005. However, marginal employment may also be held to exist, on a case-by-case basis, when a veteran maintaining employment at a sheltered workshop or family business receives annual earnings above the poverty threshold. VA rating specialists are to rely on various sources of information for the evidence needed to support such a determination, including an employment and earnings history furnished by the claimant, basic employment information from the claimant’s employers (if any), and a medical exam report from VHA. If the claimant had received vocational rehabilitation assistance from VA or disability benefits from SSA, the rating specialist might also seek information on these services or benefit decisions. Finally, under its regulations, VA rating specialists are not to consider age as a factor in determining eligibility for IU benefits; thus, veterans of any age may be determined eligible for IU benefits. When we analyzed VA data to determine the ages of all veterans receiving IU benefits as of October 2005, we found that the majority of veterans receiving IU benefits were age 60 or older. Our analysis of VA data shows that 219,725 veterans were receiving IU benefits in October 2005. As shown in figure 2, 51 percent of IU beneficiaries were age 60 or older and 38 percent were age 65 or older. In 1987, we issued a report that identified several problems with VA’s administration of IU benefits and made several recommendations for improvement.
We found that VA did not require sufficient medical and vocational evaluation of IU claimants to support award decisions. To address this weakness, we recommended that, in cases involving IU benefits, VA ensure that its (1) examining physicians provide observations on how the service-connected medical condition impairs the veteran’s functional capabilities and (2) vocational counselors provide vocational information, including an assessment of how the veteran’s service-connected condition affects job skills and employment potential. Furthermore, we identified potential overpayments to IU beneficiaries and suggested that the Congress provide VA with access to Internal Revenue Service (IRS) earnings information to monitor IU eligibility and help detect and prevent overpayments. Since we made the recommendations, the Congress provided the agency with access to IRS earnings data, which the agency is using to monitor compliance with the ongoing earnings limit. However, to date, VA has not implemented our recommendation that its vocational counselors assess the veteran’s job skills and employment potential so that this information could be used in the IU decision-making process. More recently, two studies highlighted the need for fundamental changes to VA’s disability decision making. In 2004, the VA Vocational Rehabilitation and Employment Task Force study recommended that vocational professionals from VA’s VR&E should provide more complete vocational assessments to assist in disability and vocational decisions. More specifically, the task force recommended that VR&E perform a functional capacity evaluation that would identify what work a veteran could do in the paid economy despite his or her disabilities. Also, a 2005 VA Inspector General study pointed to the need for improved IU initial and ongoing eligibility determinations.
The VA Inspector General found that some veterans receiving IU benefits may not have been entitled because VA had not aggressively used IRS and SSA records and developed proper controls to monitor their income through the verification process. In addition, the Veterans’ Disability Benefits Commission was created by the National Defense Authorization Act of 2004 (Pub. L. No. 108-136) to independently evaluate compensation to veterans and their survivors for disabilities and deaths attributable to military service. Among other things, the Commission plans to include IU benefits in its review. The law requires the Commission to provide a report to the Congress, with recommendations as needed, which addresses the appropriateness of benefits and the standards for granting benefits. In a recently issued report, we noted that additional benefits are available to veterans with total disabilities. In particular, awarding IU benefits increases a veteran’s monthly disability compensation. The increase in the monthly compensation for IU beneficiaries is the difference between the compensation at the veteran’s schedular rate and the compensation at the 100-percent rate. For example, a schedular rating of 60 percent would entitle a veteran to $839 per month in 2005. The veteran, however, would be entitled to $2,299 per month if granted IU benefits—a difference of $1,460 per month or $17,520 per year. The lower the veteran’s schedular rating, the higher his or her increase in monthly disability compensation when awarded IU benefits. When the present value of IU benefits is considered over a veteran’s lifetime, the value of these added benefits depends upon the veteran’s schedular rating at the time he or she begins receiving IU benefits and the length of time these benefits are received. 
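The dollar figures in the 60-percent example above can be reproduced directly; the monthly amounts are the 2005 rates cited in this report, and other schedular ratings would substitute their own monthly compensation amounts.

```python
def iu_increase(schedular_monthly, iu_monthly=2299):
    # Added compensation from being paid at the 100-percent (IU) rate
    # instead of the schedular rate: per month and annualized.
    # 2005 rates from the report: $2,299 at 100 percent, $839 at 60 percent.
    monthly = iu_monthly - schedular_monthly
    return monthly, monthly * 12

print(iu_increase(839))  # (1460, 17520): $1,460 a month, $17,520 a year
```

Because the increase is the gap between the schedular rate and the 100-percent rate, a lower schedular rating yields a larger increase when IU is granted.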
To illustrate the potential amount of added benefits that could be received due to IU, we estimated the lifetime present value of the increase in disability compensation benefits for veterans with schedular disability ratings between 60 and 90 percent who began receipt of IU benefits in 2005 at different ages. To calculate these lifetime present values, we used the SSA general population mortality tables for males to estimate the lifespan of IU beneficiaries. Because benefits awarded to younger veterans would be expected to be received for a longer length of time in comparison with older veterans, younger veterans are estimated to receive more in benefits than older veterans who have the same schedular rating. Also, because the lower the veteran’s schedular rating, the greater the increase in monthly disability compensation benefits when awarded IU benefits, veterans with lower ratings were estimated to receive more in added IU benefits than those of the same age with higher schedular ratings. For example, for younger veterans, those at age 20 in 2005, the estimated lifetime present value of these benefits can range from almost $300,000 to over $460,000. Even for older veterans, the value of these benefits can be substantial. For veterans awarded IU benefits at age 75 in 2005, the lifetime present value of these benefits can range from about $89,000 to about $142,000. The estimated lifetime present values of the added benefits for veterans awarded IU benefits in 2005 at selected ages and schedular ratings are shown in figure 3. When we analyzed VA data to determine the age at which veterans begin receiving IU benefits, we found that just under half of new IU beneficiaries were awarded IU benefits at the age of 60 or older. For example, we found that 46 percent of veterans awarded IU benefits from October 2004 to October 2005 were age 60 or older, and 19 percent were age 75 or older.
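The lifetime figures behind figure 3 come from a standard expected-present-value calculation. A minimal sketch, assuming the annual benefit stream is weighted by survival probabilities from a mortality table (e.g., the SSA male general-population life tables the report cites) and discounted at an assumed rate; the report does not state the discount rate actually used.

```python
def lifetime_present_value(annual_benefit, survival_probs, discount_rate=0.03):
    # Expected present value: each year's payment is weighted by the
    # probability the veteran is still alive that year and discounted
    # back to the award year. survival_probs[t] is the probability of
    # surviving t years past the award.
    return sum(annual_benefit * p / (1 + discount_rate) ** t
               for t, p in enumerate(survival_probs))

# Toy check: $17,520 a year, certain survival for 3 years, no discounting.
print(lifetime_present_value(17520, [1.0, 1.0, 1.0], discount_rate=0.0))
# -> 52560.0
```

With real mortality probabilities and a positive discount rate, the sum falls off for older award ages, which is why the report's estimates for 75-year-olds are a fraction of those for 20-year-olds.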
See figure 4 for the age distribution of new IU beneficiaries from October 2004 to October 2005. Data for the 2 prior year periods show a similar pattern in the age distribution of new IU beneficiaries. In addition to disability compensation benefits, some IU beneficiaries are also entitled to military disability retirement benefits or normal retirement benefits based on years of military service. In general, however, an offset provision restricts most veterans from receiving the full value of both benefits, unless they have 20 or more years of service. Recent legislation allows veterans with combat-related disabilities and 20 or more years of service to receive the full value of both benefits. Also, recent legislation is phasing out the offset for veterans who have 20 or more years of military service and disability ratings of 50 percent or more. The phase-out is taking place between January 1, 2004, and December 31, 2013, but IU beneficiaries with 20 or more years of service will be eligible for full concurrent receipt with no offset beginning October 1, 2009. However, the recent legislation to eliminate the benefit offset is likely to affect relatively few IU beneficiaries, as our review of IU beneficiary data as of October 2005 shows that only about 8 percent of all IU beneficiaries have 20 years or more of service. VA’s regulations and guidelines for awarding IU benefits do not ensure that its decisions are well supported. VA regulations and guidelines lack key criteria and guidance that are needed to determine unemployability. In addition, VA guidelines do not give rating specialists the procedures to obtain the employment and earnings history and vocational assessments needed to support IU decisions. As a result, VA rating specialists and some vocational rehabilitation staff told us that unemployability benefits have sometimes been granted to veterans who have employment potential.
VA rating specialists making IU decisions are required to determine whether the claimant is capable of obtaining or retaining substantially gainful employment, which agency guidelines define as “that which is ordinarily followed by the nondisabled to earn their livelihood with earnings common to the particular occupation in the community where the veteran resides.” However, VA regulations and guidelines do not provide the criteria and guidance that are needed to determine whether a claimant has the ability to obtain or retain substantially gainful employment or is unemployable because of his or her service-connected disabilities. VA guidelines also define substantially gainful employment as any employment greater than marginal employment. Marginal employment generally exists when a veteran’s annual earned income does not exceed the poverty threshold for one person. In addition, the guidelines recognize that the terms “unemployability” and “unemployable” are not synonymous for compensation purposes because a veteran may be unemployed or unemployable for a variety of reasons. As noted in the guidelines, rating specialists are to determine whether the severity of the service-connected conditions precludes the veteran from obtaining or retaining substantially gainful employment. In doing so, the rating specialists are to identify and isolate the effects of extraneous factors such as age, nonservice-connected conditions, availability of work, or voluntary withdrawal from the labor market when determining whether a veteran is unemployable solely by reason of service-connected disabilities. However, the guidelines do not state how rating specialists are to isolate these factors from the veteran’s service-connected disabilities or how these factors should be considered in making IU decisions.
For example, the guidelines do not specify how rating specialists are to determine whether a veteran’s lack of work or marginal employment is the result of the veteran’s service-connected disabilities or extraneous factors such as local labor market conditions or the veteran’s “voluntary withdrawal” from the labor force. In particular, the guidelines do not specify the criteria rating specialists should use in determining whether a veteran, who is not working or has only marginal employment, has the ability to obtain or retain substantially gainful employment. For instance, the guidelines do not mention how factors such as education, skills, or prior work history should be used to assess a veteran’s ability to work. Recognizing the deficiencies in VA’s regulations and guidelines, the Court of Appeals for Veterans Claims urged VA to “undertake a broad-based review and revision” of unemployability regulations. In 2001, the agency proposed regulatory changes to address this and other problems with its IU decision making. The proposal included changes intended to define key terms, such as substantially gainful employment. During the public comment period, however, VA received numerous comments from veterans groups that were strongly opposed to the proposed regulations. In December 2005, VA withdrew this regulatory proposal and initiated a new effort to develop a proposal for revising IU regulations. As of March 2006, VA was still in the process of drafting this new regulatory proposal. VA also lacks adequate procedures for obtaining necessary evidence to support IU decisions. In particular, VA does not have procedures for rating specialists to obtain (1) complete and corroborated employment information from IU claimants and their employers, and (2) vocational assessments of IU claimants that could supplement medical information, even though the agency has an in-house vocational rehabilitation service. 
VA guidelines state that, when making an IU determination, rating specialists are to ensure that the “evidence is sufficient to evaluate . . . the veteran’s current . . . employment status.” Such evidence generally comes from two sources. First, the IU application form requires veterans to furnish employment and earnings history (e.g., jobs held, number of hours worked, type of work performed, and accommodations) for the 5-year period preceding the date the veteran claims to have become too disabled to work and for the entire time after that date. Second, the guidelines instruct the rating specialist to request related information from each of the claimant’s employers for the 12-month period prior to the date the veteran last worked. At the VA regional offices we visited, several rating specialists stated that the employment information submitted by claimants and employers is sometimes incomplete. VA guidelines state that it is essential that the form contain the claimant’s complete work history but do not specify what is acceptable for decision making when the work history is less than complete. According to an analysis conducted at a VA regional office, failure of the veteran to submit the requested employment information did not serve as a basis for denying an IU claim. Also, when assessing the eligibility of claimants who report recent prior work experience, rating specialists told us that they sometimes have difficulty obtaining corroborating information from employers. VA regional office officials stated that it is often difficult to obtain relevant information from employers because, among other reasons, they have moved, gone out of business, maintained poor records, or had such turnover that no one remembers the claimant.
One VA regional office official stated that he has instructed his staff not to “hold a benefit hostage to the employer information.” We reviewed 29 case files in which IU benefits were awarded at three of the VA regional offices we visited. We found that 23 case files contained employment history information submitted by the claimant but only eight of these contained evidence from employers. Three case files did not contain claimant or employer employment forms. In the remaining three case files, the veterans claimed to have not worked or to have been self-employed. When a veteran claims not to have worked or to have been marginally self-employed during the past 5 years, agency guidelines for IU decision making do not give rating specialists the procedures to obtain corroborating evidence in the form of earned income information from other federal databases. As a result, rating specialists are unable to confirm (or refute) the veteran’s claim. Specifically, rating specialists are unable to obtain earnings information from SSA and the IRS. In addition, VA does not have access to earnings information from the NDNH database, which contains quarterly information on earnings. Some rating specialists stated that, if available, they sometimes considered information in the medical exam report or hospital records showing that the claimant had been out of work as evidence of unemployability. VA regulations on IU decision making do not contain procedures for rating specialists to request vocational assessments of IU claimants that could supplement claimants’ medical information. VA guidelines require rating specialists to consider medical information when granting IU benefits. Specifically, the medical evidence must support a current evaluation of the extent of all the veteran’s disabilities and reflect the veteran’s condition in the past 12 months.
At the regional offices we visited, managers stated that their rating specialists rely heavily on medical examinations conducted by VHA clinicians to make IU determinations. Rating specialists at one of these regional offices stated that these medical reports were often the only information they have upon which to base a decision that is not self-reported. Some rating specialists we interviewed, however, expressed concern that they were awarding IU benefits based on medical reports that provided insufficient support for determining unemployability. VA regional office officials and rating specialists told us that current medical reports may have limited applicability to IU decision making because, as we have noted in a prior report, while most medical impairments may influence the extent to which an individual is capable of engaging in gainful activity, vocational and other factors are often considered to be more important determinants of work capacity. It is these other factors, along with the person’s medical condition, that are considered in a vocational assessment of work potential. Vocational assessments can supplement the results of medical examinations by taking into consideration factors such as the veteran’s education, training, prior work experience, skills, and abilities, to identify the extent to which the veteran is employable. Yet, when making IU determinations, rating specialists do not have procedures to obtain vocational assessments from VR&E counselors. Rating specialists have access to vocational assessments only when they already exist prior to the request for IU benefits. According to VA officials we spoke with, rating specialists generally make employability determinations without the benefit of a vocational assessment. At three of the VA regional offices we visited, our review of 29 case files in which IU benefits were awarded found that 25 lacked any vocational assessment.
Lacking vocational assessments for most IU claims, officials at some regional offices we visited told us that they sometimes asked VHA clinicians to assess and make a determination on a claimant’s employability. Of the 29 case files we reviewed, seven contained medical reports that gave opinions on the veterans’ employability. These opinions ranged from a comment that the claimant is not a good candidate for working with the public to comments that one veteran is “unemployable in any function” and another is simply “unemployable.” Two cases contained employability decisions that were based on examinations of the claimant’s functional capabilities. One official in a regional office indicated that medical reports containing opinions on employability often dictate the office’s IU decisions. A senior VHA management official explained that these medical reports should not be the only source used to render an opinion regarding a claimant’s unemployability because the agency’s clinicians are currently not trained to conduct medical examinations that support decisions on employability. Rating specialists at some of the VA regional offices we visited stated that, when available, the assessments in VR&E case files were very relevant to IU decision making. VR&E managers and counselors suggested that permitting rating specialists to obtain VR&E assessments of IU claimants could address the need for vocational information. VR&E officials stated that their counselors are qualified to conduct such assessments and, where appropriate, VR&E counselors could also take this opportunity to use incentives to encourage return to work, develop return-to-work plans in collaboration with the claimant, and identify and provide needed accommodations or services for those who can work.
By incorporating vocational assessments into its IU decision-making process, VA can modernize its disability programs by enabling veterans to realize their full productive potential without jeopardizing the availability of benefits for veterans who cannot work. We discussed IU decision-making criteria and evidence requirements with managers and rating specialists at the regional offices we visited. During these discussions, some rating specialists expressed concerns that they may have awarded IU benefits to some veterans who appeared to be employable. These rating specialists told us that they awarded IU benefits in these cases with the expectation that VA would identify these beneficiaries in the income matching process as having earnings above the IU threshold and discontinue their IU benefits. Another rating specialist stated that he felt compelled by the workload at his regional office to make IU determinations based on existing evidence, even when necessary information was lacking. VR&E managers and counselors at the regional offices we visited stated that VA has awarded IU benefits to veterans making good progress in their VR&E-sponsored vocational rehabilitation. Our analysis of VA’s electronic case files identified 683 veterans who received both IU benefits and a stipend from VR&E, which is generally provided only to veterans who are attending college and who are expected to seek employment at the conclusion of their vocational rehabilitation. VR&E officials and rating staff at three of the VA regional offices we visited brought to our attention veterans who had received VR&E assistance and were making good progress in their rehabilitation plans, only to drop out of the program when they were awarded IU benefits. VA has an inefficient and ineffective process to enforce the earnings limit for ongoing eligibility for IU benefits. 
VA’s main enforcement mechanism is its computerized match that identifies beneficiaries with earnings, supplemented by a manual review to assess whether those earnings are within the limit and whether beneficiaries meet other ongoing eligibility criteria. However, this process relies on old data, outdated and time-consuming procedures, insufficient guidance, and weak eligibility criteria. Moreover, the agency does not track and review its enforcement activities to better ensure their effectiveness. VA uses a multi-step annual computer match and manual review process, referred to as its Income Verification Match, to evaluate the ongoing eligibility of both its IU and pension beneficiaries. During 2004 and 2005, VA’s income match, in coordination with SSA and IRS, assessed beneficiaries’ income for 2002. VA provided SSA and IRS with data on VA’s 2004 beneficiaries that the agencies matched to their 2002 income data. SSA matched VA beneficiaries to its wage and self-employment earnings to provide VA with 2002 earned income data for IU and pension beneficiaries. To provide VA with data on unearned income for its pension beneficiaries, IRS matched the beneficiaries with its 2002 unearned income data. VA’s Hines Information Technology Center (ITC) used SSA’s match results to identify IU beneficiaries with earned income above $6,000 in 2002 for further review. Hines ITC combined the results of the computer matches for IU and other beneficiaries to produce and mail documents to employers and to the Pension Maintenance Centers (PMC) for further review. Hines ITC identified 8,563 IU beneficiaries with earnings over $6,000 in 2002 for review by VA’s three PMCs. For each identified beneficiary, Hines ITC produced and mailed a letter to the employer requesting earnings data to verify SSA-reported earnings. It also produced and mailed to the PMCs several documents for follow-up on each beneficiary, such as a letter for the veteran and a tracking sheet.
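The flagging step described above, in which Hines ITC screens matched SSA earnings against a fixed cutoff, can be sketched as follows. This is a minimal illustration, not VA's actual system: the record layouts, field names, and sample data are hypothetical, and only the $6,000 cutoff comes from the report.

```python
# Minimal sketch of the Income Verification Match flagging step.
# Record layouts and sample data are hypothetical; only the $6,000
# cutoff used in the 2002-income match comes from the report.

FLAG_CUTOFF = 6_000  # fixed cutoff applied in the match, per the report

def flag_for_review(beneficiaries, ssa_earnings):
    """Return IU beneficiaries whose matched annual earnings exceed the cutoff.

    beneficiaries: iterable of dicts with an 'id' key (hypothetical layout)
    ssa_earnings:  dict mapping beneficiary id -> matched annual earnings
    """
    flagged = []
    for b in beneficiaries:
        earnings = ssa_earnings.get(b["id"], 0)
        if earnings > FLAG_CUTOFF:
            flagged.append({**b, "matched_earnings": earnings})
    return flagged

# Illustrative data only
bens = [{"id": "A"}, {"id": "B"}, {"id": "C"}]
earned = {"A": 4_500, "B": 12_000, "C": 6_000}
print([b["id"] for b in flag_for_review(bens, earned)])  # ['B']
```

Because the cutoff is a fixed constant rather than a parameter tied to the annual IU threshold, every beneficiary earning between the cutoff and the threshold is flagged for manual review unnecessarily, which is the inefficiency the report describes.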
The three PMCs manually reviewed the information provided by Hines ITC and employers and may have also contacted the veterans, as needed, to determine whether they continued to meet ongoing IU eligibility criteria. In general, IU beneficiaries who have exceeded the annual IU earnings threshold (set in 2002 at $9,039), have worked 12 consecutive months or more, and have not been employed in a sheltered workshop or family business should have their IU benefits discontinued. PMCs close cases when they find that beneficiaries meet the eligibility criteria and forward cases that need additional information for a decision to VA regional offices for further review. The VA regional offices should obtain whatever additional information is needed to determine whether benefits should be discontinued and inform the veteran if VA decides to do so. In its computer matching process to evaluate ongoing IU benefit eligibility, VA used SSA earnings data that were about 1.5 years old, even though the data are available earlier and more recent earnings data are available from another federal database. Using old earnings data, along with other processing delays in its review, means that IU beneficiaries with earnings above the IU threshold can continue to receive benefits for up to 2.5 years before VA can determine that their IU benefits should be discontinued. Quick identification of IU beneficiaries who are no longer entitled to benefits is important because VA typically will only discontinue their benefits and will not collect any overpayments. Although SSA earnings data could be available as early as September following the end of a tax year, VA postpones the match of IU benefits and waits for unearned income data from IRS so that it can evaluate both IU eligibility and pension payments at the same time. Also, HHS’ NDNH can provide more current earnings data than SSA, but VA does not have the statutory authority to access this database.
The NDNH database includes quarterly wage data for up to 8 quarters, which can be compiled into annual data for matching purposes. Although VA currently lacks access to the NDNH database, other agencies—such as SSA, IRS, and the Department of Housing and Urban Development—have sought and gained statutory authority to access the NDNH to improve their enforcement efforts. In addition to gaining statutory authority, VA would need to meet the data security and privacy safeguarding requirements that HHS has established to ensure the security and confidentiality of NDNH data. VA’s enforcement process is also inefficient because VA has not updated its computer matching program to reflect annual changes in its IU earnings threshold. The program identified those who earned more than $6,000 rather than the annual IU threshold, which was $9,039 for 2002. As a result, the PMC staff told us that they manually reviewed many more cases than necessary. VA’s Hines ITC officials told us that they are prohibited from making any changes in the matching program until the agency has replaced its current compensation and pension payment system, which may take place in 2007. VA’s enforcement process experiences additional delays because the computer matching information is transmitted manually to VA’s enforcement staff rather than electronically. VA’s ITC mails thousands of paper documents to employers and VA’s three PMCs. It mails letters asking veterans’ employers to provide verification of veterans’ earnings to the PMCs. ITC also mails a tracking sheet and a letter for each veteran earning over $6,000 to the PMCs, where the information is manually collated and reviewed. The center officials told us that they use information from the computer match, employers, and veterans to assess whether beneficiaries meet ongoing IU eligibility criteria and they close the case for those who meet the criteria.
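The compilation step mentioned above, rolling NDNH-style quarterly wage records up into annual totals suitable for matching, can be sketched as follows. The record format and sample data are hypothetical; the report states only that NDNH holds up to 8 quarters of wage data that can be compiled into annual figures.

```python
# Sketch of compiling quarterly wage records (NDNH-style) into annual
# totals for matching. Record format and data are hypothetical.
from collections import defaultdict

def annual_totals(quarterly_records):
    """Sum quarterly wage records into per-(beneficiary, year) annual totals.

    quarterly_records: iterable of (beneficiary_id, year, quarter, wages)
    """
    totals = defaultdict(float)
    for bid, year, _quarter, wages in quarterly_records:
        totals[(bid, year)] += wages
    return dict(totals)

records = [  # illustrative data only: four quarters for one veteran
    ("V1", 2002, 1, 2000.0), ("V1", 2002, 2, 2500.0),
    ("V1", 2002, 3, 3000.0), ("V1", 2002, 4, 2500.0),
]
print(annual_totals(records))  # {('V1', 2002): 10000.0}
```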
If a center did not have sufficient information to determine eligibility, it mailed the case file to a VA regional office for further review. Although VA currently mails paper documents generated from the match to its PMCs and regional offices, software exists to transfer the confidential information electronically, and VA officials acknowledged that doing so would make the process more timely. One action VA has recently taken to enhance enforcement is to reinstate a procedure that requires IU beneficiaries to annually complete a form to provide their earnings and employment status. VA had discontinued use of the form about 6 years earlier to reduce the paperwork burden for beneficiaries and instead was annually sending a letter to IU beneficiaries to remind them of their responsibility to notify VA of their employment and earnings. However, VA officials believed that the annual reminder had not resulted in sufficient compliance and, in September 2005, reinstituted the requirement to complete the VA form. Because the agency has very recently implemented this change, we cannot assess its effectiveness. Although the agency believes that this information will improve its ability to monitor veterans’ ongoing eligibility, it still plans to continue the income verification match. VA’s written guidance for evaluating beneficiaries’ earnings also hinders enforcement by failing to clarify that PMC staff should use all the available earnings information from the match and other sources, such as employers, to assess beneficiaries’ initial and continuing eligibility. Lacking this written guidance, VA staff focus on whether beneficiaries’ earnings and employment qualified them for benefits for the match year. For example, when PMC staff receive earnings data for veterans who were granted benefits during the match year, the staff disregard the earnings, regardless of the amount, and close the case.
Staff do so because they only consider earnings subsequent to granting benefits and know that the new beneficiaries could not have worked for 12 consecutive months in the match year. The match data and the beneficiaries’ application information, however, could show that veterans may not have fully disclosed their earnings during the application process and may have exceeded the IU threshold. Staff also disregard some of the earnings information provided by employers that could have had a bearing on eligibility. Although VA’s letters to employers request earnings information for the match year and 2 subsequent years, management’s verbal guidance at one center was to disregard the earnings from the subsequent years and only consider the earnings of the match year. VA has weak criteria to determine whether veterans should continue to receive IU benefits. In evaluating IU beneficiary eligibility, PMCs allow beneficiaries to continue to receive IU benefits if their earnings at the time of the review did not exceed the IU threshold. However, some IU beneficiaries can have earnings far above the IU threshold because VA, under current law, continues to provide them benefits until they have maintained employment for 12 consecutive months. In effect, this law allows beneficiaries to retain their benefits despite unlimited earnings, so long as they do not work for 12 consecutive months. For example, a beneficiary could earn $50,000 from January to September, choose to stop working for reasons other than his or her service-connected disability, and still be allowed to retain his or her IU benefits. VA does not effectively track and review the results of its enforcement activities. VA does not track the results of cases reviewed by PMCs or those sent to regional offices. As a result, the agency does not know the results of these reviews or the reasons for continuing or discontinuing IU benefits. 
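The continuing-eligibility rule and the 12-consecutive-month loophole described above can be sketched as a simple decision function. This is an illustrative simplification, not VA's actual criteria in full: the function signature and field names are hypothetical, while the three conditions and the $9,039 threshold for 2002 come from the report.

```python
# Sketch of the continuing-eligibility rule described in the report:
# benefits are discontinued only when the beneficiary has exceeded the
# annual earnings threshold AND has worked 12 consecutive months or more
# AND was not employed in a sheltered workshop or family business.
# Signature and parameter names are hypothetical simplifications.

ANNUAL_THRESHOLD = 9_039  # IU earnings threshold for 2002, per the report

def should_discontinue(annual_earnings, consecutive_months_worked,
                       sheltered_or_family_business=False):
    return (annual_earnings > ANNUAL_THRESHOLD
            and consecutive_months_worked >= 12
            and not sheltered_or_family_business)

# The report's example: $50,000 earned from January to September (9
# consecutive months), then work stops -- benefits are retained.
print(should_discontinue(50_000, 9))   # False: 12-month test not met
print(should_discontinue(50_000, 12))  # True
```

The sketch makes the weakness concrete: because the earnings test and the 12-consecutive-month test are conjoined, arbitrarily high earnings never trigger discontinuation on their own.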
For example, the agency does not know how many of the beneficiaries identified by its computerized match had earnings below the IU threshold or had higher earnings and continued to receive benefits. Also, without sufficient information to monitor enforcement, the agency cannot ensure that beneficiary cases are being fully reviewed or that appropriate actions are taken to discontinue benefits. Private-sector and SSA disability programs provide important features that VA’s IU benefits lack. Unlike VA, private insurers have developed assessment processes that focus on return to work and use a wide variety of assessment tools, expertise, and incentives to evaluate claimants’ ability to work and encourage and enable those with work potential to return to the labor force. Likewise, SSA requires applicants to provide substantial information for assessment purposes and, in recent years, has implemented a new program to provide return-to-work services and is conducting pilots to test new methods to return applicants and beneficiaries to work. In addition, SSA has implemented critical management practices to help ensure the financial integrity of its disability programs. The eligibility assessment processes of the three U.S. private insurers we reviewed focused on returning people with disabilities to work. These processes both evaluated a person’s potential to work and assisted those with work potential to return to the labor force. Insurers provided assessment and other services shortly after disability onset and throughout the duration of the claim, as needed. Their ongoing assessment process is closely linked to their definition of disability, which shifts over time from less to more restrictive—that is, from an inability to perform one’s own occupation to an inability to perform any occupation.
Both the definitional shift and the ongoing assessment process recognize the possibility of improvement in an individual’s work capacity, and insurers promote such improvement by providing supports and services, such as workplace adaptations or training, as well as financial and other incentives that encourage claimants to return to work. Throughout the duration of the claim, private insurers use a wide variety of tools and expertise to assess the claimant’s work potential and develop and implement an individualized return-to-work plan for those with work potential. As part of the process of assessing whether a claimant can perform his or her own occupation, insurers directly contact the claimant, the treating physician, and the employer to collect medical and vocational information and initiate return-to-work efforts, as needed. For example, insurers consult medical staff and use other resources, such as medical guidelines, which describe disabilities and their treatment and duration, to evaluate whether the treating physician’s diagnosis and the expected duration of the disability are in line with the claimant’s reported symptoms and test results. Insurers’ contacts with treating physicians may also be aimed at ensuring that the claimant has an appropriate treatment plan focused on timely recovery and return to work. Insurers may also use an independent medical examination or tests of basic skills, interests, and aptitudes to clarify the medical or vocational limitations and capabilities of a claimant. In addition, they may use medical or vocational specialists to identify possible accommodations for the claimant and may also contact employers to encourage them to provide workplace accommodations for a claimant who has the capacity to work.
To determine whether a claimant can go back to his or her current job or, if not, engage in other work, insurers will identify a claimant’s remaining skills and abilities (i.e., transferable skills) by comparing the claimant’s capabilities and limitations with the demands of the claimant’s own occupation. Included in these assessment tools and methods are services to help the claimant return to work, such as job placement, job modification, and retraining. The definition of disability shifts after 2 years from being unable to perform one’s own occupation to being unable to perform any occupation. This period provides an opportunity for claimants who have the potential to work to recover medically and develop skills to return to work. During this period, insurers may provide financial and other assistance to help claimants with work potential make a successful transition. Insurers try to develop the best strategies for managing each claim, which can include, for example, helping to plan medical care or providing vocational services to help claimants acquire new skills, adapt to assistive devices, or find new positions. For those requiring vocational intervention to return to work, the insurers develop an individualized return-to-work plan, as needed. Work incentives are an important feature of the private insurers’ programs to encourage and facilitate a claimant’s return to work. These incentives require the claimant to obtain appropriate medical treatment and can result in loss of benefits if the claimant does not participate in a return-to-work program that would benefit him or her. To support these requirements, these disability systems help the individual obtain the appropriate medical care and provide financial incentives to promote participation in rehabilitation, such as reimbursement for family care costs. Insurers may provide additional financial benefits to those who participate in a return-to-work plan.
For example, one insurer told us that claimants may receive an additional benefit equal to 10 percent of their disability payment for participating in rehabilitation. To further encourage rehabilitation and return to work, insurers may allow claimants who work to supplement their disability benefit payments with earned income. Conversely, insurers may reduce or terminate benefits for claimants who could work, but do not. Claimants’ benefits may also be terminated if they refuse to accept a reasonable accommodation that would enable them to work. If the insurer initially determines that the claimant has no work potential, it monitors the claimant’s condition for changes that could increase the potential to work. After 2 years, it reassesses the claimant’s eligibility under the more restrictive definition of disability. The insurer continues to look for opportunities that may enable these claimants to return to work. For example, opportunities may occur for claimants when there are improvements in medical treatments and technology, such as new treatments for cancer or AIDS. Both VA and SSA disability programs are on our high-risk list, in part, because they do not reflect the current state of science, technology, medicine, or labor market conditions. Nevertheless, SSA’s disability programs include eligibility assessment and return-to-work efforts that VA’s disability compensation program lacks. For example, SSA requires applicants to provide substantial information for assessment purposes and in recent years has implemented a new program to provide return-to-work services and is conducting pilots to test new methods to return applicants and beneficiaries to work. Moreover, in 2003, SSA’s Commissioner announced in testimony to the Congress that a key operational goal for the agency’s disability programs is to foster return-to-work efforts at all stages of decision making.
As with VA’s definition of individual unemployability, SSA’s definition of disability for its two disability programs includes both medical and employment criteria. For the agency’s Disability Insurance (DI) and Supplemental Security Income (SSI) programs, the Social Security Act defines disability as the inability to engage in any substantial gainful activity by reason of any medically determinable physical or mental impairment(s) that is expected to result in death or has lasted or can be expected to last for a continuous period of not less than 12 months. In addition to SSA’s medical criteria, an applicant must also meet nonmedical program criteria for both of its disability programs. For DI benefits, an individual must have contributed earnings to the DI program, have sufficient annual earnings to receive one credit per year, and generally have at least 20 credits in the last 40 quarters ending with the onset of a disability. To receive SSI benefits, individuals must have limited assets and income. To collect key decision-making information, SSA requires a DI or SSI applicant to provide the agency with extensive medical and vocational information, including the illness, injuries, or conditions and how they affect the applicant’s ability to work; 15 years of prior work history; the requirements of the applicant’s longest lasting job; medications taken and medical history; education and training; and any vocational rehabilitation. If needed, SSA may also collect additional information from the applicant about his or her pain, fatigue, and ability to perform common daily and other specific activities, like meal preparation or ability to stand and sit, as well as the use of accommodations. To assess claims for eligibility, SSA generally uses both a disability examiner and a medical consultant.
If needed, the medical consultant will use the collected information to determine what an applicant can still do, despite physical or mental limitations, referred to as the applicant’s residual functional capacity. The residual functional capacity will be used by the decision makers, along with other vocational information in the applicant’s file, to determine if the applicant can perform his or her prior job. If not, the decision makers will use this information to determine if the applicant can perform another job in the national economy. Although these vocational decisions can be complex, SSA may obtain input from vocational specialists for its decision making but does not require it. SSA, however, has acknowledged the need to strengthen its decision making and has proposed, along with other changes, to establish a national network of medical, psychological, and vocational experts to assist SSA decision makers throughout the country. The SSA Commissioner’s recent commitment to fostering return-to-work efforts is illustrated by some of the agency’s ongoing programs and pilot tests. In September 2004, SSA completed implementation of its Ticket to Work and Self-Sufficiency Program. The program is intended to provide beneficiaries with greater choice in vocational rehabilitation and employment services so that they can work and become self-sufficient. While we reported in March 2005 that the program was having limited success, the agency has proposed steps to strengthen the program, such as expanding eligibility and improving incentives to encourage participation by service providers and beneficiaries. Furthermore, SSA has developed a Work Opportunity Initiative, with several demonstration projects, to provide both applicants and beneficiaries with medical coverage or cash incentives to support their ability to work.
While supporting people with disabilities is an essential function of SSA’s disability programs, the agency is also responsible for ensuring the programs’ financial integrity. In 1997, we designated SSI a high-risk program after several years of reporting on specific instances of abuse and mismanagement, increasing overpayments, and poor recovery of outstanding SSI overpayments. SSA’s actions since then included developing a major SSI legislative proposal with numerous overpayment deterrence and recovery provisions. The ensuing enacted legislation directly addressed a number of our prior recommendations and warranted removal of the SSI program from our high-risk list in 2003. We have, however, continued to monitor the program to ensure that improvements have been sustained. To help ensure that applicants’ and beneficiaries’ earnings do not exceed allowed levels, SSA has incorporated several procedures into its eligibility assessments. In assessing eligibility, SSA must determine whether an applicant is working and earning an amount that exceeds its established thresholds. As part of this process, DI and SSI applicants must provide SSA with information on their past work and any current work. If applicants indicate that they are currently working or receiving earnings, or SSA obtains other information that suggests that they may have earnings, SSA requires additional information on their work and earnings. SSA field staff generally must then verify the applicants’ reported earnings using another reliable source of information. SSA also uses its online query system to access the NDNH database, which has recent earnings, new hire, and unemployment information, to verify the earnings for DI and SSI applicants it has designated as high risk, such as those whose stated income does not appear to cover their expenses. SSA has found that online access to NDNH data to verify earnings for the SSI program has a high return on investment. 
For example, using a pilot evaluation, SSA estimated that if it verified earnings online prior to benefit payment, it could annually reduce overpayments by $30.8 million and have a 3.6-to-1 return-on-investment ratio. After benefits are granted, SSA performs frequent computer matches that are intended to assess earnings for all its beneficiaries. These matches compare earnings information from its beneficiary databases with two federal earnings databases to detect and prevent overpayments. For its computer matches, SSA uses both its own master earnings file with earnings information from employers and the self-employed and the NDNH database. SSA uses both databases because the SSA database has more complete earnings information than the NDNH database, whereas the NDNH database has more current earnings information in its quarterly wage database, as well as other important employment data, according to SSA officials. SSA performs periodic matches using its master earnings file to detect and prevent beneficiary overpayments for all its SSI and DI beneficiaries. In addition, SSA performs quarterly matches using NDNH quarterly earnings to detect and prevent overpayments to all SSI beneficiaries. SSA has also found that using NDNH data for the matches can be very cost-effective. In evaluating fiscal year 2002 computer matches using NDNH data, SSA estimated that it could annually realize $199 million in benefits from collecting and preventing overpayments and expend $23 million for matching, following up on matches, and overpayment collection, yielding an estimated 8.7-to-1 benefit-to-cost ratio. SSA also plans to expand its use of the NDNH database to perform matches to evaluate all DI beneficiary earnings. SSA has automated many features of its matching process and follow-up verification and collection activities to help improve the efficiency and effectiveness of its disability programs. 
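The kind of periodic earnings match described above can be sketched roughly as follows. This is an illustrative sketch only: the beneficiary identifiers, threshold amount, and records are hypothetical, not SSA's actual systems or data.

```python
# Illustrative sketch (hypothetical data, not SSA's systems): compare
# beneficiary-reported quarterly earnings against employer-reported
# quarterly wage records (NDNH-style) and flag large discrepancies
# for field office follow-up.

EARNINGS_THRESHOLD = 500  # hypothetical tolerance before a case is flagged

reported = {"A123": 0, "B456": 1200, "C789": 0}            # beneficiary-reported earnings
quarterly_wages = {"A123": 2400, "B456": 1300, "C789": 0}  # wage-record earnings

def flag_for_followup(reported, wages, threshold):
    """Return beneficiary IDs whose wage records exceed reported earnings by more than threshold."""
    return sorted(
        bid for bid in reported
        if wages.get(bid, 0) - reported[bid] > threshold
    )

print(flag_for_followup(reported, quarterly_wages, EARNINGS_THRESHOLD))  # → ['A123']
```

In SSA's actual process, flagged matches are forwarded electronically to the responsible field office, where staff verify the earnings before any overpayment is established.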
SSA’s computerized matching process can not only detect potential unreported or underreported earnings, but can also electronically forward matches to the field office responsible for follow-up and provide workload statistics to each level of management to help monitor the process. Field office staff can also use SSA computer systems to view specific information on the match (including the amount of earnings detected), document their follow-up, and initiate collection activities, as needed. For example, the system will send a letter to a current beneficiary who has received a benefit overpayment with information about this debt, such as the amount owed, and options for repayment. Through automation, the agency has increased its ability to ensure that matches are followed up and to initiate overpayment collection efforts more efficiently. SSA’s systems also have built-in security features to help ensure that SSA meets legal requirements to manage the privacy of the earnings and employment data. SSA uses various collection methods and other tools to manage the debt owed by current and past beneficiaries who received disability benefit overpayments. SSA will withhold monthly disability benefits to collect overpayments from beneficiaries who are still on its rolls. In fiscal year 2005, SSA collected $2 billion in overpayments using this method. When the person is no longer on SSA’s benefit rolls, the agency uses its own billing and follow-up system to collect overpayments. That system enables SSA to send a series of progressively stronger notices requesting repayment and to make telephone calls to negotiate repayment. The agency collects several hundred million dollars a year using this approach. In addition, SSA uses other more aggressive debt collection tools, such as tax refund offsets and administrative wage garnishment, to collect debt from prior benefit recipients who are no longer on its benefit rolls. 
When unable to collect debts from current or prior beneficiaries, SSA will write off the debts. In 2005, SSA reported collecting $2.4 billion in debt and writing off $842 million, leaving outstanding debt of $13.1 billion at year end. Table 2 provides a list of the tools used by the agency to manage overpayment debt. To help monitor its debt collection efforts and their effectiveness, SSA also tracks and reports key debt management activities and performance indicators. For example, SSA’s annual performance and accountability reports provide data on the quarterly cumulative totals for the debt outstanding, collected, and written off, and the varying age of delinquent debt. SSA also provides 5 years of trend data on the results and effectiveness of its activities, such as the percentage of outstanding debt that is delinquent or not expected to be collected and the average cost to collect a dollar of debt, which was $0.09 for fiscal year 2005. Furthermore, to help monitor its achievement of its strategic goals to improve debt management, SSA measures and sets a goal for the percentage of debt in collection for its Old-Age, Survivors, and Disability Insurance and SSI programs. The indicators compare debt that is scheduled for collection through benefit withholding or installment payment with total outstanding debt. For fiscal year 2005, SSA reported that it met its collection goals, with 53 percent of SSI debt and 42 percent of Old-Age, Survivors, and Disability Insurance debt in collection arrangements. VA’s management of IU benefits lacks the strong controls needed for ensuring the integrity of the process for determining the initial and ongoing eligibility for these benefits. In particular, VA lacks the criteria, guidance, and procedures to ensure that its IU decisions are well supported. 
For example, the guidelines do not mention how factors such as education, skills, or prior work history should be used to assess a veteran’s ability to obtain or retain substantially gainful employment in cases when the veteran is not working or is only marginally employed. As a result, the agency cannot assure that it is providing IU benefits only to those who are unemployable due to their service-related disabilities. In addition, due to limitations in the procedures to obtain evidence, VA rating specialists may not have sufficient information for determining whether claimants are unemployable. Without the procedures needed to collect complete and corroborated employment and earnings histories, rating specialists lack access to important indicators of future employability. Moreover, without having the procedures needed to obtain vocational assessments from VA’s own vocational counselors, rating specialists lack important information that is needed to determine whether a claimant may be able to obtain or retain substantially gainful employment. Further, VA’s income verification process lacks access to timely data, uses an outdated earnings threshold, and relies on a manual process for follow-up on earnings matches, which results in the agency’s inability to effectively identify overpayments. In addition, VA’s methods for determining ongoing eligibility may allow veterans who do not meet the ongoing eligibility criteria to continue to receive IU benefits. Moreover, VA’s limited ability to detect and stop IU payments to beneficiaries no longer eligible to receive them not only increases the cost of IU benefits but also creates an opportunity for program misuse. Finally, because VA does not track the results of its enforcement efforts, the agency cannot determine whether its efforts are cost-effective and cannot hold itself accountable to veterans or other taxpayers. 
Finally, the continuing deployment of our military forces to armed conflict has focused national attention on ensuring that those who incur disabilities while serving their country are provided the services needed to help them reach their full potential. Yet, VA is among the federal disability programs we have identified as high-risk, in part, because it is poorly positioned to provide meaningful and timely support to help veterans with disabilities return to work. VA’s management of IU benefits exemplifies these problems. Approaches from other disability programs demonstrate the importance of providing return-to-work services and using vocational expertise to assess the claimant’s condition and provide the appropriate services. Incorporating return-to-work practices in IU decision making could help VA modernize its disability program to enable veterans to realize their full productive potential without jeopardizing the availability of benefits for veterans who cannot work. We recommend that the Secretary of Veterans Affairs take the following steps to improve management of IU benefits: 1. To help ensure that IU decisions are well-supported and IU benefits are provided only to veterans whose service-connected disabilities prevent them from obtaining or retaining substantially gainful employment, VA should clarify and strengthen its eligibility criteria, guidance, and procedures for determining unemployability. For example, VA could: clarify in its regulations and guidelines how vocational factors, such as education, skills, or prior work history, should be used to assess a claimant’s eligibility; establish procedures for rating specialists to request VR&E to conduct vocational assessments of IU claimants as appropriate; and seek legislative authority to use earnings data from the National Directory of New Hires. 2. 
To improve the efficiency and effectiveness of VA’s enforcement efforts to monitor ongoing eligibility, VA should update procedures and strengthen criteria for the enforcement of the IU earnings limit. For example, VA could: update and automate its enforcement process, including using more current earnings data and threshold amounts in its income verification match; clarify guidance on the review of IU beneficiary earnings following the income verification match; and annually track and report on the results of the matching process and related enforcement activities. 3. To help modernize its IU decision-making process, VA should develop a strategy to ensure that IU claimants with work potential receive encouragement and assistance to return to work, while protecting benefits for those unable to work. For example, VA could encourage claimants to return to work by having vocational counselors from VR&E develop return-to-work plans and provide assistance to claimants with work potential. We provided a draft of this report to VA, SSA, and HHS for comment. VA agreed with our conclusions and concurred with our recommendations, and stated that it has implemented and plans to implement program changes in areas that we identified as needing attention. The actions described by VA should strengthen its management of IU benefits; however, we believe that further steps are needed to fundamentally transform IU benefits into a meaningful and timely way of supporting unemployed veterans with service-connected disabilities. For example, VA seeks to improve decision making on initial and ongoing eligibility by increasing its collection of employment and earnings data. While these are positive developments, our recommendations envision a more comprehensive effort to restore the integrity of IU decision making through a series of reforms that would seek to strengthen IU criteria, guidance, and procedures for determining initial eligibility and enforcing the earnings limit. 
VA also proposes to encourage IU claimants to consider employment by including a motivational letter with the notice informing them that they have been approved for IU benefits. While we recognize VA’s intent is positive, providing such letters after veterans have been determined to be unemployable does not provide them with the timely support needed to return to work. Our recommendation envisions that VA implement a number of fundamental reforms that transform IU benefits from simply providing compensation for unemployed veterans with service-connected disabilities to incorporating a broad range of vocational rehabilitation services and assistance that encourage and support such veterans to realize their full productive capacity, while protecting benefits for veterans unable to work. VA will need to expand upon the initiatives outlined in its comments to take full advantage of IU benefit decision making, not only as a means to restore the lost incomes of veterans with service-connected disabilities but, when appropriate, to restore their ability to pursue a livelihood and take their place as fully productive members of society. VA’s comments appear in app. V. In addition, VA, SSA, and HHS provided technical comments, which are reflected in the report as appropriate. We will send copies of this report to the Secretary of Veterans Affairs, Secretary of Health and Human Services, Commissioner of the Social Security Administration, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix VI. 
We estimated the number of Individual Unemployability (IU) beneficiaries for 1996 to 2005 from monthly internal Department of Veterans Affairs (VA) reports on activity under the Disability Compensation program. For each year, we identified the number of IU beneficiaries from the “end of month” total in the September report for that year. Figure 5 shows the growth in number of IU beneficiaries from 1996 to 2005. At the time of our study, VA did not report or maintain separate data on IU expenditures. We estimated VA’s annual added expenditures due to IU benefits for 1996 to 2005 from monthly internal VA reports showing expenditures on the Disability Compensation program. Using data in the September report for each year, we computed average monthly payments due to IU benefits for 1996 to 2005, annualized this amount, and factored in the number of beneficiaries to estimate total expenditures on IU benefits for each year. Figure 6 shows the growth in IU expenditures from 1996 to 2005. To illustrate the value of IU benefits, we calculated the present value of the added benefits due to IU that would be available over a lifetime to veterans who begin to receive IU benefits at different ages and with schedular disability ratings of 60-, 70-, 80-, and 90-percent. We chose the ages of 20, 25, 35, 45, 55, 65, and 75 to illustrate the added value of IU benefits over a wide range of ages at which veterans could begin to receive such benefits. For each age and disability rating combination, we calculated the present value of the added increment due to IU that would be received over a lifetime. Our present value analysis uses annuity factors that are based on two key assumptions: the length of time benefits will be received, and the rate at which future payments will be discounted (on the basis that a dollar today is worth more than a dollar received a year from today). 
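The present value calculation described above can be sketched in a few lines. This is an illustrative sketch, not GAO's actual model: it uses the 3 percent net discount rate the report derives (a 6 percent nominal rate offset by a cost-of-living adjustment equal to 3 percent inflation), while the annual benefit amount and remaining life spans are hypothetical placeholders rather than figures from the analysis.

```python
# Illustrative sketch (hypothetical amounts, not GAO's actual model):
# lifetime present value of a level added annual benefit, discounted
# at a 3 percent net rate.

def present_value(annual_benefit: float, years: int, net_rate: float = 0.03) -> float:
    """Present value of a level annual benefit paid at the end of each year."""
    return sum(annual_benefit / (1 + net_rate) ** t for t in range(1, years + 1))

# A veteran who begins receiving the added benefit at a young age collects
# it over many more years, so its present value is far larger.
pv_young = present_value(12_000, 58)  # hypothetical: benefits begin at age 20
pv_older = present_value(12_000, 11)  # hypothetical: benefits begin at age 75
print(pv_young > pv_older)  # → True
```

GAO's actual analysis additionally weights these annuity factors by mortality tables, so the expected payout horizon varies with the age at which benefits begin.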
For the first assumption about life span, we used Social Security Administration general population mortality tables for males. For the second assumption about the discount rate, we assumed that the rate of interest absent inflation (the real interest rate) is 3 percent, and that inflation is constant at 3 percent annually, resulting in an assumed nominal interest rate (which is the sum of the real interest rate and inflation) of 6 percent. Because the yearly cost-of-living adjustment of VA compensation rates is linked to the consumer price index, we assumed that this adjustment is equal to the rate of inflation, resulting in a net discount rate for our calculations of 3 percent a year. The present value of the additional amount of disability compensation provided to veterans granted IU benefits for selected ages and schedular ratings is provided in table 3. (Illustration of combined ratings: the veteran has 50 percent earning capacity remaining; as a result of the second disability, the veteran has lost an additional 20 percent of earning capacity; as a result of both disabilities, the veteran has lost 70 percent of earning capacity.) Carol Dawn Petersen, Assistant Director; Joseph J. Natalicchio, Analyst-in-Charge; and Julie M. DeVault, Senior Analyst, made significant contributions to all aspects of this report. Crystal Bernard, Margie K. Shields, and Jonathan Elkin also made significant contributions. Joan Vogel, Walter Vance, Vanessa Taylor, Roger Thomas, and Joseph Applebaum provided technical assistance.

As part of its Disability Compensation program, the Department of Veterans Affairs (VA) provides Individual Unemployability (IU) benefits to veterans of any age who are unemployable because of service-connected disabilities. Over the last decade, the number of IU beneficiaries and benefit costs have more than tripled. In 2005, about 220,000 veterans received an estimated $3.1 billion in IU benefits. In response to a congressional request, GAO assessed VA's management of IU benefits. 
This report (1) examines the added value of IU benefits for veterans of selected ages and disability ratings, (2) assesses the criteria, guidance, and procedures used for initial decision making, (3) assesses VA's ongoing eligibility enforcement procedures, and (4) compares VA's decision-making and enforcement procedures with those used by other disability programs. Under VA's disability compensation program, VA can award IU benefits (that is, total disability compensation) to veterans of any age who cannot work because of service-connected disabilities even though VA did not rate their impairments at the total disability level. The added value of IU benefits over a veteran's lifetime depends upon the veteran's level of impairment at the time he or she begins receiving IU benefits and the length of time these benefits are received. To illustrate the potential amount of IU benefits that could be received, GAO estimated the lifetime present value of the added benefits in disability compensation for veterans with different impairment levels who began receipt of IU benefits in 2005 at different ages. GAO found that the lifetime present value of these benefits can range from about $300,000 to over $460,000 for veterans age 20 in 2005, and about $89,000 to about $142,000 for veterans age 75 in 2005. GAO also found that just under half (45.6 percent) of new IU beneficiaries were awarded IU benefits at the age of 60 or older, and 19.2 percent were age 75 or older. VA's criteria, guidance, and procedures for awarding IU benefits do not ensure that its IU decisions are well supported. VA regulations and guidelines lack key criteria and guidance that are needed to determine unemployability. VA guidelines also do not give rating specialists the procedures to obtain the employment history and vocational assessments needed to support IU decisions. As a result, some VA staff told us that IU benefits have been granted to some veterans with employment potential. 
In addition, VA's process for ensuring the ongoing eligibility of IU beneficiaries is inefficient and ineffective. This enforcement process relies on old data; outdated, time-consuming manual procedures; insufficient guidance; and weak eligibility criteria. Moreover, the agency does not track and review its enforcement activities to better ensure their effectiveness. VA is among the federal disability programs GAO has identified as high risk and in need of modernization, in part, because it is poorly positioned to provide meaningful and timely support to help veterans with disabilities return to work. Specifically, VA's compensation program does not reflect the current state of science, technology, medicine, and the labor market. VA's management of IU benefits exemplifies these problems because its practices lag behind those of other disability programs. Approaches from other disability programs demonstrate the importance of providing return-to-work services and using vocational expertise to assess the claimant's condition and provide the appropriate services. Incorporating return-to-work practices in IU decision making could help VA modernize its disability program to enable veterans to realize their full potential without jeopardizing the availability of benefits for veterans who cannot work.
Developing countries expressed reservations about undertaking further trade liberalization at the 1999 Seattle WTO ministerial conference, as they were still experiencing economic difficulties despite previous market reforms. In response to developing country concerns, the 2001 WTO Doha Ministerial Declaration said that technical assistance should be designed to assist developing, least-developed, and low-income countries to meet their WTO obligations and draw on the benefits of an open, rules-based multilateral trading system. To implement the declaration, in December 2001, the WTO created the Doha Development Agenda Global Trust Fund to help developing countries build capacity and establish a reliable basis for funding WTO-related technical assistance (to which the United States has contributed $3 million to date). In addition, in November 2002, the WTO and the Organization for Economic Cooperation and Development (OECD) created a database to provide comprehensive information on bilateral donor and multilateral/regional agency support for trade capacity building. The Congress included trade capacity building in the African Growth and Opportunity Act of 2000 to help eligible countries meet the act’s requirements. The act provides that sustained economic growth in sub-Saharan Africa depends in large measure on the development of a receptive environment for trade and investment. The act instructs the U.S. Customs Service to provide technical assistance to beneficiary countries in developing and implementing visa systems for textile transshipment and for antitransshipment enforcement. In addition, the Congress, in the Bipartisan Trade Promotion Authority Act of 2002, declared that among the principal negotiating objectives of the United States are to strengthen the capacity of the U.S. trading partners to promote respect for core labor standards and to protect the environment. 
That act calls for the President to seek to establish consultative mechanisms with parties to trade agreements to promote respect for core labor standards and compliance with International Labor Organization conventions on child labor, and to develop and implement standards for the protection of the environment and human health, based on sound science. It provides for the President to direct the Secretary of Labor to provide technical assistance on labor laws, if needed, to countries wishing to establish trade agreements. In providing funding for trade capacity building in the foreign operations appropriations for fiscal years 2003 and 2004, the House appropriators called trade capacity building a critical element of development assistance because it can “be leveraged to generate economic growth, reduce poverty, promote rule of law….” The Congress earmarked funds appropriated in fiscal years 2003 and 2004 for trade capacity building. Specifically, the Congress provided that not less than $452 million in fiscal year 2003 should be made available for trade capacity building. Out of this amount, $159 million and $2.5 million were earmarked for USAID and USTDA, respectively. Similarly, in fiscal year 2004, the Congress provided that not less than $503 million should be made available for trade capacity building, with $190 million earmarked specifically for USAID. The appropriations for each of those fiscal years also provided for funding from accounts managed by other agencies, including the Departments of State and Treasury, although amounts were not specified for the individual accounts. U.S. trade capacity building is primarily a collection of existing activities placed under the umbrella of trade capacity building by a U.S. government survey. Initiated in 2001, this survey was to capture, qualitatively and quantitatively, U.S. agencies’ existing activities promoting trade-related capacity building in transitioning economies and developing countries. 
The survey defined trade capacity building and asked agencies to place their assistance into a range of categories and estimate funding obligated for each category. U.S. trade capacity building is not a discrete area with its own budget. However, 18 agencies have self-reported that they obligated almost $2.9 billion for trade capacity building activities in over 100 countries from fiscal years 2001 through 2004. Overall, the assistance was distributed worldwide, although the focus differed somewhat from region to region. USAID reported providing 71 percent of the trade capacity building assistance. The U.S. government survey administered by USAID defined trade capacity building as activities meant to help countries become aware of and accede to the WTO; implement WTO agreements; and build the physical, human, and institutional capacity to benefit more broadly from a rules-based trading system. The survey asked agencies to place their assistance into several categories, including WTO awareness, WTO agreements, trade facilitation, human resources and labor standards, physical and economic infrastructure, agriculture development, environmental sector trade and standards, financial sector development, competition policy and foreign investment incentives, and services trade development (table 1 provides further information about these categories). Agencies estimated their obligated funding for each category from 2001 through 2004. The largest obligations were for trade facilitation at 27 percent, followed by human resources and labor standards at 16 percent, agriculture development at 12 percent, financial sector development at 11 percent, and physical infrastructure development at 8 percent. The governance, transparency, and interagency coordination category and the WTO-related category each received an estimated 6 percent of this assistance (see fig. 1). According to the database, a significant portion of U.S. 
funding for trade capacity building assistance, or 27 percent, supported trade facilitation activities such as business services and training, export promotion, customs operation and administration, and E-commerce development and information technologies. For instance, to facilitate trade in El Salvador and Ghana, USAID financed matching grants to artisans and small- to medium-sized firms for business services training, product design, and packaging. U.S. assistance helped several artisans and firms develop ways to increase their capacity to market and export their products by improving product design and packaging and arranging trade fair visits to the United States and Europe (see figs. 2 and 3). In another example of trade facilitation, U.S. Customs and Border Protection officials trained Ghana’s Customs, Excise, and Preventive Service officials in procedures to comply with the African Growth and Opportunity Act textile visa enforcement system to prevent illegal transshipment and use of counterfeit documents relating to the importation of apparel products into the United States (see fig. 4). The U.S. Department of Labor funds a number of programs to strengthen labor systems and improve occupational safety and health. For instance, a project in Central America seeks to improve labor law compliance, while another project in Central America aims to reduce the incidence of workplace injuries. In Ghana, USAID provided technical assistance to a government committee drafting new labor legislation and focused on increasing worker collective bargaining by encouraging workers to work with members of the nonprofit and philanthropic sectors. For agriculture development, assistance is used to extend the benefits of trade to rural sectors and support trade-related aspects of agribusiness. In Ghana and El Salvador, U.S. assistance is supporting Salvadoran and Ghanaian food producers’ efforts to meet sanitary and phytosanitary standards through training. In El Salvador, U.S. 
assistance helped small- to medium-sized farmers export fruits and vegetables (see fig. 5). U.S. Department of Agriculture officials also helped African nations meet export requirements under the African Growth and Opportunity Act by sponsoring training on quality control, risk analysis, and food safety. Under the category of financial sector development, U.S. assistance supports reforms in banking and securities markets and implementation of laws and regulations that promote trade-related investment to provide an enabling environment for international trade. For instance, several U.S. Treasury officials have worked to reform Ghana’s banking and tax systems. Specifically, these U.S. officials have helped the government of Ghana to restructure the funding relationship between the Ministry of Finance and the Central Bank, improve tax collection procedures, and strengthen the financial sector. U.S. assistance for physical infrastructure development helps establish trade-related telecommunications, transport, ports, airports, power, water, and industrial zones. For instance, according to USAID, U.S. assistance to improve the telecommunications sector helps Egypt’s ability to increase trade and investment (see fig. 6). U.S. telecommunications infrastructure projects have led to the installation of hundreds of thousands of telephone lines, serving more than 4 million Egyptians. Joint U.S.-Egyptian investments in the sector have supported the institutional strengthening of Egypt Telecom and the improvement and expansion of telecommunications networks throughout Egypt. Under a current telecommunications project, a state-of-the-art network operations center is being constructed, and several initiatives to strengthen the telecommunications infrastructure system are being carried out. U.S. 
trade capacity building supports an array of activities to help developing countries participate fully in the WTO and the global trading system generally and to implement their current and future trade commitments. For example, U.S. assistance helped create a WTO unit within the Egyptian Ministry of Foreign Trade to enable the ministry to participate in international trade negotiations and implement trade agreements. Moreover, this assistance provided training to unit officials on trade policy formulation and equipment to allow these officials to develop statistics and databases related to trade. This assistance supports institutional reform to improve governance and make policies more transparent. It also helps different ministries function more effectively in the trade policy arena. For example, in Ghana, U.S. assistance has supported workshops for the government, the private sector, and civil society to discuss and develop Ghana’s trade policy. The United States supports trade capacity building assistance globally, covering six regions including Asia, Central and Eastern Europe, the former Soviet Republics, Latin America and the Caribbean, the Middle East and North Africa, and sub-Saharan Africa. The Middle East and North Africa received the most funding, or 24 percent (see fig. 7). Funding per category of trade capacity building varied by region (see fig. 8). Overall, the trade facilitation category dominated with about a third of the funding in each region except for sub-Saharan Africa (about 27 percent) and Asia (about 17 percent). In Asia, the human resources and labor standards category received the most trade capacity building funding. In the former Soviet Republics, assistance for financial sector development received 20 percent of the funding. 
USAID provides most of the funding for trade capacity building assistance, with $423 million (71 percent), $477 million (75 percent), $554 million (73 percent), and $611 million (68 percent) in each of fiscal years 2001, 2002, 2003, and 2004, respectively (see fig. 9). Other key funding agencies, in decreasing order of funding during the 4-year period, were the U.S. Departments of Labor and State at approximately 15 percent and 4 percent, respectively, and the Overseas Private Investment Corporation and USTDA, both with about 2 percent. The other main providers of trade capacity building over the past 4 years include the U.S. Departments of Agriculture, Energy, and the Treasury (see table 2). Agencies have traditionally implemented trade and development assistance based on broad criteria such as national security and foreign policy considerations. Some agencies are beginning to incorporate trade capacity building into their approach to trade and development assistance. For instance, the Departments of State and Agriculture, USAID, and USTDA are taking into account trade capacity building in their planning. USAID is training its staff on trade capacity building concepts, designing funding instruments for trade capacity building, and starting to identify trade capacity building activities for budgeting purposes. Several agencies are focusing assistance on countries participating in trade preference programs and trade agreements with the United States. Agencies are also recasting some of their assistance to focus on trade capacity building through coordination via the trade capacity building interagency group formed in June 2002 to facilitate countries’ participation in free trade agreement negotiations with the United States. U.S. agencies are providing assistance to recipients based on broad criteria. 
Agency officials cited national security and foreign policy considerations (which are often driven by the Department of State), regional factors, and the countries’ expressed needs as important factors in determining how to match assistance to recipients. Following are examples of how some agencies have applied broad criteria in choosing recipients and types of trade and development activities:

National security: Agency officials cited the prevention of terrorism as driving assistance to certain areas. For example, the Department of State asked the U.S. Department of Agriculture (USDA) to provide assistance for rural development in Afghanistan. In addition, USTDA officials stated that national security has gained prominence in their work since September 11, 2001, particularly in the area of air and sea transportation.

Foreign policy: Agency officials said foreign policy was an important factor in directing assistance to certain countries. For example, USDA is helping Colombia develop alternative crops to reduce illicit drug production. In addition, USTDA considers foreign policy when it responds to requests from U.S. ambassadors and other Department of State officials.

Regional considerations: Agency officials sometimes tailor their assistance to particular regions. For example, USTDA officials said that they try to create geographic balance in their portfolio and work with regional clusters when it makes sense to share information among nearby countries, such as working with India and Pakistan on a telecommunications conference. In addition, USAID officials stated that they have worked on regional economic growth in Central America, for example, by taking stock of each government’s capabilities through diagnostic tools.

Country needs: Agency officials said that countries’ expressed needs are an important factor (in conjunction with other factors) in selecting trade capacity building activities and recipients.
USDA officials stated that they have used responses from a WTO questionnaire to develop a benchmark for developing country needs regarding plant, animal, and human health requirements. USTDA officials said that they have specialized in translating country needs into projects by conducting feasibility studies and arranging for the appropriate technical assistance. Department of Labor officials considered country needs by working directly with labor ministries. For example, Labor officials said that they respond to requests from Central American countries for help in identifying inspection systems, expediting dispute resolution outside the courts, and informing the public about Central American countries’ labor laws. Finally, according to USAID officials, their agency’s strength lies in having resident country missions, which allow staff to gain insight into countries’ motivations and needs regarding trade. Generally, USAID field missions have the lead in devising program-planning requirements for USAID assistance. Some agencies, USAID in particular, are beginning to focus on trade capacity building in managing their assistance. For instance, the Department of State, USAID, USDA, and USTDA are incorporating trade capacity building into their planning. USAID is training its staff on trade capacity building concepts, designing trade capacity building-specific funding instruments, and beginning to identify trade capacity building activities for budgeting purposes. Several agencies have also provided assistance to support trade agreements and trade preference programs. USAID and USDA have incorporated trade capacity building in their fiscal year 2005 congressional budget justifications. For example, USAID included trade capacity building in support of WTO and bilateral U.S. government trade objectives as a key initiative for fiscal year 2005.
In addition, in their joint strategic plan for fiscal years 2004 through 2009, the Department of State and USAID stated that they “will strengthen the capacity of developing and transitional economies to participate in, and benefit from, trade by enhancing their ability to respond positively to global opportunities….” USAID also called trade capacity building a key result of its economic growth strategic goal in its fiscal year 2003 annual performance and accountability report. USDA included a strategic objective to “support international economic development and trade capacity building” under its strategic goal of enhancing economic opportunities for agricultural producers. Furthermore, USTDA included a performance goal in its 2004 performance plan to provide capacity building activities to support USTR in trade negotiations. In addition, both USAID and USDA have issued formal strategies for providing trade capacity building. USAID’s 2003 strategy, Building Trade Capacity in the Developing World, emphasizes that while ongoing activities address a variety of trade capacity building needs, USAID will focus new activities on helping countries participate in and implement trade agreements and take advantage of trade opportunities. Ways to increase trade opportunities include strengthening economic policies; removing trade barriers; and building well-functioning economic, political, and legal institutions. USDA’s strategy focuses its trade capacity building initiatives on promoting science and rules-based regulatory frameworks for agricultural trade and on supporting improved understanding of agricultural biotechnology and expanded trade in safe food products developed by biotechnology. USAID is training its headquarters and field staff on trade issues, including the WTO framework and principles, the current multilateral negotiating agenda, and trade capacity building best practices.
USAID has also conducted seminars for its economic officers in the field missions on its approach to trade capacity building. USAID is also using specialized contract mechanisms to fund trade capacity building assistance quickly. For instance, its “trade capacity building support mechanisms” provide quick funding (new requests for technical assistance can be addressed “in as little as” 3 weeks) to enable USAID missions to help countries assess their trade constraints and prioritize their trade-related technical assistance needs. The project also provides short-term technical assistance to assist missions in designing, implementing, monitoring, and evaluating trade-related technical assistance, such as technical training for trade officials and trade workshops for public and private sector leaders. USAID reports that these assessments have focused on integrating trade into poverty reduction strategies and negotiating and implementing free trade agreements. USAID officials are beginning to attribute, or identify, funding for trade capacity building activities for budgeting purposes. For example, in budgeting for each of its strategic objectives, USAID is identifying amounts to be attributed to activities that are considered trade capacity building. The purpose is to ensure that funding reflects the priorities of the agency and the Congress. However, one USAID official said that this was particularly difficult for trade capacity building because it was relevant to multiple strategic objectives, and funding was programmed in more than one office. Several agencies considered supporting countries participating in trade preference programs and trade agreements with the United States to be an increasingly important factor driving their trade and development assistance. For instance, a USDA official stated that there has been a change in thinking regarding the role of the Foreign Agricultural Service in developing countries.
Traditionally, the mission of the Foreign Agricultural Service has been to promote U.S. agricultural exports. Officials said that the African Growth and Opportunity Act of 2000 has made the Service more aware of the limitations that developing countries have in exporting their agricultural products and that helping them to do so will “win friends” in multilateral trade negotiations. For instance, the Foreign Agricultural Service worked with the Animal and Plant Health Inspection Service to help the act’s recipients set up animal and plant inspection systems for exporting their products. The Department of Labor has provided trade capacity building assistance to improve countries’ enforcement of their labor laws, in response to requests from USTR consistent with authority in the Bipartisan Trade Promotion Authority Act of 2002. For example, it has allocated funds to strengthen the capacity of labor ministries in Central American countries to enforce their national labor laws. A USAID official said that USAID’s work to help Central American countries has become more market-oriented, and improved social and economic conditions have laid the foundation for negotiating and implementing CAFTA. Another new trade capacity building initiative was the formation, shortly after the November 2001 Doha ministerial, of the trade capacity building interagency group dedicated to coordinating trade capacity building in support of free trade agreements, which USTR co-chairs with USAID. The Assistant U.S. Trade Representative for Trade Capacity Building said that U.S. success at the negotiating table depends upon the meshing of trade and aid. In fact, the trade capacity building interagency group has spun off special working groups to facilitate specific trade negotiations such as CAFTA, bilateral agreements with Morocco and the Dominican Republic, and the free trade agreement negotiations with the Andean region. 
The CAFTA working group met in tandem with CAFTA negotiating groups to help CAFTA countries develop national strategies for implementing the agreement. These trade agreement-specific working groups are led by USTR. Agency officials told us that they meet as frequently as once a month to coordinate trade capacity building at the policy level and that the meetings are informal and have no written guidelines or minutes. A USTR official said that the U.S. Trade Representative places primary importance on coordinating trade and development policy and that this has been critical to the successful negotiation of free trade agreements with Morocco, Central America, and Chile. According to one interagency group participant, USTR informs the group about progress in ongoing free trade agreement negotiations and any trade capacity building needs emerging from the negotiations. Agency attendees then exchange information about their trade capacity building activities to determine whether any might meet negotiating countries’ needs. Although USTR might suggest possible trade capacity building initiatives, specific trade capacity building projects do not typically emerge from the meetings but are worked out later. One official said that USTR likes to go into negotiations with information on what trade capacity building assistance agencies are already providing countries. The meetings are mostly informational, ensuring that all U.S. agencies “speak with one voice” on trade capacity building, according to this official. Another official said that USTR’s role was to persuade the other agencies to provide funding for trade capacity building to support the free trade agreements and that the agencies then provide what they can. A CAFTA-dedicated trade capacity building working group met in tandem with the six CAFTA negotiating groups during each of the nine rounds of CAFTA negotiations. 
Each CAFTA country prepared a national strategy to define and prioritize its trade capacity building needs. U.S. agencies, five international institutions, corporations, and nongovernmental organizations were to provide trade capacity building assistance. According to a USAID official, the CAFTA trade capacity building working group had no direct role in the negotiations and did not influence the outcome of the negotiations. Rather, it strove to ensure that countries were made aware of the trade capacity building assistance available or already provided to them. The trade capacity building assistance that emerged from the CAFTA trade capacity building working group included both reorienting existing activities and creating new ones. For example, USAID funds in Honduras were redirected to establish a trade unit in one of the Honduran ministries, helping it determine staffing needs and providing some office equipment. Examples of new initiatives coming out of the working group were a commercial law diagnostic tool and a new regional program to help countries meet the customs reforms called for in a CAFTA chapter. According to participants, the CAFTA country national strategies and the process for creating them were important tools for prioritizing and focusing CAFTA countries’ trade capacity building needs. One USAID official explained that, at first, CAFTA countries created “wish lists” that were somewhat unrealistic, asking for projects beyond the scope of donor resources. An official in the USAID mission to El Salvador stated that the mission and other donors have worked with the El Salvador Ministry of Economics to develop trade capacity building project profiles, a common template to prioritize trade capacity building needs. Ultimately, the profiles reflected the needs and priorities of both sides. The National Action Plan for Trade Capacity Building issued in July 2003 by the government of El Salvador emerged in part from this exercise.
The plan lays out what trade capacity building is needed to help El Salvador prepare for, participate in, and implement CAFTA and transition to free trade. According to USAID officials, the Ministry of Economics used the plan in its strategic planning. One USAID official stated that the national plans are meant to be flexible as needs change but should impose discipline on donors and recipients to stay within agreed-upon priorities. Government of El Salvador officials stated that the CAFTA trade capacity building process helped donors to better coordinate their assistance and will encourage the enforcement of environmental laws. A USDA official said that the trade capacity building interagency group meetings have given agencies insight into U.S. views on free trade agreements and have sometimes alerted USDA to agriculture policy issues about which it was unaware. A USTDA official stated that the interagency meetings have improved coordination among agencies and helped USTDA focus on trade capacity building activities with the most value to recipients and donors. The official also said that USTR and other agencies have become more aware of what each is doing to provide trade capacity building. In addition, the meetings have helped agency officials form relationships and contacts to better provide trade capacity building. U.S. agency officials in Washington had positive comments about the CAFTA trade capacity building coordination process. One USDA official called the process an agile mechanism to provide assistance quickly and pull the right people together to provide it. A USTDA official said that the process had helped negotiators “sell” CAFTA to CAFTA countries because countries are getting concrete help on specific projects such as port modernization that have tangible benefits. An official from the Department of State called it a rapid response mechanism.
Officials from the Departments of Labor and the Treasury and USAID stated that the system allowed donors to identify country needs and avoid duplication. For example, USAID was able to plan its customs work appropriately when the Inter-American Development Bank informed the CAFTA trade capacity building working group that it was providing a regional customs program. For the most part, the six agencies we reviewed are neither systematically monitoring nor measuring program performance against program goals in terms of building trade capacity, nor are they evaluating the effectiveness of their trade capacity building activities. While some of the agencies we reviewed have set program goals for building trade capacity, most have neither developed performance indicators related to trade capacity building nor compiled and analyzed performance data in terms of building trade capacity. USAID presented goals for building trade capacity in its March 2003 strategy, Building Trade Capacity in the Developing World, but with only a limited number of performance indicators for monitoring results and measuring performance against those goals. USDA’s trade capacity building strategy does not include performance indicators. Although USAID officials have called developing trade capacity building performance indicators difficult, they are working toward that end independently and with other donors. Among the six agencies we reviewed, only USAID and USDA have strategies for trade capacity building other than what is contained in strategic plans and annual performance plans. As shown in table 3, USAID’s 2003 strategy lays out goals with a limited number of trade capacity building performance indicators to measure performance against goals.
USDA’s trade capacity building strategy, which focuses on promoting a rules-based regulatory framework for agricultural trade and on supporting better understanding of agricultural biotechnology, contains no performance indicators. A performance indicator is a specific value or characteristic to measure output or outcome. An “output measure” records the actual level of activity or whether the effort was realized and can assess how well a program is being carried out. An “outcome measure” assesses the actual results, effects, or impact of an activity compared with its intended purpose. USAID has acknowledged the difficulty of developing a set of trade capacity building performance indicators for missions to use in their performance monitoring plans. A USAID official stated that the agency had not, to date, developed a set of trade capacity building indicators because most of the agency’s trade capacity building activities focus broadly on economic development, whose benefits are difficult to quantify. Although currently many missions use increased exports as an indicator, one USAID official pointed out that exports can increase for reasons unrelated to trade capacity building. The USAID official also said that coming up with indicators is sometimes less of a problem than collecting the data, which can be hard to come by in many developing countries. For instance, USAID contractors may have to rely on a country’s private sector to obtain data on value-added products since the local government would not collect such data. On a small project, with individual firms, this would be feasible, but it would be costly for a whole sector, the official said. Furthermore, another USAID official stated that USAID has struggled to help missions understand the distinction between economic development projects and trade capacity building projects. 
For instance, the official said that, although USAID had undertaken many agricultural projects in the past, many project activities were not linked to markets and trade. One USDA official said, however, that he considered the development of new institutions, laws, and regulations to be good performance indicators for trade capacity building efforts. Despite the challenges of monitoring and measuring the results of trade capacity building assistance, USAID is working on its own and through the international community to develop trade capacity building performance indicators for missions to use in their performance monitoring plans. USAID has contracted for a consultant’s study and expects to have a draft report on trade capacity building indicators in the near future. Furthermore, USAID is not alone in dealing with the difficulties of evaluating trade capacity building efforts, as other countries face the same issues. USAID has been collaborating with other country donors through the OECD’s Development Assistance Committee to develop a common framework for results monitoring and assessment of trade capacity building efforts. To date, the OECD committee members have discussed a flexible “tool kit” of trade capacity building indicators, in recognition that the wide range of trade capacity building projects would argue against using the same indicators for all trade capacity building activities. Based on our interviews, U.S. agencies have not specifically conducted program evaluations to assess the effectiveness of their trade capacity building efforts. Program evaluation is an assessment of the effects of a program or policy that can measure unintended results, both good and bad, and can be used to validate or find error in a program’s basic purposes and premises. The Government Performance and Results Act (GPRA) called for agencies to improve congressional decision making by providing more objective information on the relative effectiveness and efficiency of their programs and spending.
According to agency officials with whom we spoke, some agencies have evaluated their activities but not in relation to trade capacity building. For instance, Department of Labor officials stated that Labor evaluates its projects against project-specific objectives that are not trade capacity building objectives. USTDA officials stated that they have a set of measures for the development effectiveness of each of their activities. USTDA officials stated further that, while they have recently developed a system for evaluating the development impact of their activities over the next 6 years, the system is not meant to measure trade capacity per se. USTDA officials do believe, however, that their development impact measures will in most cases ultimately serve as a good proxy for measuring trade capacity building impact. Examples of effectiveness in the USTDA system would include the percentage of activities that lead to the adoption of market-oriented reforms or result in the transfer of advanced technology to increase productivity. Finally, USTDA officials emphasized that few of their trade capacity building activities are mature enough to be evaluated. The Department of State evaluates the effectiveness of its International Visitor Leadership Program with general anecdotal feedback from participants. One USDA official said that the agency has not done any assessments specifically of the effectiveness of its trade capacity building efforts but that USDA does conduct program evaluations. For example, one USDA evaluation concluded that a refrigeration project resulted in improved refrigeration management and a 60 percent reduction in one company’s perishable losses.
USAID reported in May 2004 that it has conducted fewer program evaluations overall since instituting its performance measurement system under GPRA, replacing them with annual reports “that measure progress toward specific goals on a country by country basis” rather than evaluating the effectiveness of the program as a whole. In addition, one USAID official said that USAID moved away from using formal evaluations about 10 years ago because of a lack of personnel. In October 2004, USAID issued the report, USAID Trade Capacity Building Programs: Issues and Findings, which examined issues related to USAID’s trade capacity building assistance programs. The report concluded that USAID should collect and analyze more trade capacity building data to monitor results and use those results to conduct program evaluations. It also said that USAID should conduct more and better evaluations of its trade capacity building projects to know what approaches work best and under what conditions. While several of the agencies we reviewed emphasized trade capacity building in their strategic plans or annual performance plans, and two agencies have produced trade capacity building-specific strategies, the lack of performance data linked to trade capacity building limits their ability to monitor and measure current results. In addition, without evaluations identifying what trade capacity building activities are effective, the agencies will have difficulty determining whether their efforts are achieving their overall trade capacity building goals. Finally, as we discuss in appendix IV, greater openness to international trade can have a variety of effects, both positive and negative, on different aspects of developing countries’ domestic economies. Therefore, evaluating the effectiveness of trade capacity building efforts is important to identify those that build trade capacity and those that do not and to determine if any negative effects should be mitigated.
The executive branch and the Congress have elevated trade capacity building as a crucial tool for U.S. trade and development policy. This warrants a comprehensive, coordinated approach to its delivery, based on solid evidence of its effectiveness in generating economic development and growth through trade. The challenge is that the estimated $2.9 billion in U.S. trade capacity building assistance covers multiple categories of assistance across numerous types of trade and development programs that have many goals and are implemented by multiple agencies. The cross-cutting nature of this assistance argues for a coordinated approach to its implementation. The trade capacity building interagency group has demonstrated that a coordinated approach is possible under the right circumstances by bringing agencies together to deliver relevant, focused, and timely technical assistance to countries participating in free trade agreements. The cross-cutting nature of trade capacity building also makes it difficult to evaluate. While agencies track the results of individual activities, they do not consistently do so in terms of building trade capacity, in part due to the relative newness of the concept and the lack of a common framework for evaluation. USAID is working independently and in conjunction with other country donors to develop a common set of indicators to monitor and measure performance and to assess trade capacity building effectiveness. Without evaluating the effectiveness of its trade capacity building assistance, the United States cannot ensure the reasonable use of resources devoted to such assistance, determine whether the assistance is helping countries participate in and benefit from trade, and credibly demonstrate that trade capacity building is a useful U.S. trade and development policy. To provide more objective information on the progress of U.S.
trade capacity building efforts and allow the United States to assess their effectiveness, we make the following two recommendations: The Administrator, U.S. Agency for International Development, and the U.S. Trade Representative, as co-chairs of the trade capacity building interagency group, in consultation with other agencies that fund and implement trade capacity building assistance, should develop a cost-effective strategy to systematically monitor and measure program results and to evaluate the effectiveness of U.S. trade capacity building assistance. The Administrator, U.S. Agency for International Development, should direct the agency to set milestones for completing its efforts to develop trade capacity building performance indicators to be used by (1) its field missions to monitor and measure the results of their trade capacity building efforts and (2) its relevant agency bureaus to conduct periodic program evaluations. The U.S. Agency for International Development should share its findings with other agencies that fund and implement trade capacity building assistance. We provided a draft report to the Office of the U.S. Trade Representative, the U.S. Agency for International Development, the Departments of Agriculture, Labor, State, and the Treasury, and the U.S. Trade and Development Agency. We received technical comments from the Office of the U.S. Trade Representative, the U.S. Agency for International Development, the Departments of Labor and State, and the U.S. Trade and Development Agency. The Department of Agriculture provided no comments. We received written comments from the U.S. Agency for International Development, the Office of the U.S. Trade Representative, and the Department of the Treasury, which are reprinted in appendixes V through VII. The U.S. Agency for International Development agreed with our two recommendations.
Regarding our first recommendation, USAID emphasized the importance of considering the large number of agencies involved, the diversity of trade capacity building programs, and the cost- effectiveness of different approaches to monitoring and evaluating trade capacity building activities. In addition, USAID believed that developing a monitoring and evaluation system should be done selectively, starting with programs with the clearest links to building trade capacity. Regarding our second recommendation, USAID noted that the Administrator had directed the agency to reinstate its overall project evaluation efforts, and that USAID had several ongoing efforts to support the recommendation. USAID noted, however, that standard indicators designed to report on agencywide trade capacity building program performance will not be sufficient to monitor the effectiveness of all aspects of every trade capacity building project. USAID country missions will need to continue to develop specialized indicators that are tailored to local goals, opportunities, constraints, and needs. The Office of the U.S. Trade Representative reiterated the important role of trade capacity building in linking trade and development by providing developing countries with the tools to maximize trade opportunities offered by multilateral and bilateral trade agreements and trade preference programs. In addition, USTR believed that the discussion in the report about interagency coordination demonstrated the importance of linking trade capacity building needs with the needs generated by trade negotiations. The Department of the Treasury complimented our assessment, stating that the report provided a good example of cooperation and mutual support between the Department’s Office of Technical Assistance and USAID in providing trade capacity building. Treasury also emphasized that its role in helping countries to institute financial reforms contributed to building trade capacity. 
We will send copies of this report to appropriate congressional committees and to the U.S. Trade Representative; the Administrator, USAID; the Secretaries of the Departments of Agriculture, Labor, State, and the Treasury; and the Director, U.S. Trade and Development Agency. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2717 or at [email protected]. Another GAO contact and staff acknowledgments are listed in appendix V. The Chairman and Vice Chairman of the House Subcommittee on National Security, Emerging Threats, and International Relations, Committee on Government Reform, asked us to provide information about U.S. trade capacity building assistance. This report (1) identifies the nature and extent of U.S. trade capacity building; (2) describes how agencies implement such assistance, including coordination; and (3) assesses whether agencies evaluate its effectiveness. To address these objectives, we reviewed agency information on trade capacity building programs. We visited overseas missions in Egypt, El Salvador, and Ghana; we chose these countries because they were among the 20 countries receiving the most trade capacity building funding, and they represented different regions and income levels (low and middle income). We initially interviewed officials from 12 U.S. agencies responsible for trade capacity building activities. We then narrowed this down to the six agencies that funded and implemented 96 percent of trade capacity building assistance. These agencies were the Departments of Agriculture, Labor, State, and the Treasury, the U.S. Agency for International Development (USAID), and the U.S. Trade and Development Agency. We also interviewed officials from the Office of the U.S. Trade Representative, an agency with a coordination role. 
We concentrated our review on USAID as it provided the bulk of the funding. To describe the nature and extent of trade capacity building, we reviewed documents from the World Trade Organization regarding the Doha ministerial conference in 2001 and subsequent international work on the Doha Development Agenda. We also reviewed documents from the Organization for Economic Cooperation and Development and the United Nations Conference on Trade and Development. To determine the U.S. definition of trade capacity building, we examined congressional documents providing guidance on funding and implementation, as well as relevant U.S. legislation, and relevant agency documents. We also examined the guidance and definitions specified in the U.S. government trade capacity building survey administered by USAID. We surveyed economic literature on the relationships among trade, economic growth, and development in developing countries. We examined U.S. government reports on trade capacity building assistance, annual agency reports, and agency trade capacity building planning and project documents. Because USAID—as the primary funder of trade capacity building—administers foreign aid through a decentralized organizational structure, we visited USAID missions in Egypt, El Salvador, and Ghana to observe a range of trade capacity building activities. At the missions overseas, we examined program documents and interviewed USAID officials to understand the types of trade capacity building programs the missions manage. In addition, in conjunction with our work at the missions, we held meetings with other key U.S. government officials, USAID contractors, host government ministry officials, and various trade capacity building recipients. We analyzed data from the U.S. Government Trade Capacity Building database to identify the major funding categories, agencies, and recipients of trade capacity building assistance. To assess the reliability of the U.S. 
Trade Capacity Building database, we reviewed the survey instruments used to collect the data, examining country activity sheets and survey forms, and performed our own data reliability tests. We also interviewed the USAID contractor that manages the data collection and analyzed the steps the contractor took to ensure data reliability. For example, we asked the contractor how the survey data were collected, what quality checks were performed, and what other internal controls were in place. In Washington, D.C., we asked U.S. officials at the Departments of Agriculture and Labor and the U.S. Trade and Development Agency a standard set of data reliability questions. In El Salvador and Ghana, we conducted data reliability interviews with officials at the USAID missions. We determined that the data in the database were sufficiently reliable for the purposes of identifying the major categories of trade capacity building funding, the agencies funding the trade capacity building programs, and the regions and countries receiving trade capacity building funding. To examine how agencies implement trade capacity building assistance, we examined agency documents on trade capacity building activities, strategic plans, and other relevant documents. We asked agency officials about factors affecting agency decisions concerning the type of assistance provided, the countries selected as recipients, and the amount of funding. We focused our interviews on officials at USAID, the U.S. Trade and Development Agency, and the Departments of Agriculture, Labor, State, and the Treasury at this stage because these agencies reported implementing and funding 96 percent of the trade capacity building assistance in fiscal years 2001 to 2003 (we obtained 2004 data at the end of the review). To assess how U.S. agencies coordinate the allocation of trade capacity building assistance, we reviewed published reports on trade capacity building activities, agency strategies, and program documents. 
In Egypt, El Salvador, and Ghana, we interviewed U.S. officials responsible for implementing trade capacity building activities, as well as host government officials. We observed one meeting of the interagency group on trade capacity building. To assess whether agencies evaluate the effectiveness of their trade capacity building efforts, we analyzed U.S. agency project documents, annual reports, performance and accountability reports, and reports on trade capacity building. Using interview responses and analyses of the reports and documents related to trade capacity building, we examined these agency efforts against the Government Performance and Results Act of 1993 criteria for performance monitoring and program evaluation. We also examined performance and monitoring principles used by multilateral donors and international organizations that we identified by reviewing a GAO analysis of relevant U.S. legislation and Organization for Economic Cooperation and Development documents. We conducted our work between September 2003 and November 2004 in accordance with generally accepted government auditing standards.

The U.S. government has conducted an annual survey of U.S. agencies’ trade capacity building assistance efforts since 2001. The survey collects funding data for an entire agency or U.S. Agency for International Development (USAID) mission in the given fiscal year. U.S. agency officials complete the survey by providing financial information on funds obligated for various projects and activities in a given year. Actual expenditures of funds for these activities may not occur until a year or two after the survey. For example, the fiscal year 2002 database accounts for obligations in fiscal year 2002. However, the activities may not take place, and the funds thus may not be expended, until fiscal year 2003, 2004, or even later.
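The survey's obligation-year accounting can be sketched in a few lines; the activity records and dollar amounts below are hypothetical, chosen only to show how a fiscal year's database total reflects obligations rather than actual spending:

```python
from collections import defaultdict

# Hypothetical activity records: (fiscal year obligated, fiscal year expended,
# amount in $ millions). The survey counts funds in the year they are
# obligated, even when expenditure occurs a year or two later.
activities = [
    (2002, 2002, 10.0),
    (2002, 2003, 5.0),   # obligated in FY 2002, not expended until FY 2003
    (2002, 2004, 2.5),   # obligated in FY 2002, expended in FY 2004
]

obligations = defaultdict(float)   # totals by year of obligation
expenditures = defaultdict(float)  # totals by year of expenditure
for fy_obligated, fy_expended, amount in activities:
    obligations[fy_obligated] += amount
    expenditures[fy_expended] += amount

print(obligations[2002])   # 17.5 -- what the FY 2002 database would report
print(expenditures[2002])  # 10.0 -- what was actually spent in FY 2002
```

The gap between the two totals is why a given year's database figures cannot be read as amounts spent in that year.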
To answer the survey, an agency official typically goes through all of the agency’s projects for a given fiscal year and reviews the survey guidance, including the trade capacity building categories (see table 1 for a complete list of categories), to determine which projects are related to trade capacity building. The official then assigns a percentage of each project’s total funding to a trade capacity building category. An “other” category is provided for activities that do not fit the given trade capacity building categories in the survey. In addition, activities in the database are often not discrete projects but parts of larger programs. For instance, the USAID/Egypt mission has a $200 million Commodity Import Program, but only $50 million of the program is counted as trade capacity building and included in the database. The database, created by a USAID contractor from the surveys, is available online at http://qesdb.cdie.org/tcb/index.html. It provides information by type of activity, by recipient country, and by U.S. government agency. Agencies are grouped in three ways: those that fund, those that implement, and those that both fund and implement trade capacity building activities. The database provides financial information for the period of fiscal years 1999–2004. According to the survey administrator, the technical team reviewed completed survey forms, checking for accuracy and consistency in the reporting and allocation of funding to various trade capacity building categories. In addition, the survey administrator told us that, whenever a report was ambiguous or incomplete, the technical team worked with the reporting U.S. government agency, department, or field mission to amend the data.

Assistance to developing countries for trade capacity building is based on the premise that international trade can positively benefit a country’s overall growth and development.
Economists postulate that these potential benefits come as trade increases competition and specialization, provides greater access to technology for domestic producers, expands export markets and earnings, and fosters new foreign investment and institutional reforms. However, economists have also argued that international trade can create significant challenges for developing countries, such as greater instability due to volatile export markets, increased reliance on international debt to finance trade deficits, and exacerbated income inequality and unemployment. Following the rapid growth of certain East Asian countries, and more recently China, the role of international trade in fostering growth and development has become more widely accepted. Some empirical studies have confirmed a positive relationship between trade liberalization and growth; however, others question the robustness of these results and stress that greater openness does not uniformly lead to development. Economic theory predicts a variety of ways in which international trade can positively affect a country’s growth and development. First, greater openness to imports from other countries increases competition in the country’s domestic market. This can lead to greater efficiency as less competitive producers are driven out of the market. In addition, resources will shift to more competitive producers and industries enabling them to expand. Second, these expanding domestic producers may now be able to export their products to a worldwide market, rather than sell them only in the local economy. With a larger market, some producers also may benefit from economies of scale in production; that is, they are able to reduce their costs per unit of output by producing on a larger scale. Third, overall productivity in the economy can increase due to greater competition and specialization. Competition increases the number of efficient producers and reduces the number of less efficient producers. 
Fourth, imports also may provide access to machinery and equipment that the domestic economy does not produce but are needed so domestic firms can expand. These imports may embody technology and innovations that the domestic economy lacks but which help improve labor productivity and benefit industries that use them. Increased openness to trade may also create incentives for foreign direct investment and institutional reforms, both of which may facilitate growth. For example, a more liberal trading regime that reduces costs on both imported manufacturing inputs and exported final products may create incentives for foreign producers to invest in new production in the domestic market since the cost of foreign-produced components used domestically is lower and producers can export more competitively. Lower tariffs mean the domestic industry can import components used in their final products more cheaply, while lower export taxes enable the final products to be sold at a lower price internationally. Increased foreign investment expands developing countries’ stock of capital, technology, and managerial expertise, which may expand production directly through new subsidiaries and have positive spillover effects on other companies and industries in the economy. Trade liberalization also may positively affect institutional development and reform. For example, some economists argue that greater competition from imports may encourage institutional reforms and reduce corruption by reducing the monopoly power of domestic interests that benefited from the protected market. At the same time, export industries that are expanding to take advantage of opportunities in the world market have an incentive to lobby for further reforms that increase the competitiveness of the domestic economy. Economists have also pointed to a variety of significant challenges that international trade raises for developing countries. 
For example, many developing countries have significant exports of primary products, such as agriculture and raw materials. Dependence on these types of exports, particularly for countries that generate their export earnings from a few products (such as coffee, cocoa, or bananas), creates large economic fluctuations since primary product prices tend to be relatively unstable. In addition, many developing country exporters also have faced deterioration in their terms of trade, as the prices of their export products fell relative to the prices they paid for their imports. This can create a situation in which trade barrier reductions in the domestic market increase demand for imports and displace domestic production, but export sectors do not expand to capture these resources because prices in world markets are declining. Consequently, the gap between export earnings and import payments may lead developing countries to maintain current account deficits. This means that more foreign currency is paid for imports than is received from selling exports. To acquire foreign currency to cover this deficit, countries need an inflow of foreign financial assistance, either through private investment or public assistance (such as loans and aid). Persistent current account deficits were partly responsible for the accumulation of debt among developing countries in the 1980s and 1990s. Some economists point out that, although trade may benefit a country’s growth and overall wealth, distributional problems such as wage inequality, unemployment, and poverty may accompany this growth and be contrary to a country’s development goals. For example, trade liberalization may worsen a country’s income distribution and reduce the wages of low-skilled workers if it encourages (as a result of increased foreign competition) the adoption of technologies that favor more skilled workers.
In addition, the economic changes induced by greater competition may affect workers, industries, and communities disproportionately. The potentially positive role of international trade in economic growth and development is not a new concept. Classical economists such as Adam Smith and David Ricardo argued for the benefits of international trade for economic growth. In the twentieth century, the rise of trade barriers among the major trading nations and the resulting decline of international trade have been cited as among the reasons for the depth and duration of the worldwide recession in the 1930s. Following World War II, the reduction of trade barriers among trade partners was seen as an important component of the world economic system. The General Agreement on Tariffs and Trade was inaugurated in 1947 and then followed by successive rounds of negotiations, which resulted in the formation of the World Trade Organization in 1995. Similarly, the United Nations Conference on Trade and Development was formed in 1964 because of a general understanding that trade and development were interrelated. Despite these developments, economists and developing countries from the 1950s through the 1970s held divergent views about the best policies for growth and development. These views involved engaging the world market versus sheltering certain industries from competition until they were better able to compete. Ultimately, the divergent experiences of developing countries over this period led to a broader acceptance of the role of openness to international trade in fostering economic growth and development. Many countries, such as Argentina, El Salvador, Ghana, and Nigeria, pursued an inwardly focused development strategy known as import substitution. This strategy focused on restrictive trade policies that sought to protect certain domestic industries in order to foster a diverse industrial base.
On the other hand, certain East Asian economies, including Hong Kong, Korea, Singapore, and Taiwan, pursued a more outwardly focused development strategy known as export promotion, which sought to encourage industrial development by tapping into larger export markets rather than relying on protected domestic markets. Although the debate between these two broad approaches has swung back and forth, export promotion, and trade liberalization in general, was more broadly accepted by the 1980s as the dominant development strategy. This was due in large part to the rapid growth of the East Asian economies, as well as China more recently, and the relatively stagnant growth of many countries that pursued more restrictive policies. Openness to trade, sound fiscal and monetary policy, security of property rights, and privatization were key policy prescriptions in what became known as the “Washington Consensus.” This consensus generally characterized the advice of the World Bank and International Monetary Fund (both based in Washington, D.C.) to developing countries. As a result, since the 1980s, a variety of countries have liberalized their trade regimes by reducing trade barriers through unilateral, bilateral, regional, and multilateral trade negotiations. The range of policies that affect the trade openness of particular countries makes it difficult to measure levels of openness over time and across countries. However, a wide variety of evidence shows that developing countries have liberalized their trade regimes extensively over the past two decades. For example, average tariffs of developing countries have fallen from around 36 percent in the early 1980s to around 16 percent currently, based on World Bank and International Monetary Fund statistics. However, the trend to greater openness varied among regions and countries, with Latin America tending to move the most rapidly and comprehensively, while South Asian countries made little progress until the 1990s. 
For Ghana and Egypt—countries we visited in our work—average tariffs were similar in the early 1980s at 43 percent and 47 percent, respectively. However, Ghana reduced its average tariff much more rapidly than Egypt, so that currently Ghana’s average tariff is about half that of Egypt (16 percent compared to 30 percent). A large economics literature exists on the relationship between trade and growth. Many studies have attempted to empirically measure (and confirm) the relationship between a country’s level of openness to trade and per capita income, or the relationship between changes in trade flows and changes in gross domestic product (growth). For example, regularly cited research by economists David Dollar, Aart Kraay, Jeffrey Sachs, Andrew Warner, Dan Ben-David, and Sebastian Edwards generally finds an important relationship between changes in trade flows or liberalization and growth rates across countries. The studies construct measures of openness to trade and econometrically estimate the relationship to growth, controlling for causality (e.g., growth may also spur increased trade) and other factors that affect growth. Similarly, research over the past 15 years by economists Robert Hall, Charles Jones, Andrew Rose, Jeffrey Frankel, and others has found that large differences across countries in the level or the growth rate of real GDP per capita may be systematically related to the level (or degree) of openness of those countries. However, these studies have also found that institutional quality, such as the effectiveness of government, is an important factor affecting growth and one that is difficult to separate from the effects of openness. Although there is a general acceptance that trade can play an important role in economic development, some economists have criticized the methodologies used to study the relationship between openness and growth.
For example, Francisco Rodriguez and Dani Rodrik argue that methodological problems in this literature leave the results open to diverse interpretations. They find little convincing evidence that changes in trade policy (i.e., reductions in government-imposed trade barriers) are significantly associated with economic growth. One challenge that affects the robustness of studies trying to estimate the impact of trade liberalization on economic growth is constructing reliable and reasonable measures of “openness.” The few measures that are relatively widely available, such as tariff rates, do not fully capture the wide range of policies that governments may put into place to affect trade. Data are not readily available on barriers other than tariffs (e.g., nontariff barriers such as quotas) for many developing countries. Furthermore, for those countries for which some data are available, generally only information on whether or not nontariff barriers are in force is available, rather than precise information on their relative restrictiveness or actual effect on trade. In addition, less is known about the relationship between trade capacity building and other factors affecting economic growth and development (such as institutions and human capital). Although increased trade appears to be potentially beneficial to growth and development, countries that have liberalized over time have had mixed experiences. As mentioned above, institutional factors also appear important, as do geographical factors (proximity to trade partners), in the extent to which countries benefit from greater connectedness to the world economy. Countries in sub-Saharan Africa have remained relatively less developed and marginalized compared to developing countries elsewhere, despite undergoing some degree of trade liberalization. Trade liberalization alone does not appear to be a sufficient criterion for development but is one of several important factors. 
Also, the speed at which the global economy evolves may initially benefit a developing country but later pose difficulties as labor tries to adjust to new conditions. For example, the removal of textile and apparel trade restrictions on January 1, 2005, by developed economies such as the United States and European Union will allow China and other large clothing producers to compete against other developing countries for their market share previously protected by the quotas. Some economies may have difficulties adjusting to rapid changes in their export markets after having built up significant industries under the quota system.

The following are GAO’s comments on the U.S. Agency for International Development’s letter dated January 18, 2005.

1. We have made changes in the report language to recognize that, while many U.S. trade capacity building efforts are existing activities, some trade capacity building activities are new.

2. To ensure accountability, it is GAO policy to address recommendations to agency officials, rather than to the agency as a whole. However, we have added the term “cost-effective” to the recommendation as suggested in the letter.

In addition to the individual named above, Nina Pfeiffer, Rhonda Horried, Ann Baker, and Tim Wedding made key contributions to this report. Martin De Alteriis, Lynn Cothern, Etana Finkler, Curtis L. Groves, and Ernie Jackson also provided assistance.

Many developing countries have expressed concern about their inability to take advantage of global trading opportunities. The United States considers this ability a key factor in reducing poverty, achieving economic growth, raising income levels, and promoting stability. U.S. trade capacity building assistance is designed to address these concerns. GAO (1) identified the nature and extent of U.S. trade capacity building; (2) described how agencies implement such assistance, including coordination; and (3) assessed whether agencies evaluate its effectiveness. U.S.
trade capacity building is primarily a collection of existing trade and development activities placed under the umbrella of trade capacity building. The U.S. government initiated an annual governmentwide survey in 2001 to identify U.S. trade capacity building efforts, which it defined as assistance meant to help countries become aware of and accede to the World Trade Organization (WTO); implement WTO agreements; and build the physical, human, and institutional capacity to benefit from trade. U.S. agencies self-reported that they had provided almost $2.9 billion in trade capacity building assistance to over 100 countries from fiscal years 2001 through 2004. The Agency for International Development (USAID) reported providing about 71 percent of the trade capacity building funding. Agencies are coordinating their assistance through the trade capacity building interagency group formed in 2002 to help countries negotiate and implement U.S. free trade agreements. Most of the U.S. agencies we reviewed are not systematically measuring the results of their trade capacity building assistance or evaluating its effectiveness. Although some agencies have set program goals for building trade capacity, they have not generally developed performance indicators, compiled data, or analyzed the results in terms of building trade capacity. USAID's March 2003 strategy for building trade capacity includes a limited number of performance indicators. USAID officials have stated that developing such indicators is difficult but have begun work independently and with other international donors toward that end. Without a strategy for evaluating the effectiveness of its trade capacity building assistance, the United States cannot identify what works and what does not work to ensure the reasonable use of resources for these efforts. |
The FBF, which is administered by GSA, is an intragovernmental revolving fund authorized and established by the Public Buildings Amendments of 1972. Beginning in 1975, the FBF replaced appropriations to GSA as the primary means of financing the operating and capital costs associated with federal space owned or managed by GSA. GSA charges federal agencies rent, and the receipts from the rent are deposited in the FBF. Congress exercises control over the FBF through the appropriations process that sets annual limits on how much of the fund can be expended for various activities. In addition, Congress may appropriate additional amounts for the FBF.

The FBF operates as follows. Initially, as part of the President’s budget preparation process, GSA estimates the rental revenue the FBF is expected to receive. The rent estimate is prepared about 18 months in advance of the fiscal year. Through the appropriation process, Congress establishes annual limits on how much of the fund can be expended for various activities. As revenues are received, they are deposited into the FBF, and, subsequently, GSA is to fund various projects and programs within the limits set by Congress. Descriptions for some of these budget activities are shown in table 1.

Our first objective was to verify, to the extent practical, the amounts GSA attributed to the individual reasons for overestimation of the FBF rental revenue projections for fiscal years 1996, 1997, and 1998. To do this, we developed an understanding of the rental revenue estimation process that PBS used. We (1) discussed with PBS program officials and staff the basic steps involved in the process used for fiscal years 1996 through 1999; and (2) reviewed studies of the process done by an internal PBS review team, two consulting firms, and GSA’s Inspector General.
Further, we examined documents that supplied supporting details, such as a PBS listing of buildings associated with a particular reason, and we discussed each reason for the overestimation and the amount attributed to it with PBS program officials and staff. Our second objective was to determine whether PBS’ corrective actions appeared to address GSA’s identified reasons for the overestimation. We also determined if the corrective actions addressed the weaknesses in the estimation process that we and others identified. To do this, we interviewed PBS officials and staff, reviewed documentation associated with the actions, and observed the operation of a new management information system PBS is developing to help it estimate rental revenues, among other things. On the basis of our knowledge of the estimation system and the proposed or actual corrective actions to the system, we determined whether the corrective actions appeared to address GSA’s identified reasons for the overestimation and other identified weaknesses. Our third objective was to determine the budgetary impact of the overestimation on projects and programs in the FBF. To accomplish this, we developed an understanding of the process by which PBS identified sources of obligational authority that had the potential for inclusion in the fiscal year 1997 obligational reserve. Specifically, through interviews with PBS officials and review of documentation they maintained about the process, we developed an understanding of how PBS became aware of the magnitude of the overestimation problem—$680.5 million—and the action those officials took to identify specific sources of obligational authority. We reviewed the process that PBS used to identify unobligated balances that could be included in the reserve. 
Both new construction and modernization projects potentially could be included because such projects were experiencing delays that made it unlikely that they would need the obligational authority available in fiscal year 1997. We further developed information on how PBS officials narrowed the pool of potential new construction and repair and alteration projects to the final 11 new construction projects included in the reserve. Concerning the sources of the unobligated fiscal year 1996 balances included in the reserve, we obtained both the regional and headquarters final fiscal year 1996 allowances and the end-of-year obligated balances. However, we did not verify the data on allowances and the end-of-year obligated balances with regional officials or regional records. Finally, PBS headquarters officials provided us with the reasons they believed the unobligated balances existed. In reviewing the budgetary impact of the overestimation on projects and programs, we assessed PBS’ claim that none of the new construction projects included in the reserve was delayed in awarding a construction contract because of its inclusion in the reserve. We did so by discussing the projects with PBS headquarters and regional officials as well as staff of the Administrative Office of the United States Courts (AOUSC) to obtain general background information on the projects and the dates and reasons given for schedule delays. We did not do a detailed review of the project files or the history of the projects before they were included in the reserve. Also, we reviewed the GSA and OMB statements that the impact of the funding problem on the FBF would be eliminated by the end of fiscal year 1998. We verified that GSA had proposed a fiscal year 1998 program of new construction and modernization projects and that GSA’s fiscal year 1998 appropriation did not provide obligational authority for that program.
We discussed the impact of the deletion of funding for new construction projects with AOUSC officials to identify the effect on the courts’ immediate and long-range construction programs because the courts’ projects constituted the bulk of PBS’ proposed $594.5 million in fiscal year 1998 funding for new construction. We did not attempt to estimate the dollar impact on specific projects as a result of lack of fiscal year 1998 funding because GSA’s proposed program of projects may have been altered by OMB and congressional reviews prior to obligational authority being provided in GSA’s appropriation law. We did our work primarily at GSA headquarters in Washington, D.C., between July 1997 and June 1998, in accordance with generally accepted government auditing standards. On July 30, 1998, we requested comments on a draft of this report from GSA’s Administrator. GSA’s comments are discussed at the end of this report.

Beginning with fiscal year 1994 and continuing through fiscal year 1997, PBS’ actual annual rental revenues were less than the estimated rent revenue PBS projected for budget and appropriation purposes. PBS, in fiscal years 1997 and 1998, took two actions to deal with the overestimation. First, PBS refrained from using about $680.5 million in obligational authority that Congress had previously provided. Second, PBS reduced operating expenses by deferring planned expenditures until later years. It also took steps to address the weaknesses that were identified in the process used to estimate rental revenues for the budget. Figure 1 shows FBF’s estimated and actual income for fiscal years 1990 through 1997. The FBF’s actual rent revenue has grown from about $2.5 billion in fiscal year 1987 to about $4.8 billion in fiscal year 1997.
GSA’s historical trends of estimated rental revenue versus actual rental revenue show that actual rental revenues were less than estimated rental revenues for each of fiscal years 1994 through 1997, by amounts ranging from about $110.7 million, or 2.4 percent of the estimate, in fiscal year 1995 to about $422.1 million, or 8.2 percent of the estimate, in fiscal year 1996. For fiscal years 1994 and 1995, PBS’ overestimation of rental revenue was a combined total of $308.1 million. According to its Chief Financial Officer, in fiscal years 1994 and 1995 PBS absorbed the overestimation by reducing planned expenditures and using unobligated carryover balances without the need for congressional action. In January 1997, PBS informed Congress that it expected its total overestimation of rental revenue for fiscal years 1996 and 1997 to be $847 million. As shown in table 2, PBS identified seven reasons for the overestimation and linked specific dollar amounts to each reason. In July 1997, PBS increased the overestimation figure for fiscal year 1997 by $86.8 million and reported a potential overestimation in fiscal year 1998 of about $109.2 million. As a result, the total anticipated overestimation for fiscal years 1996 through 1998 was about $1.04 billion. However, after it closed its fiscal year 1997 books, PBS reported the actual budget impact of its overestimation to be $634.4 million for fiscal years 1996 and 1997 and reduced its fiscal year 1998 overestimation to $28.3 million. In our March 1998 testimony on PBS’ overestimation of the FBF rental revenue projections, we reported that PBS provided documentation supporting the amount of the overestimation for six of the seven reasons shown in table 2. Although we examined the documentation PBS provided to explain its overestimation, we did not trace all the data compiled by PBS back to the original source documents. 
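The variance percentages cited above follow directly from the reported dollar amounts; a minimal sketch in Python (the estimates are back-calculated from the reported overestimates and percentages, so they are illustrative rather than figures taken from GSA budget documents):

```python
def variance_pct(estimated: float, actual: float) -> float:
    """Overestimation expressed as a percentage of the estimate."""
    return (estimated - actual) / estimated * 100

# Figures in millions of dollars. Each estimate is back-calculated from
# the reported overestimate and percentage (illustrative assumption).
fy1995_estimate = 110.7 / 0.024   # roughly $4,613 million
fy1996_estimate = 422.1 / 0.082   # roughly $5,148 million

print(round(variance_pct(fy1995_estimate, fy1995_estimate - 110.7), 1))  # 2.4
print(round(variance_pct(fy1996_estimate, fy1996_estimate - 422.1), 1))  # 8.2
```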
PBS could not provide documentation showing how it developed the $86 million attributed to the reason that the original fiscal year 1995 rent revenue estimate was higher than actual fiscal year 1995 revenues. We also reported in our testimony that during the course of our work, we determined that weaknesses in PBS’ estimation process contributed to the rental income overestimation. Through discussions with PBS staff and review of studies done by (1) the firms of Ernst and Young and Arthur Andersen—consultants hired by PBS, (2) the GSA Inspector General, and (3) the Rent Revenue Forecasting GO Team—an internal GSA review team established to look at PBS’ rental revenue estimation process—we identified several weaknesses in the process for estimating rental revenues. These weaknesses included the following: lack of documented policy and procedures for the estimating process; unclear lines of responsibility and accountability for revenue estimates below the level of the PBS Commissioner; lack of supporting documentation necessary to verify forecast information and assumptions; and use of national averages, rather than project-specific data, to forecast occupancy schedules and rental rates. Finally, we reported that GSA was aware of the identified weaknesses in its revenue estimation process and had corrective actions to improve this process either already under way or planned. These corrective actions included the following: documentation is to be required for all decisions, assumptions, and steps involved in the rental revenue estimation process; the Office of Financial and Information Systems, with overall responsibility for the rental revenue forecasting process, was established; project-specific data are to be used in occupancy schedules and rental rates instead of national averages; and a new information system is being implemented to manage, track, and access data, with plans for a revenue forecasting module to be added to the system. 
We concluded that the actions PBS had under way and planned to improve the process it uses to estimate rental revenue appeared to address the weaknesses that we and others had identified. If effectively implemented, these actions should help improve future revenue estimates. However, as PBS points out, because its rental revenue estimate is a forecast, it is unlikely to produce a figure that is identical to actual rental revenue. Although some variance is to be expected in any estimating process, variances that go beyond a certain level can be indicative of estimating problems that need to be addressed. In this regard, we stated in our testimony that PBS had not established an acceptable margin of error against which it could measure the success of its estimation process. We said that having such a benchmark would put PBS in a better position to identify variances that need to be investigated so that it can explore and fix the causes of excessive variances, improve its estimation process, and determine its effectiveness over time. We recommended that the PBS Commissioner establish an acceptable margin of error for its rental revenue estimates, as well as a process for exploring and resolving causes of variances outside the margin adopted. In a letter dated June 11, 1998, the GSA Administrator notified us that PBS had established 2 percent as a reasonable margin of error and was developing a reconciliation process. Considering the need to prepare estimates 18 months in advance and the steps involved in the estimating process, such as identifying revenue changes for each building, 2 percent does not seem to be an unreasonable margin of error. In late spring 1996, PBS identified a potential revenue gap for fiscal years 1996 and 1997. During fiscal year 1997, PBS officials acted to address the FBF overestimation problem by preventing the use of FBF obligational authority that could not be covered by FBF resources. 
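The 2 percent benchmark described above amounts to a simple reconciliation test. The following is a hedged sketch: the check is our illustration, not PBS’ actual reconciliation process, and the fiscal year 1996 figures are back-calculated from the reported $422.1 million overestimate (8.2 percent of the estimate).

```python
def exceeds_margin(estimated: float, actual: float, margin_pct: float = 2.0) -> bool:
    """Flag an estimate whose variance from actual revenue exceeds the margin."""
    variance = abs(estimated - actual) / estimated * 100
    return variance > margin_pct

# Millions of dollars; the estimate is back-calculated and illustrative.
fy1996_estimate = 422.1 / 0.082
fy1996_actual = fy1996_estimate - 422.1

print(exceeds_margin(fy1996_estimate, fy1996_actual))  # True: 8.2 percent variance
print(exceeds_margin(1000.0, 985.0))                   # False: 1.5 percent variance
```

Under such a check, a fiscal year 1996-sized variance would be investigated, while a miss within 2 percent would be treated as ordinary forecasting noise.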
PBS determined the size of the obligational authority that was in excess of the FBF resources using both actual fiscal year 1996 operating data and estimates for fiscal year 1997 (see table 3). To address the $680.5 million in obligational authority in excess of available resources, PBS officials created an obligational reserve at the beginning of fiscal year 1997. The intent of the reserve was to ensure that available obligational authority would not be used until revenue was available to cover those obligations. The reserve was composed of funds from the four FBF budget activities, as shown in table 4. To identify sources of obligational authority that could potentially be included in the reserve, PBS officials told us that they initially identified the FBF activities that had unobligated balances at the close of fiscal year 1996. As a result of those efforts, PBS officials identified and included in the reserve $176 million. To identify the additional $504.5 million needed for the reserve, in October and November 1996, PBS officials analyzed the FBF new construction and acquisition, and repair and alteration budget activities. They identified 11 new construction projects, with $591.6 million in unobligated funds, for inclusion in the reserve. Details of the sources of the funds included in the reserve are discussed below. To fund development of some facilities, PBS initially borrows the required funds and subsequently makes regular payments to the lender. The FBF spending authority that funds these annual payments is the installment acquisition payment budget activity. In fiscal years 1996 and 1997, the new obligation authority appropriated for this budget activity amounted to about $182 million and $173 million, respectively. 
PBS officials told us that when they initially reviewed the various FBF budget activities for available fiscal year 1996 unobligated balances, the installment acquisition payment budget activity had an unobligated balance of about $12 million. We discussed the reasons for this unobligated balance with PBS officials, who told us that it was partially a result of lower interest rates for short-term construction loans on projects and for the long-term 30-year notes on the facilities. In addition, they told us that total interest needs were lower than they had budgeted for because the projects had been slower to use borrowed funds. They said that their estimates of both interest rates and the rate at which funds would be needed by projects had projected higher interest costs than actually were incurred. Therefore, the budget activity had closed the fiscal year with an unobligated balance. The PBS officials told us that the $12 million pertained to transactions involving the following nine lease-purchase projects: Foley Square, New York; Woodlawn, Maryland, Health Care Financing Administration; Chamblee, Georgia, Centers for Disease Control Offices; Memphis, Tennessee, Internal Revenue Service; Atlanta, Georgia, Centers for Disease Control; Miami, Florida, Federal Building; Chicago, Illinois, Federal Building; Oakland, California, Federal Building; and District of Columbia, Ronald Reagan Federal Building and International Trade Center. They told us that without a detailed funding analysis of each project, including the funding used versus what was budgeted and the interest rate incurred versus what was budgeted, they could not assign portions of the unobligated balance to each project. 
PBS officials told us that when they initially reviewed the various FBF budget activities for unobligated balances at the end of fiscal year 1996, the rental of space budget activity had an unobligated balance of about $71 million, an accumulation of fiscal years 1995 and 1996 unobligated balances. They said $68 million of the $71 million would be used as part of the reserve. PBS officials told us that having an unobligated balance in a budget activity is not unusual because regional offices do not have to obligate the entire allowance they receive. Regarding the specific reasons why the rental of space budget activity had an unobligated balance at the close of fiscal year 1996, PBS officials cited incorrect estimates of when leases would start to incur obligations so that lease payments were lower than anticipated. Another reason provided by PBS officials involved the number of lease cancellations. They said there were more cancellations than PBS had budgeted, which resulted in lower obligations. However, they were not able to provide specific dollar amounts by lease. Rather, PBS officials provided us with a breakdown of the fiscal year 1996 regional allowances and unobligated balances (see table 5). PBS staff advised us that although the actual figure, about $71 million, was a little higher than the $68 million included in the reserve, their plan at the time the reserve was established was to include only $68 million in the reserve. However, events during fiscal year 1997 precluded using most of the $68 million for funding of the reserve. In particular, in August 1997, PBS sought congressional approval to transfer about $110 million in funds within the FBF budget activities to meet needs it considered crucial for rental of space. In September 1997, congressional committees approved the transfer request but directed that PBS use $54 million in fiscal year 1996 unobligated balances, which was part of the reserve, to fund part of the transfer. 
PBS officials told us that the $54 million was used in fiscal year 1997, and additional unobligated construction and acquisition of facilities budget activity funds were used to replace the $54 million in the reserve to maintain full funding of the $680.5 million reserve. PBS funds the operations of government-owned and -leased facilities and pays other government agencies for building operations performed by them in GSA-controlled facilities through the building operations budget activity. Functions budgeted from this activity include cleaning services, utilities, and protection services for facilities. PBS officials told us that when they reviewed the budget activities at the close of fiscal year 1996, the building operations activity had an unobligated balance of about $51 million. This was combined with $45 million in unapportioned fiscal year 1997 funds for a total unobligated balance in the building operations budget activity of $96 million. The officials explained that on a fiscal year basis, a portion of the overall appropriation available for regional building operations is divided into initial allowances against which regions plan and operate their programs. During a fiscal year, according to PBS officials, the initial allowance may be revised to reflect unforeseen needs. These adjustments are funded from money held back by PBS headquarters when the initial allowances are given to the regions. PBS officials told us that the existence of an unobligated balance in a budget activity at the close of a fiscal year is not unusual because regional offices do not have to obligate the entire allowance they receive. At the end of fiscal year 1996, building operations’ unobligated balance was about $51 million. According to a PBS document, the balances were associated with delays in moves, deferred equipment purchases, delays in contract awards, delays in new workload coming on line, and savings achieved through cost-containment measures. 
This amount, along with $45 million in unapportioned fiscal year 1997 funds, created an unobligated balance of $96 million in the building operations budget activity. Table 6 presents the unobligated balance on a region-by-region basis. According to PBS staff, the FBF’s construction and acquisition of facilities budget activity involves large unobligated balances from year to year, and thus this budget activity became the focus of planners for funding the balance of the $680.5 million obligational reserve. According to PBS officials, early in fiscal year 1997 they were looking to identify about $504.5 million in obligational authority to complete the reserve. Initially, PBS officials considered both the construction and the modernization programs in developing a list of potential projects for funding the reserve. They evaluated individual projects using the following three criteria: the project had not proceeded to construction contract award; obligational authority for the project had not been allotted to a regional office for obligation; and both regional and headquarters officials believed the project would not meet a planned fiscal year 1997 construction contract award schedule. As a result of their analysis, PBS officials developed a list of new construction and modernization projects with obligational authority totaling about $1.5 billion. Recognizing that the list of potential projects resulted in obligational authority in excess of the $504.5 million required, PBS officials told us that the decision was made to exclude modernization projects from the reserve and to focus solely on new construction projects. PBS officials pointed out that this decision provided enough funding for PBS’ priority of maintaining the buildings already in the inventory. Table 7 lists the new construction projects from which obligational authority was reserved, showing the project location, the amount of the full appropriation, and the amount available for reserve. 
PBS officials told us that the obligational authority reserved, $591.63 million, represented their estimate of the funding necessary to meet the $680.5 million reserve before they knew how much would be available in end-of-fiscal-year unobligated carryover funds from other budget activities. PBS officials told us that, as of November 1996, it was their opinion that each of the 11 projects listed above was likely to experience a schedule slippage that would move the planned construction contract award date beyond fiscal year 1997. Therefore, they felt that reserving the obligational authority of these projects would not delay their overall progress. Our discussions with PBS officials, both in headquarters and the regional offices, and with officials of AOUSC confirmed that with one exception, discussed below, the schedule slippage on each project was sufficient to delay the construction contract award past the close of fiscal year 1997. In the one instance where the delay was solely because the project’s funding was moved to the reserve—the Las Vegas, Nevada, courthouse—the delay of the construction contract award was about 3 weeks, from September 26 to October 16, 1997. The GSA Project Manager told us that the delay did not affect the construction award amount because the contractor agreed to a contract at the price he bid in September 1997. The scheduled construction contract award dates at the time each project was identified for possible inclusion in the reserve, the current construction contract award dates as of the spring of 1998, and reasons for the delays are presented in table 8. Congress provided new obligational authority for the projects and programs in the $680.5 million reserve for fiscal year 1998. Therefore, the FBF revenues received in fiscal year 1998 are now available to be obligated for the budget activities used to create the $680.5 million reserve in fiscal year 1997. 
OMB and PBS officials have stated that the actions taken through the fiscal year 1998 budget will eliminate the impact of the rent estimating problem on the FBF. However, as noted below, elimination of funding for new construction and modernization and reduced funding for building operations and basic building repair and alteration for fiscal year 1998 could have adverse effects on the FBF. In September 1996, GSA submitted proposed new construction and modernization programs for fiscal year 1998 to OMB totaling about $1.4 billion. However, according to GSA officials, OMB budget decisions required that $680.5 million of fiscal year 1998 budget authority be used to offset the funds reserved in fiscal year 1997 so that previously funded projects could proceed. Congress appropriated no fiscal year 1998 funding for new construction or modernization. In addition, in discussing the impact of the fiscal year 1998 budget decision, a GSA official, in responding to a question during an April 24, 1997, congressional hearing, stated “Absent direct appropriations and with the requirement to earmark $680 million in FY 98 Federal Building Fund budget authority to prior year capital projects, GSA will operate below prudent funding levels for building operations and repair and alterations for FY 98.” It is not clear how many, if any, of the proposed new construction or modernization projects would have been included in the President’s budget or funded by Congress in fiscal year 1998 had it not been for the overestimation problem. However, to the extent the overestimation problem resulted in lack of funding for new projects and these proposed projects are funded in the future, the government could experience cost changes. For example, additional costs could occur from price changes in the future, which could, of course, vary depending upon general and local economic and construction industry conditions. 
In addition, delays in basic repair and alteration work could also result in additional future cost to the extent prices for these services increase in the future and to the extent delays cause further deterioration. The maintenance of government-owned assets has been a long-standing concern. In 1993, the U.S. Advisory Commission on Intergovernmental Relations reported that maintenance often does not receive adequate attention, especially in times of tight budgets, and that deferring maintenance can result in poor-quality facilities, reduced public safety, higher subsequent repair cost, and poor service to the public. As we stated in our testimony on March 5, 1998, the actions PBS has under way and planned to improve its rental revenue estimation process address the weaknesses that we and others have identified and, if effectively implemented, these actions should help improve future revenue estimates. The actions taken by PBS to establish an obligational reserve to prevent the overobligation of the FBF revenue did not delay 10 of the 11 new construction projects included in the reserve. The construction contract award amount for one project, which was delayed for about 3 weeks, was not affected by the delay. Finally, although both OMB and PBS have stated that the impact of the FBF funding problem will be resolved by the end of fiscal year 1998, we believe that it could affect the FBF obligational authority beyond fiscal year 1998. We did not quantify the possible obligational impact; however, the delay in construction and modernization projects could result in price changes in the future, which could vary depending upon general and local economic and construction industry conditions. In addition, deferred maintenance could result in increased future cost. On July 30, 1998, we requested comments on a draft of this report from the Administrator, GSA. 
On August 6, 1998, we received oral comments from the Chief Financial Officer, Public Buildings Service, and other PBS staff. These officials generally agreed with the information in the report. We are sending copies of this report to the Ranking Minority Member of your Subcommittee; the Chairmen and the Ranking Minority Members of the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure; and the Administrator of GSA. Copies will be made available to others upon request. Major contributors to this report are Ronald King, Assistant Director; Thomas Johnson, Evaluator-in-Charge; Thomas Keightley, Evaluator-in-Charge; and Hazel Bailey, Communications Analyst. If you have any questions about the report, please call me on (202) 512-8387. Bernard L. Ungar, Director, Government Business Operations Issues. 
Pursuant to a congressional request, GAO reviewed the General Services Administration's (GSA) actions in responding to and managing the recent funding problems experienced by its Federal Buildings Fund (FBF), focusing on: (1) verifying, to the extent practical, the amounts GSA attributed to each reason for overestimation of the FBF rental revenue projections for fiscal years 1996, 1997, and 1998; (2) whether the Public Buildings Service's corrective actions appeared to address GSA's identified reasons for the overestimation; and (3) the budgetary impact of the overestimation on projects and programs in the FBF. GAO noted that: (1) GSA informed Congress that it expected the total overestimation of rental revenue for fiscal years 1996 and 1997 to be $847 million; (2) GAO verified, to the extent practical given available support, six of GSA's identified seven reasons for the overestimation and the linkage of specific dollar amounts of the overestimation to each of the six reasons; (3) GSA was unable to provide documentation showing how it developed the $86 million it attributed to the remaining reason--the fiscal year (FY) 1995 rent revenue estimate being higher than actual revenues; (4) GAO and others identified several weaknesses in GSA's rental revenue estimation process, such as the lack of documented policy and procedures for the rental revenue estimation process and the lack of supporting documentation necessary to verify forecast information and assumptions; (5) GSA has taken or plans to take corrective actions that, if effectively implemented, should help improve future rental revenue estimates; (6) for FY 1997, GSA took action to prevent the overobligation of FBF revenue by creating a reserve to ensure that obligational authority totaling $680.5 million would not be used until revenue was available to cover those obligations; (7) this action had the potential to affect the projects and programs from which obligational authority was withheld; (8) recent 
statements by GSA and Office of Management and Budget officials indicated that the impact of the rent estimating problem on the FBF would be resolved by actions taken through the FY 1998 budget; (9) although the $680.5 million appropriated in FY 1998 replenishes the $680.5 million to prior projects, GAO does not believe it necessarily mitigates the effects of not funding GSA's proposed FY 1998 program of new construction and modernization work; (10) GSA has stated that the overestimation problem contributed to a reduction in funding for building operations and basic building repair and alteration; and (11) this reduction could also result in changes in future costs for the same reasons previously mentioned as well as increased repair costs due to more extensive deterioration over time. 
We identified 34 areas where agencies, offices, or initiatives may have similar or overlapping objectives or may provide similar services to the same populations; or where government missions are fragmented across multiple agencies or programs (see table 1). Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. The areas identified below are not intended to represent the full universe of duplication, overlap, or fragmentation within the federal government. Our future work will examine other areas of government for potential duplication, overlap, and fragmentation. As table 1 shows, many of the issues we identified are focused on activities that are contained within single departments or agencies. In those cases, agency officials can generally achieve cost savings or other benefits by implementing existing GAO recommendations or by undertaking new actions suggested in our March 1 report. However, a number of issues we have identified span multiple organizations and therefore may require higher-level attention by the executive branch or enhanced congressional oversight or legislative action. For example: Teacher quality programs: In fiscal year 2009, the federal government spent over $4 billion specifically to improve the quality of our nation’s 3 million teachers through numerous programs across the government. We identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity, administered across 10 federal agencies. The proliferation of programs has resulted in fragmentation that can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost effective, and ultimately increase program costs. 
In 2009, we recommended that the Secretary of Education work with other agencies as appropriate to develop a coordinated approach for routinely and systematically sharing information that can assist federal programs, states, and local providers in achieving efficient service delivery. The Department of Education has established working groups to help develop more effective collaboration across Education offices, and has reached out to other agencies to develop a framework for sharing information on some teacher quality activities, but it has noted that coordination efforts do not always prove useful and cannot fully eliminate barriers to program alignment, such as programs with differing definitions for similar populations of grantees, which create an impediment to coordination. Congress could help eliminate some barriers through legislation, particularly through the pending reauthorization of the Elementary and Secondary Education Act of 1965 and other key education bills. Specifically, to minimize any wasteful fragmentation and overlap among teacher quality programs, Congress may choose either to eliminate programs that are too small to evaluate cost effectively or to combine programs serving similar target groups into a larger program. Education has already proposed combining 38 programs into 11 programs in its reauthorization proposal, which could allow the agency to dedicate a higher portion of its administrative resources to monitoring programs for results and providing technical assistance. Military health system: The responsibilities and authorities for the Department of Defense’s (DOD) military health system are distributed among several organizations within DOD with no central command authority or single entity accountable for minimizing costs and achieving efficiencies. 
Under the military health system’s current command structure, the Office of the Assistant Secretary of Defense for Health Affairs, the Army, the Navy, and the Air Force each has its own headquarters and associated support functions. Annual military health system costs have more than doubled from $19 billion in fiscal year 2001 to $49 billion in 2010 and are expected to increase to over $62 billion by 2015. DOD has made varying levels of progress in implementing limited actions to consolidate certain common administrative, management, and clinical functions. However, to reduce duplication in its command structure and eliminate redundant processes that add to growing defense health care costs, DOD could take action to further assess alternatives for restructuring the governance structure of the military health system. A May 2006 report by the Center for Naval Analyses showed that if DOD and the services had chosen to implement one of the three larger-scale alternative concepts studied by DOD, the department could have achieved significant savings. Our adjustment of those projected savings into 2010 dollars indicates those savings could range from $281 million to $460 million annually depending on the alternative chosen and numbers of military, civilian, and contractor positions eliminated. DOD officials said that they generally agreed with the facts and findings in our analysis. Federal data centers: According to the Office of Management and Budget (OMB), the number of federal data centers grew from 432 in 1998 to more than 2,000 in 2010. These data centers often house similar types of equipment and provide similar processing and storage capabilities, raising concerns about the provision of redundant capabilities, the underutilization of resources, and the significant consumption of energy. 
While the total annual federal spending associated with data centers has not yet been determined, the Federal Chief Information Officer has found that operating data centers is a significant cost to the federal government, including hardware, software, real estate, and cooling costs. For example, according to the Environmental Protection Agency, the electricity cost to operate federal servers and data centers across the government is about $450 million annually. In February 2010, OMB launched the Federal Data Center Consolidation Initiative to guide federal agencies in developing and implementing data center consolidation plans. As part of this initiative, OMB directed federal agencies to prepare an inventory of their data center assets and a plan for consolidating these assets by August 30, 2010, and to begin implementing them in fiscal year 2011. Moving forward, it will be important for individual agencies to move quickly to correct any missing items in their data center consolidation plans, establish sound baselines so that progress and efficiencies can be measured, begin their consolidation efforts, track their progress, and report to OMB on their progress over time. Sustained monitoring by Congress could help ensure progress is realized. DOD and VA electronic health record systems: Although DOD and the Department of Veterans Affairs (VA) have many common health care business needs, the departments have separate efforts to modernize their electronic health record systems. Specifically, DOD has obligated approximately $2 billion over the 13-year life of its Armed Forces Health Longitudinal Technology Application and requested $302 million in fiscal year 2011 funds for a new system. For its part, VA reported spending almost $600 million from 2001 to 2007 on eight projects as part of its Veterans Health Information Systems and Technology Architecture modernization. In April 2008, VA estimated an $11 billion total cost to complete the modernization by 2018. 
Efforts by the departments to jointly identify and develop common information technology solutions to address their mutual health care needs could result in system development and operation cost savings while supporting higher-quality health care for service members and veterans. We identified several actions that DOD and VA could take to overcome barriers they face in modernizing their electronic health record systems, including revising the departments’ joint strategic plans and defining and implementing a process for identifying and selecting joint information technology investments. Officials from both DOD and VA agreed with these recommendations.

Domestic ethanol production: Congress supported domestic ethanol production through a $5.4 billion tax credit program in 2010 and through a renewable fuel standard that applies to transportation fuels used in the United States. The ethanol tax credit and the renewable fuel standard can be duplicative in stimulating domestic production and use of ethanol, and can result in substantial loss of revenue to the Treasury. The ethanol tax credit was recently extended at 45 cents per gallon through December 31, 2011. The tax credit will cost $5.7 billion in forgone revenues in 2011. Because the fuel standard allows increasing annual amounts of conventional biofuels through 2015, which ensures a market for a conventional corn starch ethanol industry that is already mature, Congress may wish to consider whether revisions to the ethanol tax credit are needed, such as reducing, modifying, or phasing out the tax credit.

Interagency and agencywide contracts: Agencies have created numerous interagency and agencywide contracts using existing statutes, the Federal Acquisition Regulation, and agency-specific policies. With the proliferation of these contracts, however, there is a risk of unintended duplication and inefficiency.
Interagency and agencywide contracting was responsible for at least $54 billion of the approximately $540 billion that was obligated governmentwide for goods and services in fiscal year 2009. However, the federal government does not have a clear, comprehensive view of whether these contracts are being utilized in an efficient and effective manner. In addition, agencies may be unaware of existing contract options that could meet their needs and may be awarding new contracts when use of an existing contract would suffice. Government contracting officials and representatives of vendors have expressed concerns about potential duplication among the interagency and agencywide contracts across government. Some vendors stated they offer similar products and services on multiple contracts and that the effort required to be on multiple contracts results in extra costs to the vendor, which they pass to the government through increased prices. Some vendors stated that the additional cost of being on multiple contracts ranged from $10,000 to $1,000,000 per contract due to increased bid and proposal and administrative costs. Requiring business case analyses for new multiagency and agencywide contracts and ensuring agencies have access to up-to-date and accurate data on the available contracts will promote the efficient use of interagency and agencywide contracting and, by reducing the costs associated with duplicate contracts, help the government better leverage its purchasing power when buying commercial goods and services. OMB reported in August 2010 that it planned to issue overarching guidance that would address the need for agencies to prepare business cases describing the need for a new multiagency or agencywide contract, the value added by its creation, and the agency’s suitability to serve as an executive agent. 
Additionally, improvements are still needed regarding the accuracy of the federal contracts database in order to determine whether the contracts are being used in an efficient and effective manner. Continued congressional oversight of this issue is warranted.

Domestic food assistance: The federal government spent more than $62.5 billion on 18 domestic food and nutrition assistance programs in fiscal year 2008. Programs’ spending ranged from $4 million for the smallest program to more than $37 billion for the largest. The Department of Agriculture’s (USDA) Food and Nutrition Service oversees most of these programs—including the five largest. These programs help ensure that millions of low-income individuals have consistent, dependable access to enough food for an active, healthy life. However, we have found that some of these programs provide comparable benefits to similar or overlapping populations, which can lead to inefficient use of federal funds, duplication of effort, and confusion among those seeking services. For example, individuals eligible for groceries through the Commodity Supplemental Food Program are also generally eligible for groceries through the Emergency Food Assistance Program and for targeted benefits that are redeemed in authorized stores through the largest program, the Supplemental Nutrition Assistance Program (formerly the Food Stamp Program). In addition, most of the 18 programs have specific and often complex legal requirements and administrative procedures that often require applicants who seek assistance from multiple programs to submit separate applications for each program and provide similar information, which can create unnecessary work for both providers and applicants and may result in the use of more administrative resources than needed. Additionally, little is known about the effectiveness of 11 of the 18 programs because they have not been well studied.
In April 2010, we recommended that USDA identify and develop methods for addressing potential inefficiencies and reducing unnecessary overlap among its smaller food assistance programs while ensuring that those who are eligible receive the assistance they need. To date, USDA has not taken action on this recommendation. One of the possible methods for reducing program inefficiencies would entail USDA broadening its efforts to simplify, streamline, or better align eligibility procedures and criteria across programs to the extent that it is permitted by law. Such efforts could result in sizable administrative cost savings since administrative costs are a large part of program costs. In addition, options such as consolidating or eliminating overlapping programs have the potential to reduce administrative costs but may not reduce spending on benefits unless fewer individuals are served as a result.

Employment and training programs: In fiscal year 2009, 47 federal employment and training programs spent about $18 billion to provide services, such as job search and job counseling, to program participants. Most of these programs are administered by the Departments of Labor, Education, and Health and Human Services (HHS). We found that 44 of the 47 programs overlap with at least one other program in that they provide at least one similar service to a similar population. Our review of three of the largest programs—Temporary Assistance for Needy Families (TANF), Employment Service, and Workforce Investment Act (WIA) Adult programs—found that they provide some of the same services to the same population through separate administrative structures. Although the extent to which individuals receive the same services from these programs is unknown due to limited data, these programs maintain parallel administrative structures to provide some of the same services, such as job search assistance, to low-income individuals.
At the state level, the TANF program (which also provides a wide range of other services) is typically administered by the state human services or welfare agency, while the Employment Service and WIA Adult programs are typically administered by the state workforce agency and provided through one-stop centers. Agency officials acknowledged that greater efficiencies could be achieved in delivering services through these programs but said factors such as the number of clients that any one-stop center can serve and one-stop centers’ proximity to clients, particularly in rural areas, could warrant having multiple entities provide the same services. Colocating services and consolidating administrative structures may increase efficiencies and reduce costs, but implementation can be challenging. Some states have colocated TANF employment and training services in one-stop centers where Employment Service and WIA Adult services are provided. Three states—Florida, Texas, and Utah—have gone a step further by consolidating the agencies that administer these programs, and state officials said this reduced costs and improved services, but they could not provide a dollar figure for cost savings. States and localities may face challenges to colocating services, such as limited office space. In addition, consolidating administrative structures may be time consuming and any cost savings may not be immediately realized. To facilitate further progress by states and localities in increasing administrative efficiencies in employment and training programs, we recommended in 2011 that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts. As part of this effort, Labor and HHS should examine the incentives for states and localities to undertake such initiatives, and, as warranted, identify options for increasing such incentives. Labor and HHS agreed they should develop and disseminate this information. 
HHS noted that it lacks legal authority to mandate increased TANF-WIA coordination or create incentives for such efforts. Sustained oversight by Congress could help ensure progress is realized.

Given today’s fiscal environment, our work summarizes 47 additional areas—beyond those directly related to duplication, overlap, or fragmentation—describing other opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. These cost-saving and revenue-enhancing opportunities also span a wide range of federal government agencies and mission areas (see table 2). Examples of opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections include:

DOD spare parts: We have identified weaknesses in DOD’s inventory management practices, including problems in accurately forecasting demand for spare parts. Most recently, we reviewed the Defense Logistics Agency’s inventory levels and reported in 2010 that the Agency, over a period of 3 fiscal years, averaged $1 billion of inventory annually that had been identified as excess. Since our work has consistently shown that the greatest opportunities to minimize investment in unneeded inventory are at the initial stages of the inventory management process, when acquisition decisions are being made, DOD could limit future costs by focusing its efforts on better managing on-order inventory, with a view toward reducing on-order inventory levels that exceed current needs or projected demand. Recently, Congress required DOD to submit a comprehensive plan for improving the inventory management systems of the military departments and the Defense Logistics Agency, with the objective of reducing the acquisition and storage of inventory that is excess to requirements.
In November 2010, DOD submitted its plan to Congress, stating that it had already reduced unneeded inventory and that further reductions are possible. For example, DOD reported that $10.3 billion (11 percent) of its secondary inventory has been designated as excess and categorized for potential reuse or disposal. While DOD’s plan is an important step in improving inventory management practices, successful implementation will be challenging and will require sustained oversight by DOD as well as collaboration among the services and the Defense Logistics Agency. Continued congressional attention is warranted.

Corrosion: DOD estimates that corrosion costs the department over $23 billion each year. Corrosion—which can take such varied forms as rusting; pitting; calcium or other mineral buildup; degradation from exposure to ultraviolet light; and mold, mildew, and other organic decay—if left unchecked, can degrade the readiness and safety of equipment and facilities and can result in substantial, sometimes avoidable costs. The Defense Science Board Task Force estimated in a 2004 report that 30 percent of corrosion costs could be avoided through proper investment in prevention and mitigation of corrosion during design, manufacture, and sustainment. According to DOD, increased corrosion prevention and control efforts are needed to adequately address the wide-ranging and expensive effects of corrosion on equipment and infrastructure. However, DOD did not fund about one-third of acceptable corrosion projects for fiscal years 2005 through 2010. If the projects accepted by DOD’s Office of Corrosion Policy and Oversight from fiscal years 2005 through 2010 had been fully funded, DOD potentially could have avoided $3.6 billion in corrosion-related costs, assuming those projects achieved the same level of cost-effectiveness as was estimated for all accepted projects in those years.
To convince DOD and congressional decision makers that more fully funding its corrosion prevention programs could provide such a significant return on investment, the Corrosion Office needs to complete its validation of return-on-investment estimates to demonstrate the costs and benefits of its corrosion prevention and control projects.

Noncompetitive contracts: Federal agencies generally are required to award contracts competitively, but a substantial amount of federal money is obligated on noncompetitive contracts annually. Federal agencies obligated approximately $170 billion on noncompetitive contracts in fiscal year 2009 alone. While there has been some fluctuation over the years, the percentage of obligations under noncompetitive contracts recently has ranged from 31 percent to over 35 percent. Although some agency decisions to forgo competition may be justified, we found that when federal agencies decide to open their contracts to competition, they frequently realize savings. For example, the Department of State (State) awarded a noncompetitive contract for installation and maintenance of technical security equipment at U.S. embassies in 2003. In response to our recommendation, State subsequently competed this requirement, and in 2007 it awarded contracts to four small businesses for a total savings of over $218 million. In another case, we found in 2006 that the Army had awarded noncompetitive contracts for security guards but later spent 25 percent less for the same services when the contracts were competed. In July 2009, OMB called for agencies to reduce obligations under new contract actions that are awarded using high-risk contracting authorities by 10 percent in fiscal year 2010. These high-risk contracts include those that are awarded noncompetitively and those that are structured as competitive but for which only one offer is received.
While sufficient data are not yet available to determine whether OMB’s goal was met, we are currently reviewing the agencies’ savings plans to identify steps taken toward that goal, and will continue to monitor the progress agencies make toward achieving this and any subsequent goals set by OMB.

Undisbursed grant balances: Past audits of federal agencies by GAO and Inspectors General, as well as agencies’ annual performance reports, have suggested that grant management challenges, including failure to conduct grant closeouts and undisbursed balances, are a long-standing problem. In August 2008, we reported that during calendar year 2006, about $1 billion in undisbursed funding remained in expired grant accounts in HHS’s Payment Management System—the largest civilian grant payment system, which multiple agencies use. In August 2008, we recommended that OMB instruct all executive departments and independent agencies to track undisbursed balances in expired grant accounts and report on the resolution of this funding in their annual performance plans and Performance and Accountability Reports. As of January 13, 2011, OMB had not issued governmentwide guidance regarding undisbursed balances in expired grant accounts.

Social Security offsets: Social Security covers about 96 percent of all U.S. workers; the vast majority of the remaining 4 percent are public employees who work for federal, state, and local government. Although these workers do not pay Social Security taxes on their noncovered government earnings, they may still be eligible for Social Security benefits through their spouses’ or their own earnings from other jobs that Social Security does cover. Two Social Security provisions—the Government Pension Offset, which generally applies to spouse and survivor benefits, and the Windfall Elimination Provision, which applies to retired worker benefits—attempt to take noncovered employment into account when calculating Social Security benefits.
However, these provisions have been difficult to administer because the Social Security Administration (SSA) does not have the pension data it needs to perform these calculations accurately. In April 1998, we recommended that SSA work with the Internal Revenue Service (IRS) to revise the reporting of pension information on IRS Form 1099-R, so that SSA would be able to identify people receiving a pension from noncovered employment, especially in state and local governments. However, IRS did not believe it could make the recommended change without new legislative authority. Extending mandatory Social Security coverage to all state and local workers has been proposed among other options for addressing Social Security’s long-term financial deficit. While this would eventually make the Government Pension Offset and Windfall Elimination Provision obsolete, these provisions would still be needed for many years to come for existing employees and beneficiaries, and we continue to believe that it is important to apply these laws consistently and equitably. Hence, we have suggested that Congress consider giving IRS the authority to collect the information that SSA needs on government pension income to administer the Government Pension Offset and Windfall Elimination Provision requirements accurately and fairly. The President’s 2011 budget proposal contains a provision that would address the need for more complete and accurate information on noncovered state and local pensions, and it estimates savings of $2.9 billion over 10 years. The Congressional Budget Office’s 2009 Budget Options, Volume 2, has a similar provision and estimates savings of $2.4 billion over 10 years.

Customs fee collections: U.S. Customs and Border Protection (CBP) collects user fees to recover certain costs incurred for processing, among other things, air and sea passengers, and various private and commercial land, sea, air, and rail carriers and shipments.
These fees are deposited into the Customs User Fee Account. We discovered that CBP has a $639.4 million unobligated balance in its Customs User Fee Account as a result of excess collections from a temporary fee increase and the elimination of North American Free Trade Agreement country exemptions from January 1, 1994, to September 30, 1997. Clarifying the availability of unobligated balances in CBP’s Customs User Fee Account could enable Congress to revise the agency’s future appropriations, thereby producing a one-time savings of up to $640 million. We first identified these unobligated balances in 2008. CBP officials stated at that time that although they formerly believed they needed additional authorization to spend these balances, it later appeared that the funds may be used as authorized by law. However, when we discussed these unobligated balances again in 2009 and 2010, CBP officials said they had requested assistance from OMB to clarify the availability of these funds but OMB had not responded to their request. We believe this is an issue that Congress may wish to address, since these unobligated balances have remained in CBP’s Customs User Fee Account for more than 10 years. Congress could clarify the purposes for which the $640 million in unobligated balances is available and take action as appropriate.

Addressing the gap between taxes owed and paid: The net tax gap, which is the difference between the amount of taxes owed and the amount paid voluntarily and on time, less late payments and IRS collection results, was last estimated by IRS to be $290 billion for tax year 2001. Experts believe it may be larger. Our work has identified a number of areas where IRS or Congress could take action to better collect owed revenue, including:

Business nonfilers: Historically, the IRS has identified several million businesses each year that may have failed to file tax returns—more than it can thoroughly investigate.
IRS has had difficulty determining if these businesses are still active and thus required to file a tax return. As a result, IRS has pursued many inactive businesses, which has not been a productive use of its resources. Recently, IRS has begun to use some third-party data, such as information required about certain payments, as indicators of business activity. However, IRS has not used private sector data that it could obtain to verify taxpayer statements about whether a business is active and a tax return should have been filed. A number of private companies maintain business activity data, such as data on a business’s gross sales and number of employees. Our analysis of Dun and Bradstreet data showed they could be used to identify business activity that IRS was not aware of. For two states, we analyzed 2007 data on the businesses that IRS initially identified as potential nonfilers but later determined were not liable to file returns. Of these, we found 7,688 businesses where IRS data indicated little or no business activity, but Dun and Bradstreet data showed business activity as measured by sales totaling $4.1 billion. In addition to other improvements in its business nonfiler program, we recommended that IRS study the feasibility and cost-effectiveness of using non-IRS, private data to verify taxpayer statements. IRS agreed with the recommendation.

Electronic filing: The percentage of tax returns filed electronically has increased from 52 percent in 2005 to 71 percent in 2010. However, in 2010, IRS still processed 40 million tax returns filed on paper. Electronic filing benefits taxpayers by reducing processing errors and expediting their refunds. Increasing electronic filing would also reduce IRS’s return processing costs and increase revenue by facilitating enforcement. As noted in a December 2010 GAO report, IRS estimated savings of $3.10 per return for returns filed electronically versus on paper in fiscal year 2009.
Our prior work has shown that IRS has three opportunities to increase electronic filing of individual income tax returns: (1) requiring tax software identification numbers would help inform research into how the pricing and attributes of different software products affect taxpayers’ willingness to use software and file electronically, allowing IRS to better promote electronic filing; (2) working with taxpayers and their representatives to reduce the number of rejected electronic returns could reduce the number of frustrated taxpayers who opt to print and mail in their rejected electronic returns, leaving IRS to identify and correct any errors and process the paper returns, thereby losing the benefits of electronic filing; and (3) requiring software vendors to embed a bar code containing relevant information on all paper returns printed from tax software and mailed would enable IRS to obtain electronic information, such as a taxpayer’s Social Security number and address, from the return. While not as beneficial as electronic filing, bar coding would still provide efficiencies over data transcription and enable more information to be available electronically. Having more or all tax return information available electronically could help IRS target audits on noncompliant taxpayers, avoid burdening compliant taxpayers with unnecessary audits, make more productive use of IRS’s audit resources, and—according to IRS officials—increase annual tax revenue by $175 million.

Adjusting civil tax penalties: The Internal Revenue Code has over 150 civil penalties that potentially deter taxpayer noncompliance. A number of civil tax penalties have fixed dollar amounts—either a specific dollar amount, or a minimum or maximum amount—that are not indexed for inflation. Over time, the lack of indexing can significantly decrease the real value of IRS assessments and collections.
We found in August 2007 that adjusting civil tax penalty fixed-dollar amounts for inflation from 2000 to 2005 would have increased IRS collections by an estimated $38 million to $61 million per year, based on a limited number of penalties we reviewed. We reported that Congress may want to consider requiring IRS to periodically adjust for inflation, and round appropriately, the fixed-dollar amounts of the civil penalties to account for the decrease in real value over time and so that penalties for the same infraction are consistent over time. Although Congress has increased the amount of some fixed penalties since our report, only two penalties are to be adjusted for inflation on a periodic basis. Consequently, we continue to believe Congress should consider requiring IRS to periodically adjust all fixed penalties for inflation.

Unneeded real property: Many federal agencies hold real property they do not need, including property that is excess or underutilized. Excess and underutilized properties present significant potential risks to federal agencies because they are costly to maintain. For example, in fiscal year 2009, agencies reported that underutilized buildings accounted for over $1.6 billion in annual operating costs. In a June 2010 Presidential Memorandum to federal agencies, the administration established a new target of saving $3 billion through disposals and other methods by the end of fiscal year 2012; the President reiterated this goal in his 2012 budget. However, federal agencies continue to face obstacles to disposing of unneeded property, such as requirements to offer the property first to other federal agencies and then to state and local governments and certain nonprofits at no cost. If these entities cannot use the property, agencies may also need to comply with costly historic preservation or environmental cleanup requirements before disposing of the property. Finally, community stakeholders may oppose agencies’ plans for property disposal.
OMB could assist agencies in meeting their property disposal target by implementing our April 2007 recommendation to develop an action plan that addresses key problems associated with disposing of unneeded real property, including reducing the effect of competing stakeholder interests on real property decisions.

In conclusion, Mr. Chairman, Ranking Member Cummings, and Members of the Committee, given the challenges noted above, careful, thoughtful actions will be needed to address many of the issues discussed in our March 1 report, particularly those involving potential duplication, overlap, and fragmentation among federal programs and activities. These are difficult issues to address because they may require agencies and Congress to re-examine within and across various mission areas the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. Some of these areas are also included in our 2011 High-Risk Series update, on which we testified before your committee in February 2011. Further, in January 2011, the President signed the GPRA Modernization Act of 2010, updating the almost two-decades-old Government Performance and Results Act (GPRA). Implementing provisions of the new act—such as its emphasis on establishing outcome-oriented goals covering a limited number of crosscutting policy areas—could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Continued oversight by OMB and Congress will be critical to ensuring that unnecessary duplication, overlap, and fragmentation are addressed.
As the nation rises to meet the current fiscal challenges, GAO will continue to assist Congress and federal agencies in identifying actions needed to reduce duplication, overlap, and fragmentation; achieve cost savings; and enhance revenues. In our future annual reports, we will look at additional federal programs and activities to identify further instances of duplication, overlap, and fragmentation as well as other opportunities to reduce the cost of government operations and increase revenues to the government. We plan to expand our work to more comprehensively examine areas where a mix of federal approaches is used, such as tax expenditures and direct spending. Likewise, we will continue to monitor developments in the areas we have already identified. Issues of duplication, overlap, and fragmentation will also be addressed in our routine audit work during the year as appropriate and summarized in our annual reports.

Thank you, Mr. Chairman, Ranking Member Cummings, and Members of the Committee. This concludes my prepared statement. I would be pleased to answer any questions you may have.

For further information on this testimony or our March 1 report, please contact Patricia Dalton, Chief Operating Officer, who may be reached at (202) 512-5600, or [email protected]; and Janet St. Laurent, Managing Director, Defense Capabilities and Management, who may be reached at (202) 512-4300, or [email protected]. Specific questions about individual issues may be directed to the area contact listed at the end of each area summary in the report. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses our first annual report to Congress responding to a new statutory requirement that GAO identify federal programs, agencies, offices, and initiatives—either within departments or governmentwide—that have duplicative goals or activities. This work will inform government policymakers as they address the rapidly building fiscal pressures facing our national government. Our annual simulations of the federal government’s fiscal outlook show continually increasing levels of debt that are unsustainable over time, absent changes in the federal government’s current fiscal policies. Since the end of the recent recession, the gross domestic product has grown slowly and unemployment has remained at a high level. While the economy is still recovering and in need of careful attention, widespread agreement exists on the need to look not only at the near term but also at steps that begin to change the long-term fiscal path as soon as possible without slowing the recovery. With the passage of time, the window to address the fiscal challenge narrows and the magnitude of the required changes grows. This testimony is based on our March 1, 2011, report and addresses two key issues: (1) federal programs or functional areas where unnecessary duplication, overlap, or fragmentation exists, the actions needed to address such conditions, and the potential financial and other benefits of doing so; and (2) other opportunities for potential cost savings or enhanced revenues. The issues raised in the report were drawn from our prior and ongoing work. We identified 81 areas for consideration—34 areas of potential duplication, overlap, or fragmentation as well as 47 additional cost-saving and revenue-enhancing areas.
The 81 areas span a range of federal government missions such as agriculture, defense, economic development, energy, general government, health, homeland security, international affairs, and social services. Within and across these missions, our report touches on hundreds of federal programs, affecting virtually all major federal departments and agencies. By reducing or eliminating unnecessary duplication, overlap, or fragmentation and by addressing the other cost-saving and revenue-enhancing opportunities contained in the report, the federal government could yield tens of billions of tax dollars annually and help agencies provide more efficient and effective services. However, these actions will require some difficult decisions and sustained attention by the administration and the Congress. In some cases, there is sufficient information to estimate potential savings or other benefits if actions are taken to address individual issues. In other cases, estimates of cost savings or other benefits would depend upon what congressional and executive branch decisions were made, including how certain of our recommendations are implemented. Nevertheless, considering the amount of program dollars involved in the issues we have identified, even limited adjustments could result in significant savings. Additionally, information on program performance, the level of funding in agency budgets devoted to overlapping or fragmented programs, and the implementation costs that might be associated with program consolidations or terminations are factors that could impact actions to be taken as well as potential savings. We identified 34 areas where agencies, offices, or initiatives may have similar or overlapping objectives or may provide similar services to the same populations; or where government missions are fragmented across multiple agencies or programs. Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication.
The areas identified below are not intended to represent the full universe of duplication, overlap, or fragmentation within the federal government. Our future work will examine other areas of government for potential duplication, overlap, and fragmentation. Given today's fiscal environment, our work summarizes 47 additional areas--beyond those directly related to duplication, overlap, or fragmentation--describing other opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. These cost-saving and revenue-enhancing opportunities also span a wide range of federal government agencies and mission areas. |
Pandemics occur when an influenza virus mutates into a novel strain that is highly transmissible among humans, leading to outbreaks worldwide. Because there is little or no pre-existing immunity in the population, the strain is highly pathogenic, thus causing disease among those who become infected. Infected individuals may be capable of transmitting the virus strain for 1 to 2 days before developing symptoms. Pandemics arise periodically but unpredictably and can cause successive waves of disease lasting for up to 3 years. In recent years, the H5N1 strain and other strains of the influenza virus have emerged or re-emerged. Experts are concerned because of similarities between the H5N1 strain and the H1N1 strain, which caused the 1918-19 pandemic. For example, research suggests that both the H5N1 and H1N1 strains prompt an over-reaction of the inflammatory response in humans, causing rapid and severe damage to the lungs. Although the H5N1 strain has not been easily transmitted among humans, influenza experts believe that H5N1 or another new influenza strain may eventually mutate to become highly transmissible. Pharmaceutical interventions available during a pandemic include vaccines and antivirals. Pharmaceutical interventions are the primary methods used to prevent the spread of disease as well as to reduce morbidity and mortality caused by the influenza virus. See table 1. Vaccination is the primary method for preventing infection with the influenza virus. Vaccines reduce the severity of disease or provide immunity by causing the body to produce protective antibodies to fight off a particular virus strain. In order for a vaccine to be most effective, it needs to be well-matched to a particular strain of the influenza virus so that the antibodies formed in response to the vaccine protect against that strain. 
However, existing strains of the influenza virus can mutate into new strains; in part, this is why a new vaccine is created each year for the upcoming influenza season. Much of what is known about the anticipated effectiveness of a pandemic vaccine is based on evidence from the annual seasonal vaccine. During a pandemic, it may be necessary to use a vaccine that was developed prior to a pandemic and therefore may not be well-matched to the pandemic-causing strain. This vaccine, called a pre-pandemic vaccine, is developed using an influenza strain that experts believe is likely to cause the next pandemic. Research exploring the use of a pre-pandemic vaccine based on strains of the H5N1 virus suggests that it may provide some protection against serious illness and death. In contrast, a pandemic vaccine would be developed against an identified pandemic-causing strain and would likely provide better protection against the pandemic strain. It is likely that seasonal influenza vaccine manufacturers will produce the vaccine used during a pandemic. However, for the 2007-08 influenza season, only five vaccine manufacturers were licensed to produce seasonal influenza vaccine for the United States and only one manufacturer produced its vaccine from start to finish in facilities within U.S. borders. We also recently reported that experts are concerned that countries without domestic manufacturing capacity will not have access to vaccine in the event of a pandemic if the countries with manufacturing capacity prohibit the export of pandemic vaccine until their own needs are met. Antivirals can reduce symptoms and help prevent the spread of influenza by suppressing the growth of the influenza virus. Unlike the immune response triggered by a vaccine, antivirals target the virus itself. For example, some antivirals interfere with the virus’s ability to attach to cells, thereby preventing infection of human cells.
Antivirals also differ from vaccines in that they do not need to be reformulated to match a specific influenza strain in order to be effective. In addition, antivirals can be manufactured and stockpiled in advance, making them potentially available at the beginning of a pandemic. HHS currently maintains a stockpile of antivirals in the SNS. However, as we have previously reported, there are limitations associated with relying on antivirals during a pandemic. For example, the effectiveness of antivirals during seasonal influenza has been limited if they are used more than 48 hours after the onset of symptoms in an infected individual. For prophylactic use against seasonal influenza in healthy individuals, antivirals may not be as effective if they are not taken throughout the entire time an outbreak is present in a community. Some influenza strains have become resistant to the antivirals currently approved for prevention and treatment, and thus, the antivirals may not always be effective in preventing disease. In addition, antivirals, like vaccines, take several months to produce, and the lead time needed to scale up production capacity may make it difficult to meet any large-scale, unanticipated demand immediately. As we recently reported, current antiviral production capacity is inadequate to meet expected demand during a pandemic. Further, antivirals can be expensive to stockpile and difficult to administer, depending on the form in which they are given. For example, Tamiflu is given as a capsule or liquid and is relatively easy to administer, whereas Relenza is more difficult to administer because it is a powder that must be inhaled using a special device. Since 2000, we and others have reported that federal, state, and local officials need to have information on target groups that have priority for receiving pharmaceutical interventions to know how, where, and to whom to distribute the interventions.
We reported that having established target groups is particularly crucial in times of limited supply, such as during a pandemic, when a lack of specific guidance makes it difficult for federal, state, and local officials to plan. For example, in a prior report, we noted that health officials in one state did not know exactly how many individuals were considered a priority for receiving a vaccine. In that case we found that it took state officials nearly a month to compile data on high-risk individuals, to decide how many doses of vaccine were needed in local areas, and to receive and ship vaccine to counties. State and local officials rely on federal guidance when making decisions on which groups should be targeted first for vaccination. For example, in a prior report on the 2004-05 influenza season, when the United States lost approximately half of its seasonal vaccine supply because of manufacturing difficulties, we found that CDC quickly revised its recommendations on who should be prioritized for vaccine. CDC’s changes decreased the targeted population from approximately 188 million to 98 million. State and local officials we spoke with for this report told us that they quickly adopted CDC’s revised recommendations. Since the terrorist attacks on September 11, 2001, public health departments and hospitals have been considered vital elements of emergency preparedness and response efforts. Surge capacity in public health departments and hospitals will be critical to pandemic response given the large number of people expected to require medical care. During a pandemic, hospitals will need to provide care for influenza patients as well as continue providing care for other patients. A pandemic will put a severe strain on the health care system, which already is easily overwhelmed by seasonal influenza outbreaks. 
Seasonal influenza results in more than 200,000 hospital admissions and 36,000 deaths in the United States every year, and hospitals were stretched to capacity in some past seasonal influenza outbreaks. A severe pandemic would overwhelm hospitals in the United States. For example, using HHS’s planning assumptions, authors of one study estimated that influenza patients would need the equivalent of 191 percent of available staffed non-ICU beds and 461 percent of available staffed ICU beds. A pandemic would occur in the context of existing health care provider shortages. Shortages of health care providers, including physicians and nurses, have been reported for many years by GAO and others. For example, the Association of American Medical Colleges recently released a report summarizing studies issued by 15 states between 2000 and 2007 regarding physician shortages in the United States. That report found that many of these states reported shortages of physicians in specialties such as primary care, cardiology, and endocrinology. Similarly, a recent survey of chief executive officers by the American Hospital Association found that as of December 2006, hospitals across the country reported having an estimated 116,000 registered nurse vacancies. That survey also found that nearly half of emergency departments are operating at or above capacity. Partly in response to these workforce shortages, Congress passed the Pandemic and All-Hazards Preparedness Act (PAHPA) in December 2006. Among other things, the law requires the Secretary of HHS by 2009 to identify strategies to recruit, retain, and protect the public health workforce from workplace exposures during public health emergencies, which would include pandemics. In addition, PAHPA established the Office of the Assistant Secretary for Preparedness and Response to coordinate activities between HHS and other federal departments, agencies, and offices and state and local officials responsible for emergency preparedness.
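The bed-demand percentages cited in that study imply a large shortfall relative to capacity. A minimal back-of-envelope sketch in Python, assuming invented regional bed counts (only the 191 and 461 percent demand figures come from the study cited; everything else is illustrative):

```python
# Illustrative surge-capacity arithmetic: pandemic influenza patients are
# estimated to need 191% of staffed non-ICU beds and 461% of staffed ICU
# beds. The bed counts below are invented for the example.

def bed_shortfall(staffed_beds: int, demand_pct: float) -> int:
    """Return how many additional beds demand implies beyond current capacity."""
    demand = staffed_beds * demand_pct / 100.0
    return max(0, round(demand - staffed_beds))

non_icu_staffed = 1_000  # assumed staffed non-ICU beds in a region
icu_staffed = 100        # assumed staffed ICU beds in the same region

print(bed_shortfall(non_icu_staffed, 191))  # 910 additional non-ICU beds needed
print(bed_shortfall(icu_staffed, 461))      # 361 additional ICU beds needed
```

The point of the sketch is that even though the non-ICU percentage looks smaller, both figures are multiples of existing capacity, so the shortfall scales directly with the size of the region's bed base.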
Nonpharmaceutical interventions are measures used to reduce the impact of a communitywide infectious disease outbreak without the use of pharmaceuticals. Examples of nonpharmaceutical interventions include isolation, quarantine, social distancing, and infection control (see table 2). Slowing the spread of disease during a pandemic will be particularly important given anticipated shortages of pharmaceutical interventions and the expectation that a severe pandemic will overwhelm the health care system. Experts have suggested that nonpharmaceutical interventions can help the health care system by reducing the anticipated influx of patients by limiting the rate of disease transmission (see fig. 1). In the past, nonpharmaceutical interventions have been used in some cases to successfully slow the spread of infectious disease outbreaks. For example, during the 1918-19 pandemic, local public health officials relied on nonpharmaceutical interventions—including rules forbidding overcrowding in streetcars and bans on public gatherings—to slow the spread of disease. More recently, during the global outbreak of severe acute respiratory syndrome (SARS) in 2003, nonpharmaceutical interventions were also implemented to slow the spread of disease. For example, we reported that nonpharmaceutical interventions, such as closing two hospitals to new admissions, appeared to be useful in Canada’s management of the SARS outbreak. Public health emergencies such as the SARS outbreak in 2003 and the anthrax incidents in 2001 have demonstrated that communication with the public about a public health emergency by federal officials is a critical component of national preparedness. In July 2003, we reported that effective communication between health care providers and the public reinforced the need to adhere to infectious disease control measures and that rapid and frequent communications regarding SARS helped slow its spread. 
In addition, in October 2003, we reported that the media and the public looked to CDC as the source for health-related information during the anthrax incidents, but that CDC was not always able to successfully convey the information that it had. As with the SARS outbreak and anthrax incidents, a pandemic will generate immediate, intense, and sustained demand for information. The public will want information quickly about the risks and status of the pandemic, what they can do to stay healthy, what is being done by the government to protect them, and where to go for medical services. Very technical points and sensitive political issues will need to be explained to the general public. If accurate and consistent information is not available and disseminated in a timely and efficient manner, rumors, myths, and misinformation may lead to unnecessary public anxiety and could result in mistrust of, and noncompliance with, the public health and medical measures that are recommended to save lives. Once a pandemic begins, HHS plans to make accessible to state and local jurisdictions federal stockpiles of antivirals and pre-pandemic vaccine until a pandemic vaccine becomes widely available. According to HHS, public-sector stockpiles of antivirals are intended to be used primarily for the treatment of sick individuals. HHS intends to oversee the distribution and administration of federally owned pre-pandemic vaccine to individuals identified as members of the critical workforce; that is, workers in sectors that are necessary for society to continue functioning. HHS also plans to provide jurisdictions with doses of the pandemic vaccine as they become available. HHS recommends that state and local jurisdictions follow its list of targeted groups in administering the pandemic vaccine. However, HHS faces challenges with implementing its strategy for using pharmaceutical interventions, such as the lack of vaccine manufacturing capacity within U.S. 
borders and the length of time experts anticipate will be needed to manufacture a pandemic vaccine. Additionally, we and others have reported since 2000 how problems can arise if potential target groups are not established in advance. In 2008, HHS released guidance on prioritizing target groups for pandemic vaccine and draft guidance on antiviral use during a pandemic. HHS has not yet released draft guidance for public comment on prioritizing target groups for pre-pandemic vaccine. Until a pandemic vaccine becomes widely available, one part of HHS’s strategy for using pharmaceutical interventions involves distributing antivirals in the SNS to state and local jurisdictions. HHS has established a national goal of stockpiling 75 million treatment courses of antivirals in public-sector stockpiles—meaning those in the SNS and in jurisdictional stockpiles. As of May 2008, HHS had stockpiled 44 million courses of antivirals for treatment in the SNS and is subsidizing the purchase of 31 million treatment courses by state and local jurisdictions for storage in their own stockpiles. As of May 2008, state and local jurisdictions had collectively stockpiled nearly 22 million treatment courses of antivirals. Of the federally stockpiled antivirals, HHS has reserved 6 million courses for containment of an initial outbreak. For example, these 6 million courses may be used to respond to initial outbreaks abroad and parts of the United States experiencing the earliest cases. Officials told us that after the department distributes these initial 6 million courses of antivirals, it plans to deliver the remaining antivirals in the SNS to all jurisdictions simultaneously for treatment of individuals sick with influenza. According to HHS’s guidance, state and local jurisdictions will receive their allotments of antivirals on a per-capita basis and should prepare to receive their share of antivirals when a pandemic begins, either in the United States or overseas. 
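The per-capita allocation rule in HHS's guidance lends itself to a simple worked example. A minimal sketch, assuming a hypothetical set of jurisdictions (the 44 million SNS courses and the 6 million reserved for containment come from the report; the jurisdiction names and populations are invented):

```python
# Minimal sketch of a per-capita allotment calculation as described in HHS
# guidance: after courses are reserved for initial containment, the remaining
# SNS antivirals are divided among jurisdictions in proportion to population.
# Jurisdiction names and populations below are hypothetical.

SNS_COURSES = 44_000_000         # treatment courses in the SNS as of May 2008
CONTAINMENT_RESERVE = 6_000_000  # reserved for initial outbreak containment

populations = {
    "State A": 10_000_000,
    "State B": 5_000_000,
    "State C": 5_000_000,
}

def per_capita_allotments(total: int, pops: dict) -> dict:
    """Split `total` courses among jurisdictions in proportion to population."""
    whole = sum(pops.values())
    return {name: total * p // whole for name, p in pops.items()}

allotments = per_capita_allotments(SNS_COURSES - CONTAINMENT_RESERVE, populations)
print(allotments)  # State A receives half of the 38 million, B and C a quarter each
```

Because the rule is purely proportional, a jurisdiction with twice the population receives twice the courses regardless of local outbreak conditions, which is consistent with HHS's plan to deliver to all jurisdictions simultaneously.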
According to HHS officials, the decision to release antivirals from the SNS will be made by the Secretary of HHS in conjunction with the Director of CDC. HHS officials estimate that it will take between 7 days and 1 month for all antivirals to be distributed to jurisdictions. HHS officials also told us that they have conducted several exercises to test HHS’s plan to distribute antivirals to these jurisdictions during a pandemic. Antivirals from the SNS will be delivered to one location within each jurisdiction. According to HHS officials, state and local jurisdictions will distribute both the SNS antivirals and antivirals stored in their own stockpiles throughout their respective areas using pandemic-specific distribution plans. HHS officials told us that the stockpiles of antivirals owned by state and local jurisdictions will provide the jurisdictions with more immediate access to the drugs during the initial stages of a pandemic. Because these stockpiles will be entirely under each jurisdiction’s control, officials there may choose to use some of these antivirals as prophylaxis—as proposed in HHS’s draft guidance on antiviral use during a pandemic—in an attempt to slow the spread of the pandemic by providing them to healthy individuals who have been exposed to the pandemic-causing strain. However, to ensure that stockpiles are not rapidly depleted, HHS currently recommends that jurisdictions use antivirals only for treatment. HHS also advises jurisdictions to begin deploying their respective antiviral stockpiles immediately when a pandemic has been confirmed. In June 2008, HHS released draft guidance for the use of antivirals during a pandemic in the Federal Register for public comment. The draft guidance is consistent with HHS’s previous recommendation that public-sector stockpiles be used primarily for treatment of individuals sick with influenza. 
In its draft guidance, HHS also acknowledged that more antivirals will be needed than will be available in public-sector stockpiles, particularly if antivirals are used for prophylaxis. HHS proposes in its draft guidance that the private sector stockpile 110 million additional courses. HHS also suggests that antivirals in the private-sector stockpile be targeted for prophylactic use for health care and emergency services personnel, and in some circumstances, for persons with compromised immune systems as well as those living in group settings. The purchasing, allocation, and distribution of private-sector stockpiles would be the responsibility of the owner of those stockpiles. HHS’s strategy also involves releasing federally owned pre-pandemic vaccine to specific locations in state and local jurisdictions for administration when it has been determined that sustained transmission of the pandemic virus has occurred. HHS intends to oversee distribution and administration of pre-pandemic vaccine to members of the critical workforce identified by a federal interagency group—the National Infrastructure Advisory Council. Workers considered critical consist of those necessary to maintain national or homeland security, economic survival, and the public health and welfare. These employees include emergency services providers, such as law enforcement; banking and finance personnel; and health care providers. The National Infrastructure Advisory Council estimates that the critical workforce includes about 20 million people. HHS has a goal of stockpiling enough pre-pandemic vaccine to cover this group. As of May 2008, HHS had purchased and stockpiled enough pre-pandemic vaccine for about 13 million people. HHS’s strategy for using pre-pandemic vaccine is to keep society functioning until a pandemic vaccine becomes widely available. State and local jurisdictions will receive allotments of pre-pandemic vaccine on a per-capita basis.
According to HHS officials, stockpiles of pre-pandemic vaccine will be released for simultaneous distribution to selected sites in each jurisdiction. Currently, each vaccine manufacturer stores the doses of pre-pandemic vaccine that it produces. According to HHS, each manufacturer is assigned to supply this vaccine to certain jurisdictions using its established distribution channels. HHS officials also told us that they have a longer-term plan to distribute vaccine using a single distributor, based on CDC’s Vaccine Management Business Improvement Project. According to HHS officials, this centralized distribution system would be incorporated with its existing Vaccine Ordering and Distribution System, which allows for federal tracking of vaccine distribution. HHS anticipates having a centralized distribution system in place around 2010. HHS officials told us that utilizing this type of system would be beneficial during the early stages of a pandemic, when it is expected that maintaining central control of and securing vaccine will be a high priority. HHS plans to provide pandemic vaccine as it becomes available to state and local jurisdictions for use among target groups. HHS has developed guidance for the prioritization system for administration of the pandemic vaccine. HHS has divided the entire U.S. population into four broad categories—homeland and national security, health care and community support services, critical infrastructure, and the general population. Within each category, groups are clustered into five tiers that correspond to the vaccination priority—or target group—for that specific category. (See table 3 for target groups for a severe pandemic.) 
These targeted groups were derived through consideration of four vaccination program objectives: (1) protecting those who are essential to the pandemic response and provide care for persons who are ill; (2) protecting those who maintain essential community services; (3) protecting children; and (4) protecting workers who are at greater risk of infection because of their job. In its guidance, HHS also proposed that not all targeted groups be vaccinated in every pandemic, depending on the severity of the pandemic. For a less severe pandemic, for example, individuals in tiers 2 and 3 in the category of critical infrastructure would not be targeted for vaccination. HHS also noted that the guidance will need to be reassessed periodically before a pandemic occurs to consider factors such as changes in vaccine production capacity. During a pandemic, guidance will also be modified based on additional factors that will not be known until a pandemic occurs, including the characteristics of pandemic illness. HHS officials told us that should a pandemic occur in the near future, pandemic vaccine will likely be distributed from vaccine manufacturers directly to state and local jurisdictions using the same distribution systems the manufacturers regularly use for seasonal influenza vaccine. As with pre-pandemic vaccine, HHS anticipates that eventually multiple manufacturers will produce pandemic vaccine. However, it anticipates utilizing a single, centralized distributor. HHS expects to have a centralized distribution system in place around 2010. HHS faces three challenges with implementing its strategy for using pharmaceutical interventions during a pandemic. The first challenge is associated with uncertainties about the effectiveness and clinical outcomes of the pharmaceutical interventions. 
For example, the uncertainty concerning which influenza strain will cause the next pandemic raises the possibility that the pre-pandemic vaccine currently being developed will not offer protection against the pandemic strain. Also, because the actual pandemic-causing strain has not yet surfaced, researchers can only estimate what amount of vaccine will actually be needed to stimulate a sufficient human immune response. Similarly, the appropriate dosage of antivirals or the exact length of the treatment course needed to make them effective will not be known until the actual pandemic-causing strain emerges. Further, the ability of influenza viruses to develop resistance to antivirals also raises questions about their effectiveness. In 2005, a group of global experts on antivirals noted that studies have suggested that different strains of the H5N1 avian influenza virus have developed resistance to different antivirals. There is also the potential for adverse outcomes that may result from large-scale administration of a newly developed vaccine, such as what occurred during the “swine flu” outbreak of 1976. The government’s success in vaccinating large numbers of the public with the swine flu vaccine was negated by the development of Guillain-Barré syndrome among hundreds of immunized individuals, leading to several deaths. This adverse event only became apparent when the vaccine had been administered to large numbers of people. A second challenge concerns difficulties with the production of pharmaceutical interventions, particularly vaccines. The United States lacks vaccine manufacturing capacity; for example, we found that for the 2007-08 influenza season only one influenza vaccine manufacturer had its production processes entirely within U.S. borders. Additionally, in 2007 we found that the lack of U.S. 
vaccine manufacturing capacity is cause for concern among experts because it is possible that countries without domestic manufacturing capacity will not have access to vaccine in the event of a pandemic if the countries with domestic manufacturing capacity prohibit the export of the pandemic vaccine until their own needs are met. According to HHS, exacerbating the lack of manufacturing capacity is the length of time experts anticipate will be needed to manufacture a pandemic vaccine. HHS estimates that it may take as long as 20 to 23 weeks after the start of the pandemic for the first doses of pandemic vaccine to become available. Figure 2 shows how pharmaceutical manufacturers would proceed to develop and produce pandemic vaccine as well as when initial batches of vaccine are likely to become available. In response to this lack of manufacturing capacity, HHS has established the long-term goal of domestically producing enough pandemic vaccine for 300 million people within 6 months of having a reference strain of the pandemic virus. HHS expects to reach this level of manufacturing capacity around 2010. The department is currently making large investments in domestic vaccine manufacturing capacity for this purpose. (See app. II for a description of these investments.) HHS is doing this in part by supporting vaccine research with contracts that require manufacturers to establish vaccine-producing facilities within U.S. borders. Through these contracts, one U.S. facility has expanded its manufacturing capacity and is expected to double its existing capacity by 2009 and triple its capacity by 2011. A second facility was recently established in the United States and is expected to manufacture a licensed product in 2010. HHS officials told us there had also been progress in expanding domestic manufacturing capacity for antivirals. The third challenge HHS faces involves difficulties in stockpiling and distributing pharmaceutical interventions. 
The high costs of purchasing and storing antivirals call into question HHS’s plan to rely on state and local jurisdictions to acquire and store their own stockpiles of antivirals. For example, officials from one state we spoke with told us that the state was facing financial difficulty in determining how it will purchase its share of antivirals and in identifying and paying for adequate storage space. HHS officials have acknowledged that the cost of purchasing antivirals is high, but have also noted that the contract price HHS has negotiated for state and local jurisdictions is better than the retail price. No federal funding has been made available to aid state and local jurisdictions in building and maintaining storage capacity. In addition, should a pandemic occur in the near future, HHS plans to utilize multiple distributors for pre-pandemic and pandemic vaccines, allowing manufacturers to use existing processes with which they are familiar. However, HHS acknowledged that this process also has multiple weaknesses. For example, the current distribution plan requires extensive coordination between HHS and multiple manufacturers and distributors. It also requires that states and local jurisdictions manage vaccine shipments from multiple sources, which may complicate receipt and storage activities. In response, HHS is planning to centralize its distribution system through a single distributor. HHS has made progress on revising its 2005 guidance to state and local jurisdictions for identifying target groups for the use of pandemic vaccine, but has not finalized guidance for using antivirals and pre-pandemic vaccine. Since 2000, GAO and others have reported on the importance of having pre-established target groups for pharmaceutical interventions to avoid problems deciding who should receive these interventions.
In addition, during times of shortage, state and local public health officials look to the federal government for guidance, including when making decisions on which groups should be targeted for prioritization. For example, during the seasonal influenza vaccine shortage of 2004-05, state and local officials immediately adopted the revised guidance on who should be targeted for vaccination as recommended by CDC. State and local public health officials and others have stressed that federal guidance on target groups is needed to aid in their pandemic planning efforts. HHS first published target groups for pandemic vaccine and antivirals in the HHS Pandemic Influenza Plan in November 2005. These initial groups were identified to support a goal of reducing morbidity and mortality among those at greatest risk for developing complications from influenza, such as the elderly. Since the publication of the HHS Pandemic Influenza Plan, there has been wide recognition that other factors should be considered, such as protecting those critical workers needed to keep society functioning, including health care and law enforcement personnel. In addition, recent expansion in the production of antivirals has increased the amount available. Thus, HHS, in consultation with other federal agencies, was tasked by the National Strategy for Pandemic Influenza Implementation Plan and the HHS Pandemic Influenza Implementation Plan to revise the groups outlined in the HHS Pandemic Influenza Plan. In July 2008, HHS released guidance on prioritizing target groups for pandemic vaccine. HHS released draft guidance for public comment in the Federal Register on how antivirals may be used during a pandemic in June 2008. However, HHS has not yet released draft guidance identifying target groups for pre-pandemic vaccine. HHS officials told us they are working on draft guidance for pre-pandemic vaccine in collaboration with other federal agencies, such as DHS. 
According to officials, target groups for pre-pandemic vaccine are likely to resemble those for pandemic vaccine, but with more of a focus on the critical workforce rather than on the general population. HHS officials said a tiered structure, such as that used for the pandemic vaccine, would be needed only if a pandemic occurs before HHS has reached its goal of stockpiling enough doses for 20 million people.

HHS has initiated efforts to improve the surge capacity of health care providers, but these efforts will be challenged during a severe pandemic. Surge capacity of health care providers will be hindered by existing shortages of health care providers and by the potentially high absentee rates of providers during a pandemic. Inadequate staffing of health care facilities will be likely, and the ability to deliver health care consistent with established standards of care may be compromised. HHS’s efforts include plans to supplement the number of health care providers with medical and nursing students. Given the uncertain effectiveness of efforts to increase surge capacity, HHS has developed guidance to assist health care facilities in planning for altered standards of care; that is, for providing care while allocating scarce equipment, supplies, and personnel in a way that saves the largest number of lives in mass casualty events, such as pandemics. In a severe pandemic, existing health care provider shortages would worsen as providers become infected through exposure to infected patients or reach exhaustion because of longer working hours. The federal government assumes absenteeism among all workers, including health care providers, could be as high as 40 percent. During the 2003 SARS outbreak (a disease that, like pandemic influenza, has a high mortality rate and poses a high risk to health care workers), health care workers accounted for more than 20 percent of the infected cases.
During the epidemics in Toronto and Hong Kong, 51 percent and between 28 percent and 50 percent, respectively, of health care providers who treated SARS patients became infected with the SARS virus. Studies have shown that during extreme public health emergencies, such as a pandemic, some health care workers may be unable or unwilling to report to work. For example, a survey of public health department workers, including communicable disease staff, nurses, and physicians, at three public health departments in Maryland found that approximately 46 percent said they would be unlikely to report to work during a pandemic outbreak. Similarly, in a survey of hospital personnel, including doctors and nurses, only half responded that they would be willing to report to work during a pandemic. Those who said they would be unlikely to report to work cited fear of contracting an illness as the reason. These potential workforce shortages during a pandemic will affect care for all patients, not just those with influenza. HHS has initiated many efforts to increase the number of health care workers during a public health emergency by supplementing the workforce with federal response teams and by encouraging mutual aid between states. However, HHS faces challenges in improving surge capacity during a severe pandemic because of the widespread effects of a pandemic and the existing shortages of health care providers. HHS has planned four types of efforts to improve surge capacity during a pandemic. First, the HHS Pandemic Influenza Plan recommends that health care facilities use personnel available locally to increase the number of health care providers during emergencies. These recommendations include using trainees (such as medical and nursing students), patients’ family members, and retired health care providers to provide support for essential patient care at times of severe staffing shortages.
The plan recommends that hospital clinical administrators take on patient care responsibilities and that facilities recruit health care providers from other medical settings, such as medical offices and day surgery centers, to assist with patient care in the hospital setting. Additionally, the plan recommends that health care providers be cross-trained to provide support for essential patient care at times of severe staffing shortages. To assist with this effort, HHS’s Agency for Healthcare Research and Quality has developed a video to help train health care workers who are not respiratory care specialists to provide basic respiratory care and ventilator management to adult patients during mass casualty events. In addition, the HHS Pandemic Influenza Plan recommends deployment of federal medical responders, such as members of the National Disaster Medical System, during the early stages of a pandemic to supplement the number of health care providers. Second, the HHS Pandemic Influenza Plan encourages state and territory officials to use the Emergency System for Advance Registration of Volunteer Health Professionals program, which enables state and territory officials to quickly identify licensed volunteer professionals to work in areas with shortages. This program consists of state-based systems that provide advance registration and credentialing information for clinicians needed to augment health care facilities during a declared emergency. The program enables the sharing of pre-registered health care professionals across state lines. According to HHS, as of February 2008, 40 state and territorial jurisdictions had begun to implement the program; all states and territories are required to have this program fully operational by August 2008. Third, HHS has advised state officials to incorporate the Emergency Management Assistance Compact (EMAC) in their plans as another vehicle for obtaining medical assistance during a pandemic.
Once a governor declares a state of emergency, a state can request that EMAC address its need for resources, such as health care providers. EMAC personnel will find states that have health care providers who can be deployed across state lines. EMAC was established in 1996 and is administered by the National Emergency Management Association. All 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands have enacted legislation providing authority to join EMAC. Fourth, HHS encourages state and local officials to use other mechanisms to expand surge capacity of health care providers for providing care to less severely ill patients during a pandemic. These mechanisms would encourage home care of less severely ill patients and include “telehealth” (also known as “telemedicine”), which allows health care providers in hospitals to care for and monitor patients at home with the use of electronic information and telecommunications technologies; and call centers (similar to nurse advice lines), which will allow patients at home to contact health care providers in hospitals in order to obtain medical advice regarding home care. HHS faces several challenges in its efforts to increase surge capacity of health care providers during a pandemic. There are concerns that the use of untrained personnel may reduce the capacity of trained health care providers to deliver needed care. For example, officials from one professional association told us that using such individuals would require training and supervision, which would actually increase the workload of the health care facilities’ staff. They also told us that cross-training personnel to provide support for essential patient care during a mass casualty event may be infeasible because health care providers will be busy caring for patients in their own areas of expertise. 
Cross-training of health care providers needs to be done in advance, but this may be infeasible because it would take providers away from their daily patient-care responsibilities, which would be difficult given current workforce shortages. Furthermore, health care providers from other areas may not be available for deployment in a severe pandemic. Members of response teams, such as those of the National Disaster Medical System, already have full-time jobs in health care. Therefore, these teams would not necessarily add to the nation’s overall number of health care providers who would be available to treat influenza patients. We were told by HHS and FEMA officials that the National Disaster Medical System response teams will not likely be deployed during a pandemic outbreak because of the widespread nature of a pandemic and the need for those responders in their own regions. Similarly, while the EMACs make it easier for health care providers to work in states other than those in which they are licensed, given the widespread nature of pandemics, health care providers likely will be needed in their own home regions. During a severe pandemic, inadequate staffing of health care facilities will be likely despite efforts to improve surge capacity. Thus, the ability to deliver health care consistent with established standards of care for all patients may be compromised. HHS officials told us they believe that decisions on the allocation of scarce resources—such as equipment, supplies, and personnel—are best made at the local level. Therefore, the HHS Pandemic Influenza Implementation Plan recommends that health care facilities plan ahead for providing altered standards of care; that is, for providing care while allocating scarce resources in a way that saves the largest number of lives in mass casualty events.
With altered standards of care, instead of treating the sickest or most injured patients first, health care providers would identify and treat patients who have a critical need for treatment and would be likely to survive. Complicating conditions, such as an underlying chronic disease that may affect an individual’s ability to survive, would be considered in the decision-making process. Resources being used by current patients, such as those recovering from surgery, would also become part of the overall resource allocation decisions and might be re-allocated to patients with a more critical need for treatment and a higher likelihood of survival. Altered standards of care would be implemented on a temporary basis. Once the event wanes and more resources become available, provision of health care would return to the established standards of care used in normal situations. HHS has issued two guidance documents, Altered Standards of Care in Mass Casualty Events and Mass Medical Care with Scarce Resources: A Community Planning Guide, to assist health care facilities in planning for altered standards of care. Altered Standards of Care in Mass Casualty Events provides health care facilities with guiding principles for developing altered standards of care. Additionally, it includes a discussion of the authority to activate the use of altered standards of care and the associated legal and regulatory issues, including the possible need for liability protection for health care providers and facilities. Mass Medical Care with Scarce Resources expands on the Altered Standards of Care in Mass Casualty Events report.
It provides a discussion of the circumstances that communities would face as a result of a mass casualty event, approaches and strategies that could be used to provide the most appropriate standards of care possible under the circumstances, examples of tools and resources available to help state and local officials in their planning process, ethical considerations in planning for a mass casualty event, and a pandemic case study. PAHPA calls for HHS’s Assistant Secretary for Preparedness and Response to lead and coordinate HHS emergency preparedness and response activities. Accordingly, the Assistant Secretary is engaged in efforts to increase the number and enhance the preparedness level of health care providers for public health emergencies. As part of this effort, HHS officials told us that they have begun to examine issues related to recruitment, retention, and protection of the public health workforce with the goal of identifying strategies to overcome workforce shortages. In addition, to encourage health professionals to enter employment in a state or local public health agency, PAHPA authorizes HHS to award grants to states to assist in operating public health workforce loan repayment programs for individuals who serve in health professional shortage areas or in areas at high risk of a public health emergency. PAHPA also authorized HHS to develop Centers of Public Health Preparedness at accredited schools of public health. HHS intends that these centers will help to train and educate health professionals to prepare for and respond to public health emergencies, including a pandemic. As part of this effort, CDC will develop core emergency preparedness and response curricula, identify performance goals, and develop health systems research projects. HHS has already incorporated standardized benchmarks and performance measures into existing grant programs.
HHS will rely on state and local jurisdictions to utilize nonpharmaceutical interventions to help slow the spread of disease and to lessen the burden on the nation’s health care system until a pandemic vaccine is widely available. HHS has developed guidance and is investing in research on the general use and effectiveness of nonpharmaceutical interventions, thereby helping jurisdictions make more informed decisions. According to HHS, the findings from this research will be used to update existing guidance. However, HHS faces difficulties in helping state and local jurisdictions overcome implementation challenges, such as identifying steps for ensuring community compliance. The authority to implement nonpharmaceutical interventions—such as decisions on school closures—to slow the spread of disease and lessen the burden on the nation’s health care system until a pandemic vaccine is available rests with state and local jurisdictions. To assist state and local authorities with their current planning efforts for using nonpharmaceutical interventions, HHS published a guidance document in February 2007—the Interim Pre-pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States – Early, Targeted, Layered Use of Nonpharmaceutical Interventions. HHS officials told us that the recommendations in the guidance are for pre-pandemic contingency planning and are intended to provide state and local jurisdictions with a conceptual framework to guide their planning. In this guidance, HHS introduces its “community mitigation framework” that is based upon a targeted, layered strategy involving the direct application of multiple, partially effective nonpharmaceutical interventions, initiated early and maintained consistently throughout a pandemic.
Specifically, HHS’s guidance describes four interventions: (1) isolation (either at home or in a health care setting) and treatment (as appropriate) with antivirals of all individuals with confirmed or probable infections; (2) voluntary home quarantine of members of households exposed to the disease and consideration of combining this intervention with antivirals, provided sufficient amounts are available and can readily be distributed; (3) school closures (including public and private schools as well as colleges and universities) accompanied by closures of other public settings (e.g., shopping malls and movie theaters) to prevent out-of-school social contacts; and (4) adult social distancing to reduce contact among adults in the community and workplace. HHS officials and other experts have acknowledged the significance of implementing certain nonpharmaceutical interventions in order to maximize the available public health benefit while minimizing adverse secondary effects of the interventions. Thus, HHS recommends that state and local jurisdictions consider the severity of the pandemic when making decisions about how to respond to the outbreak. For example, for a less severe pandemic, HHS recommends voluntary home isolation of sick individuals, but generally does not recommend measures that may be more burdensome, such as voluntary quarantine of exposed household members, school closures, and adult social distancing. HHS recommends that state and local jurisdictions implement those additional measures and others in a more severe pandemic. Department officials and experts have also stressed the importance of balancing the need to intervene early enough for nonpharmaceutical measures to be effective, while at the same time not causing unnecessary hardship by implementing them too early. 
HHS and other federal agencies released guidance in March 2008—the Federal Guidance to Assist States in Improving State-Level Pandemic Influenza Operating Plans—that included information to assist state and local jurisdictions in determining when to implement certain nonpharmaceutical interventions. For example, this guidance recommends implementing voluntary quarantine and administering antivirals to individuals exposed to the pandemic virus when a case of novel influenza is detected in an area, including before sustained human-to-human transmission has been established. Once a pandemic is underway, HHS anticipates providing technical assistance to state and local jurisdictions on the implementation of nonpharmaceutical interventions. This technical assistance would include assessing the specific epidemiological characteristics of the pandemic, such as how the pandemic-causing strain is transmitted, and consulting with state and local jurisdictions on the effectiveness of the nonpharmaceutical interventions that had been implemented. Because it is not possible to accurately predict the severity of a pandemic, HHS officials told us the recommendations in the guidance may change significantly during an actual pandemic, based on data HHS gathers from providing technical assistance as well as from initial outbreak investigations or routine surveillance systems. HHS officials acknowledge that the recommendations in its guidance are not specific because the scientific evidence on the use and effectiveness of nonpharmaceutical interventions is limited, and therefore inconclusive. The research to date using mathematical modeling and analysis of historical data of past pandemics suggests that utilizing multiple nonpharmaceutical interventions simultaneously and early in a pandemic may aid in slowing disease transmission.
For example, historical studies of the 1918-19 pandemic describe how some cities reduced death rates by successfully implementing multiple nonpharmaceutical interventions, including social distancing, mandated mask wearing, and case isolation. However, because of incomplete historical records, researchers are not able to determine precisely where, when, and for how long these interventions were implemented. HHS has supported several research initiatives to establish a stronger evidence base concerning the implementation and effectiveness of nonpharmaceutical interventions, thereby helping jurisdictions to make more informed decisions. For example, in October 2006, HHS awarded $5.2 million to support eight research projects on topics ranging from the role hand hygiene can play in reducing disease transmission to examining upper respiratory infections in families. According to HHS, the findings from this research will be used to update existing guidance. HHS and other experts have stressed the need for additional research to, for example, better inform the assumptions used in mathematical models. HHS listed other key areas for further research in its guidance, such as understanding fundamental questions regarding influenza transmission and the potential psychosocial effects of certain nonpharmaceutical interventions, such as prolonged voluntary home quarantine and social distancing. HHS faces difficulties in helping state and local jurisdictions implement nonpharmaceutical interventions. First, as HHS acknowledged in its guidance, there is the potential for state and local jurisdictions to implement these interventions in an uncoordinated, untimely, and inconsistent manner, thereby dramatically reducing their effectiveness. For example, if one jurisdiction implements a voluntary quarantine of sick individuals and a neighboring jurisdiction does not, the overall movement of sick individuals in the area may not be sufficiently reduced.
HHS hopes that state and local jurisdictions will follow its guidance and act in concert, but HHS cannot compel jurisdictions to do so. Second, HHS faces the challenge of helping state and local jurisdictions identify specific thresholds for implementing and ending nonpharmaceutical interventions, such as at what point to close schools. The Federal Guidance to Assist States in Improving State-Level Pandemic Influenza Operating Plans provides general guidance to state and local jurisdictions on when to consider beginning to implement nonpharmaceutical interventions. However, this guidance does not provide details on when to implement specific interventions. For example, the guidance recommends state and local officials begin to consider closing schools when transmission of a pandemic virus occurs, but does not identify a specific absentee rate at which officials should take action. Experts have noted that determining specific triggers is difficult, partly because the data currently available are imperfect and sparse, requiring decision-makers to make assumptions regarding the transmission rate of the pandemic-causing strain as well as the effects of other community behaviors during the pandemic. In addition, state and local officials generally do not have the capabilities to collect the data that federal authorities will need to develop specific triggers during an actual pandemic. For example, one local official noted that one method of determining specific community triggers would be to use prevalence rates, which measure the percentage of the population infected with disease. However, state and local areas do not have surveillance systems capable of providing this level of detail in real-time. Third, HHS faces the challenge of helping state and local jurisdictions convince residents to comply with its requests regarding nonpharmaceutical interventions. 
This task is especially difficult because restrictions on public activities to combat a pandemic may need to be in place for several months. During the 1918-19 pandemic, nonpharmaceutical interventions were implemented for 2 to 8 weeks. However, researchers have suggested that such interventions would need to be implemented for a longer period in a future pandemic in order to prevent another increase in transmission after the interventions are discontinued. In the 1918-19 pandemic, nonpharmaceutical interventions were lifted and, in some cases, the public became fatigued with the interventions, leading to public opposition and noncompliance when authorities found it necessary to reimpose the restrictions. A fourth challenge HHS faces is that these restrictions may have negative impacts on the nation’s economy and on the financial well-being of individual households. For example, nonpharmaceutical interventions may exacerbate worker absenteeism as parents stay home to care for their children when schools are closed. This could eventually result in disruptions in the provision of essential services, such as law enforcement. Similarly, lengthy nonpharmaceutical interventions could financially strain individuals and families. For example, while an HHS-sponsored study on public perceptions regarding a pandemic found a generally high willingness to comply with public health recommendations, it also found a decrease in reported ability to comply with recommended measures when financial constraints were considered. Specifically, 57 percent of respondents said they would have problems complying with recommended measures because of financial difficulties if they had to be out of work for 1 month, with 76 percent reporting problems if they had to miss 3 months. A fifth challenge for HHS is U.S. citizens’ lack of trust in federal government public health authorities. A recent study found that only 40 percent of the U.S.
population would trust federal government public health authorities as a source for accurate information. The authors of this study assert that this lack of trust may have been exacerbated by the public’s negative perceptions of the government’s response to Hurricane Katrina in 2005 and that the U.S. population may now be less willing to cooperate with some public health requirements in the future, including isolation of sick individuals. HHS has made progress by establishing roles, responsibilities, and procedures for communicating messages to the general public during a pandemic. HHS has also developed pandemic educational materials to communicate messages to the general public before and during a pandemic and has identified ways to disseminate these materials. In addition, HHS has engaged the general public on pandemic issues to better understand public perceptions and knowledge. Nonetheless, communicating sensitive and complex issues to the general public during a pandemic will be challenging. HHS has assigned roles and responsibilities, and developed procedures, for how HHS plans to communicate with the general public about a pandemic. Under the National Response Framework, HHS is the lead federal agency for public health and medical services, and as such, HHS is the federal agency responsible for communicating with the general public about the public health and medical aspects of a pandemic before and during an outbreak. In addition, the HHS Pandemic Influenza Plan identified activities that should be undertaken to prepare HHS to communicate with the general public before and during a pandemic. In November 2006, HHS completed the U.S. Department of Health and Human Services Pandemic Influenza Communications Plan which lays out detailed roles, responsibilities, and procedures to guide HHS communications with the general public. 
For example, this plan assigned HHS’s Office of the Assistant Secretary for Public Affairs responsibility for coordinating pandemic health messages across all HHS agencies and with state and local communications staff in order to ensure that all HHS agencies work closely together to make public statements that are timely, consistent, and accurate. HHS has named spokespersons within HHS to deliver messages to the public before and during an outbreak. HHS has trained federal, state, local, and private sector public affairs officials to communicate with the general public about a pandemic. The Crisis and Emergency Risk Communication training modules developed by HHS clarify the role of spokespersons, describe the psychology of communicating during a crisis, and provide best practices for working with the media during a crisis. HHS has held 10 Crisis and Emergency Risk Communication training sessions for nearly 500 senior federal officials and public affairs staff, and 11 regional training sessions for approximately 900 state and local leaders. Two additional trainings are scheduled in 2008. HHS also held Crisis and Emergency Risk Communication training sessions in June 2007 for Red Cross leaders and in January 2007 for stakeholders. Nearly 900 training sites participated in these sessions via the Internet. During a pandemic, the HHS communications effort will operate out of its Emergency Communications Center. The center’s capabilities include originating or accessing video feeds, news conferencing, posting mass electronic mailings, responding to media telephone inquiries, and receiving, vetting, and clearing messages to be released by HHS. HHS will use a departmental public affairs conference line to provide telephone connections for public affairs staff throughout the department.
These phone connections will allow HHS public affairs personnel to work from dispersed sites during the crisis, coordinate messages, receive guidance or direction, and provide information to those needing it. The DHS National Incident Communications Conference Line will also be used by HHS to exchange information with other federal agencies. In addition, the Office of the Assistant Secretary for Public Affairs conducts media outreach to strengthen the relationship between the media and HHS and to support pandemic planning and education. Periodic briefings are scheduled between senior department officials, including the HHS Secretary, and members of the press. For example, in early 2007 HHS held a series of roundtable discussions on pandemics with the major broadcast and cable television networks, wire services, and bloggers to raise awareness of pandemics; the Secretaries of HHS and the Department of Agriculture participated. HHS press-office staff members also talk to the media regularly to answer questions and provide updates on pandemic planning and related issues. In January 2007, HHS began holding a series of tabletop exercises with key media leaders and senior government officials in six major cities to facilitate effective communication and help ensure the timely dissemination of accurate information to the general public through the use of media outlets during a pandemic. HHS has developed and disseminated educational materials for communicating critical information to the general public and is in the process of developing additional materials. HHS has identified some of the critical information that the general public will require during a pandemic and has developed message maps—communications tools used to help organize complex information—to convey that information in a concise format before an outbreak. HHS has developed 82 message maps.
HHS’s message maps are each designed to distill three primary, easily understood messages on issues such as the differences between avian influenza, pandemic influenza, and seasonal influenza, as well as what HHS is doing to prepare for a pandemic. Each of these primary messages has three supporting messages that can be used as appropriate to provide context for the issue being mapped. HHS message maps take the form of a series of questions and answers and are made public so that spokespersons from across the government or from private organizations can use the maps to convey accurate and consistent background information to their constituents before an outbreak. Table 4 shows an example of an HHS message map. HHS has several means of disseminating information regarding a pandemic. HHS manages www.pandemicflu.gov, the official U.S. government Web site for disseminating information on pandemics to the public before and during a pandemic. The Web site is updated with new information as it becomes available and provides the public, public health and emergency preparedness officials, government and business leaders, school systems, and local communities with comprehensive governmentwide information on a pandemic. In addition, HHS will use a variety of other information systems to distribute pandemic information including telephone hotlines, such as 1-800-CDC-INFO; educational sessions through teleconferencing, such as the Clinician Outreach and Communication Activity, which the public can call in to; satellite informational broadcasts; and radio and television public service announcements. HHS has developed public service announcements for use on television and radio that urge the general public to learn about and prepare for a pandemic and has created an archive of materials—video footage, posters, and fact sheets—for conveying key pandemic messages to the general public.
HHS also has developed planning checklists for specific audiences—such as medical providers, schools, and businesses—to raise awareness and to assist these audiences in preparing for a pandemic. For example, the planning checklists identify issues that should be considered, such as storing additional infection control supplies (such as hand cleansing products and tissues); establishing pandemic-specific policies, procedures, and roles and responsibilities; planning to maintain continuity of operations; coordinating activities with local stakeholders; practicing infection control; and developing communications plans. HHS officials told us that communicating messages to the general public during a pandemic will be challenging despite the department’s preparations. The first challenge is that a pandemic will create an immediate, intense, and sustained demand for information from both the general public and the groups to whom the public will be turning for information, such as the media and health care community. In addition, the general public will likely turn to numerous sources other than HHS for information, including other federal agencies, state and local authorities, the media, health care providers, the Internet, hotlines, employers, peers, family, and community leaders. HHS will not be able to ensure that messages delivered to the general public by non-HHS entities are coordinated and consistent with HHS messages, and the communications may confuse the general public. A second challenge concerns the public’s reception to HHS’s communications. HHS has found a low level of public understanding of pandemic issues, some unwillingness to comply under certain circumstances with the messages that HHS plans to deliver, and anxiety over particular messages (such as why pre-pandemic vaccines and some antivirals will not be made available to the general public).
For example, a nationally representative survey on pandemic issues found that 58 percent of the general public in the United States did not know what a pandemic is. The survey also found that the public is less willing or is unable to follow some of the recommendations that HHS plans to communicate during a pandemic. For example, HHS plans to recommend that sick individuals who do not require hospital care observe voluntary home isolation and treatment; however, 24 percent of the people surveyed said that they did not have someone to take care of them in their homes. The same study also found that 35 percent of respondents would go to work if requested by their employer, even if public health officials recommended that people stay at home during a pandemic. Furthermore, HHS tabletop exercises have identified several issues that will prove challenging when communicating with the public during a pandemic, particularly the sensitivity of certain messages, the use of specialized public health terms in the messages, and the inadequacy of HHS message maps for addressing the complexity of the issues being communicated. Discussions during these tabletop exercises will help HHS develop plans to resolve these identified challenges. For example, HHS’s messages will have to communicate clearly the difference between specialized terms such as isolation and quarantine, and the meaning of the phrase “altered standards of care.” Because of the complexity of the issues in its message maps, HHS plans to develop additional educational materials to distribute to the public before a pandemic in order to make these complexities more comprehensible.

Although HHS has made progress in identifying issues that need to be addressed and in funding research and vaccine production, significant challenges remain, many of which are beyond HHS’s control or cannot be quickly addressed.
Such challenges include coping with the potentially high absentee rate among health care providers during a pandemic and the length of time it will take to develop a pandemic vaccine once the virus is identified. One important activity that is within HHS’s control, however, and that HHS could address before a pandemic is finalizing the guidance on how limited pharmaceutical interventions should be used during a pandemic. A severe pandemic, such as that of 1918-19, has the potential to result in widespread illness and death and is expected to overwhelm the nation’s ability to respond. According to HHS, initial batches of the most effective protective measure—a pandemic vaccine—may take as long as 20 to 23 weeks after the start of the pandemic to become available. Although the federal government has provided some guidance, final decisionmaking will fall on state and local officials, who will have to decide how to allocate pharmaceutical interventions, whom the interventions should go to first, and when. HHS, in consultation with other federal agencies, has been tasked with revising guidance to assist state and local jurisdictions in identifying groups that should be considered a priority for receiving limited pharmaceutical interventions. In 2008, HHS released guidance on prioritizing target groups for pandemic vaccine and draft guidance for public comment on how antivirals may be used during a pandemic. However, HHS has not yet released draft guidance for public comment on prioritizing target groups for pre-pandemic vaccine. We and others have reported since 2000 how problems related to pandemic planning—such as problems with the distribution and administration of pharmaceutical interventions—can arise if target groups are not established in advance. This lack of essential information could slow the initial response at the state and local levels and complicate the general public’s understanding of the necessity for rationing these interventions.
Additionally, the general public should continue to be engaged in the process of priority setting, as public participation is an essential component for acceptance of the tough decisions that will be required unless and until greater capacity or a universal vaccine can be developed.

To improve the nation’s preparedness for a pandemic, we are recommending that the Secretary of HHS expeditiously finalize guidance to assist state and local jurisdictions in determining how to effectively use limited supplies of antivirals and pre-pandemic vaccine in a pandemic, including prioritizing target groups for pre-pandemic vaccine.

HHS provided written comments on a draft of this report, which we have reproduced in appendix III. HHS also provided technical comments, which we have incorporated as appropriate. In its comments, HHS noted that it has taken and plans to take additional actions related to our recommendation since we provided the draft report to the department for its review. HHS indicated that the final guidance for pandemic vaccine allocation was released on July 23, 2008, and that this guidance describes the groups who should be targeted and prioritized for receiving pandemic vaccine. HHS also indicated that the department released draft guidance on how antivirals may be used during a pandemic in June 2008, and that HHS will release for public comment proposed draft guidance on pre-pandemic vaccine allocation in the near future. We updated the text of the report to reflect these developments. We also revised the wording of our recommendation in light of HHS’s comment that HHS recommends that antivirals in public-sector stockpiles should be used primarily for the treatment of individuals sick with influenza. We first identified the need for finalized guidance on how limited pharmaceutical interventions should be used during a pandemic, including target groups where appropriate, in 2000.
We believe that finalizing guidance on the use of pharmaceutical interventions will be crucial for responding to a pandemic outbreak and that the necessary guidance documents should be finalized as soon as possible. Throughout its comments, HHS described aspects of its pandemic preparedness activities that it believed could be presented more clearly in our report and presented additional details about its activities. We have revised the language in the report to reflect HHS’s comments where necessary. In particular, we revised our discussion of pharmaceutical interventions to clarify our presentation of the three types of pharmaceuticals and how pre-pandemic vaccine will be distributed and administered during a pandemic. We also revised the report to reflect HHS’s objection to our statement that the use of antivirals early in a pandemic could slow the spread of the pandemic. HHS commented that the magnitude of the impact of pharmaceuticals on pandemic spread is uncertain given “…limited countermeasure supplies, unclear effectiveness, and operational challenges…” Many of HHS’s comments addressed the scope of the department’s actions in relation to the responsibilities of states and local jurisdictions. For example, HHS noted that it will directly oversee the administration of pre-pandemic vaccine to members of the critical workforce, rather than fully delegate that task. For antivirals, HHS agreed that states are free to administer antivirals in their own stockpiles to anyone they choose, but also noted that state plans have been reviewed by CDC to ensure that the plans reflect the national recommendation to use antivirals primarily for treatment of individuals sick with influenza. Third, HHS emphasized that health care personnel surge capacity in a pandemic is a local responsibility.
Although the 2005 HHS Pandemic Influenza Plan recommends deployment of federal medical responders to supplement the number of health care providers, HHS noted that the federal government does not have adequate health care personnel to provide surge capacity. On that topic, HHS also noted that its planning documents for allocating scarce health care resources were intended as “…planning documents for consideration by communities, not for the purposes of establishing definitive standards.” Finally, HHS proposed alternate terms for some of the concepts in our report (we have noted these instances in the report). For example, HHS disagreed with our use of the term “altered standards of care” and said that the more appropriate term is “standards of care appropriate to the situation.” Because we believe that “altered standards of care” is an accurate description of what may happen as the result of the allocation of scarce health care resources in a pandemic emergency and because HHS used this phrase in its guidance to state and local jurisdictions, we did not make this change.

We are sending copies of this report to the Secretary of HHS and to interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

The National Response Framework lays out, in part, the manner in which the federal government responds to domestic incidents. The plan is a guide for an all-hazards response, categorizing the types of federal assistance into specific emergency support functions.
Primary and supporting agencies are listed for each emergency support function. “Emergency Support Function #8 – Public Health and Medical Services Annex” of the National Response Framework directs the Department of Health and Human Services (HHS) to provide support as the primary agency, with 16 other agencies, including the Departments of Homeland Security and Agriculture. The National Response Framework replaced the National Response Plan in March 2008; the National Response Plan had, in turn, replaced the Federal Response Plan in April 2005. The Federal Response Plan, originally drafted in 1992 and revised in 1999, established the process and structure for the federal government’s provision of assistance in response to any major disaster or emergency declared under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act). The purpose of the Stafford Act is “to provide an orderly and continuing means of assistance by the federal government to state and local governments in carrying out their responsibilities to alleviate the suffering and damage which result” from disasters and emergencies.

National Strategy for Pandemic Influenza

On November 1, 2005, the President of the United States released the National Strategy for Pandemic Influenza, which provides a framework for future planning efforts for how the country will prepare for, detect, and respond to an influenza pandemic. The strategy reflects the federal government’s approach to the pandemic threat and is based on three main types of activities: (1) preparedness and communication, (2) surveillance and detection, and (3) response and containment.
National Strategy for Pandemic Influenza Implementation Plan

On May 3, 2006, the President of the United States released the National Strategy for Pandemic Influenza Implementation Plan, which further clarifies the roles and responsibilities of governmental and nongovernmental entities—including federal, state, local, and tribal authorities and regional, national, and international stakeholders—and provides preparedness guidance for all segments of society. This plan addresses the following topics: U.S. government planning and response; international efforts and transportation and borders; protecting human health; protecting animal health; law enforcement, public safety, and security; and institutional considerations. The federal government has identified approximately 300 action items to address the threat of a pandemic. These items include 199 action items led or co-led by HHS. As stated in the plan’s preface, the plan will be reviewed on a continuous basis and revised as appropriate to reflect changes in the understanding of the threat and the development of new technologies. Since the release of the implementation plan, the Homeland Security Council released the National Strategy for Pandemic Influenza Implementation Plan One Year Summary on July 17, 2007. This document summarizes the federal government’s efforts to prepare for an influenza pandemic.

Because HHS has primary responsibility for coordinating the nation’s response to public health emergencies under “Emergency Support Function #8,” the department has developed the HHS Pandemic Influenza Plan. The first part of this plan provides HHS’s strategic plan for dealing with an influenza pandemic. This includes information on recommendations on the use of vaccines and antivirals, legal authorities, key HHS activities, HHS research activities, and international partnerships on avian and pandemic influenza.
Preparing for and responding to a pandemic will not be purely a federal responsibility; it will primarily be a local response. And because a pandemic is likely to occur in multiple areas simultaneously, resources cannot be geographically shifted as is often done with other emergencies; every community will need to rely on its own planning and resources to respond to the outbreak. Therefore, the second part of the HHS Pandemic Influenza Plan consists of 11 supplements that provide guidance to state and local officials on response elements necessary for preparation for a pandemic (see table 5). The third part of the plan, which details the critical action items for which HHS has the lead as described in the National Strategy for Pandemic Influenza Implementation Plan, was produced as a separate plan—the Pandemic Influenza Implementation Plan—and was released in November 2006. The Pandemic Influenza Implementation Plan also includes a second part that contains the HHS agencies’ operational plans. The HHS Pandemic Influenza Plan will be reviewed on a continuous basis and revised as appropriate to reflect changes in the understanding of the threat and new technologies. HHS has released five updates regarding the progress of the department’s preparedness efforts: on March 13, 2006; June 29, 2006; November 13, 2006; July 18, 2007; and March 17, 2008.

Homeland Security Presidential Directive-21: Public Health and Medical Preparedness

On October 18, 2007, the President of the United States released Homeland Security Presidential Directive-21: Public Health and Medical Preparedness, which provides a strategy for protecting the health of the U.S. population against all disasters, including a pandemic. This directive describes four critical components of public health and medical preparedness: biosurveillance, countermeasure distribution (including pharmaceuticals), mass casualty care, and community resilience.
All four critical components will include coordination of efforts at the federal, state, and local levels, as well as with private sector, public health, and medical disaster response resources.

Guidance on Allocating and Targeting Pandemic Influenza Vaccine

On July 23, 2008, HHS, in coordination with DHS, released the Guidance on Allocating and Targeting Pandemic Influenza Vaccine. This guidance provides a framework to state and local jurisdictions on how to allocate limited supplies of pandemic vaccine to targeted groups, with the goal of providing this vaccine to all who choose to receive it. According to the guidance, the groups targeted for vaccination vary depending on the severity of the pandemic.

According to HHS officials, it is important to have a stockpile of pharmaceutical interventions, when possible, for use during the early stages of a pandemic. HHS allotted portions of its total fiscal year 2006 appropriation for pandemic-related purposes—$5.683 billion—to the acquisition and development of pharmaceutical interventions. Specifically, approximately $1.1 billion was targeted for investment in antivirals and approximately $3.2 billion was dedicated to vaccines. HHS has also established goals for amounts of pharmaceutical interventions to be stockpiled nationally (see table 6). HHS has invested millions of dollars in stockpiling antivirals to achieve its two goals for antivirals. Table 7 summarizes the approximate number of courses stockpiled as of May 2008. In addition, in March 2006, HHS allotted $200 million to the development of additional antivirals, and in January 2007, the department awarded a 4-year contract of about $103 million for further development of the new antiviral peramivir. HHS has also awarded contracts to purchase pre-pandemic vaccines from manufacturers to add to the federal stockpile. See table 8 for HHS’s efforts to stockpile pre-pandemic vaccines.
HHS officials told us that the greatest challenge to preparing for an influenza pandemic and implementing its plans for using pharmaceutical interventions is the lack of vaccine manufacturing capacity within the United States. We found in prior work that the lack of U.S. vaccine manufacturing capacity is cause for concern among experts because it is possible that countries without domestic manufacturing capacity will not have access to vaccines in the event of a pandemic if the countries with domestic manufacturing capacity prohibit the export of the pandemic vaccine until their own needs are met. Table 9 describes other HHS initiatives to establish domestic manufacturing infrastructure for vaccine production. Other HHS activities to enhance domestic vaccine manufacturing capacity include investing in vaccine development and research. For example, HHS has invested over $1 billion in development of a cell-based approach to influenza vaccine manufacturing, which it claims will modernize the current egg-based production process (see table 10). The current manufacturing process uses chicken eggs, and egg-based vaccines can easily become contaminated. Cell-based technology does not have these sterility issues and allows for faster development and greater production capacity. Although cell-based vaccine production has been used for other vaccines, it has not been approved for use in developing influenza vaccines. However, HHS anticipates that a licensed cell-based influenza vaccine will be manufactured in 2010. Also, in January 2007, HHS awarded contracts totaling approximately $133 million to vaccine manufacturers for development of pre-pandemic vaccines containing adjuvants—substances that may be added to a vaccine to increase the body’s immune response, thereby necessitating a lower dose of vaccine.

In addition to the contact named above, Martin T. Gahart, Assistant Director; George Bogart; Cathleen Hamann; Gay Hee Lee; and Deborah J.
Miller made key contributions to this report.

Influenza Pandemic: Federal Agencies Should Continue to Assist States to Address Gaps in Pandemic Planning. GAO-08-539. Washington, D.C.: June 19, 2008.
Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008.
Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007.
Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007.
Influenza Vaccine: Issues Related to Production, Distribution, and Public Health Messages. GAO-08-27. Washington, D.C.: October 31, 2007.
Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007.
Emergency Management Assistance Compact: Enhancing EMAC’s Collaborative and Administrative Capacity Should Improve National Disaster Response. GAO-07-854. Washington, D.C.: June 29, 2007.
Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007.
Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007.
Emergency Management: Most School Districts Have Developed Emergency Management Plans, but Would Benefit from Additional Federal Guidance. GAO-07-609. Washington, D.C.: June 12, 2007.
Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007.
The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007.
Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006.
Influenza Pandemic: Applying Lessons Learned from the 2004-05 Influenza Vaccine Shortage. GAO-06-221T. Washington, D.C.: November 4, 2005.
Influenza Vaccine: Shortages in 2004-05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005.
Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005.
Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005.
Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreak. GAO-04-1100T. Washington, D.C.: September 28, 2004.
Bioterrorism: Public Health Response to Anthrax Incidents of 2001. GAO-04-152. Washington, D.C.: October 15, 2003.
Infectious Diseases: Gaps Remain in Surveillance Capabilities of State and Local Agencies. GAO-03-1176T. Washington, D.C.: September 24, 2003.
SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003.
Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003.
Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities. GAO-03-460. Washington, D.C.: March 14, 2003.
Nursing Workforce: Emerging Nurse Shortages Due to Multiple Factors. GAO-01-944. Washington, D.C.: July 10, 2001.
Nursing Workforce: Recruitment and Retention of Nurses and Nurse Aides Is a Growing Concern. GAO-01-750T. Washington, D.C.: May 17, 2001.
Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001.

The emergence of the H5N1 avian influenza virus (also known as "bird flu") has raised concerns that it or another virus might mutate into a virulent strain that could lead to an influenza pandemic. Experts predict that a severe pandemic could overwhelm the nation's health care system, requiring the rationing of limited resources. GAO was asked to provide information on the progress of the Department of Health and Human Services' (HHS) plans for responding to a pandemic, including analyzing how HHS plans to (1) use pharmaceutical interventions to treat infected individuals and protect the critical workforce and (2) use nonpharmaceutical interventions to slow the spread of disease. To conduct this work, GAO reviewed government documents and scientific literature, and interviewed HHS officials, state and local public health officials, and subject-matter experts on pandemic response.

HHS plans to make existing federal stockpiles of pharmaceutical interventions available for distribution once a pandemic begins. These interventions would include antivirals, which are drugs to prevent or reduce the severity of infection, and pre-pandemic vaccines, which are vaccines produced prior to a pandemic and developed from influenza strains that have the potential to cause a pandemic. HHS has established a national goal of stockpiling 75 million treatment courses of antivirals in the Strategic National Stockpile and in jurisdictional stockpiles. According to HHS, these public sector stockpiles are intended to be used primarily for the treatment of individuals sick with influenza. HHS intends to oversee the distribution and administration of pre-pandemic vaccine to individuals identified as members of the critical workforce.
Members of the critical workforce--estimated to be about 20 million--include workers in sectors that are considered necessary to keep society functioning, such as health care and law enforcement personnel. HHS's strategy for using pre-pandemic vaccine is to keep society functioning until a pandemic vaccine--a vaccine specific to the pandemic-causing strain--becomes widely available. HHS anticipates that initial batches of a pandemic vaccine may not be available until 20 to 23 weeks after the start of the pandemic. As batches of the pandemic vaccine become available, HHS plans for state and local jurisdictions to provide it to members of targeted groups based on factors such as occupation and age, instead of making it available to the general public. HHS faces challenges implementing its strategy for using pharmaceutical interventions during a pandemic, including the lack of vaccine manufacturing capacity in the United States. HHS is currently making large investments to expand domestic vaccine manufacturing capacity. In 2008, HHS released guidance on prioritizing target groups for pandemic vaccine and draft guidance on antiviral use during a pandemic. HHS has not yet released draft guidance for public comment on prioritizing target groups for pre-pandemic vaccine. HHS will rely on state and local jurisdictions to utilize nonpharmaceutical interventions, such as isolation of sick individuals and voluntary home quarantine of those exposed to the pandemic strain. To assist state and local jurisdictions with implementing nonpharmaceutical interventions, HHS has developed guidance that describes the department's "community mitigation framework." The framework involves the early initiation of multiple nonpharmaceutical interventions, each of which is expected to be partially effective and to be maintained consistently throughout a pandemic. HHS faces difficulties, including helping jurisdictions develop ways to ensure community compliance. 
HHS is investing in several initiatives to increase the nation's knowledge about the general use and effectiveness of nonpharmaceutical interventions. The findings from this research will be used to update existing guidance.
The Coast Guard, which became a part of DHS on March 1, 2003, has a wide variety of both security and nonsecurity missions. (See table 1.) The Coast Guard’s equipment includes 141 cutters, approximately 1,400 small patrol and rescue boats, and about 200 aircraft. Coast Guard services are provided in a variety of locations, including ports, coastal areas, the open sea, and in other waterways like the Great Lakes and the Mississippi River. The Coast Guard’s installations range from small boat stations providing search and rescue and other services to marine safety offices that coordinate security and other activities in the nation’s largest ports. As an organization that is also part of the armed services, the Coast Guard has both military and civilian positions. At the end of fiscal year 2002, the agency had over 42,000 full-time positions—about 36,000 military and about 6,600 civilians. The Coast Guard also has about 7,200 reservists who support the national military strategy and provide additional operational support and surge capacity during emergencies, such as natural disasters. In addition, about 36,000 volunteer auxiliary personnel assist in a wide range of activities from search and rescue to boating safety education. The events of September 11th caused the Coast Guard to direct its efforts increasingly into maritime homeland security activities, highlighted by the Coast Guard’s establishing a new program area: Ports, Waterways, and Coastal Security (coastal security). Prior to September 11th, activities related to this area represented less than 10 percent of the Coast Guard’s operating budget, according to Coast Guard officials. In the fiscal year 2004 budget request, coastal security represents about one-quarter of the Coast Guard’s planned operating budget. Other mission areas, most notably drug interdiction, have declined substantially as a percentage of the operating budget. 
The emphasis the Coast Guard placed on security after September 11th has had varying effects on its level of effort among all of its missions, as measured by the extent to which multiple-mission resources (cutters, other boats, and aircraft) are used for a particular mission. The most current available data show that some security-related missions, such as migrant interdiction and coastal security, have grown significantly since September 11th. Other missions, such as search and rescue and aids to navigation, remained at essentially the same levels as they were before September 11th. However, the level of effort for other missions, most notably the interdiction of illegal drugs and fisheries enforcement, is substantially below pre-September 11th levels. Missions such as coastal security and migrant interdiction have experienced increased levels of effort. Coastal security has seen the most dramatic increase from pre-September 11th levels. (See fig. 1.) For example, it went from 2,400 resource hours during the first quarter of 1999, peaked at 91,000 hours during the first quarter of fiscal year 2002 (immediately after September 11, 2001), and most recently stood at nearly 37,000 hours for the first quarter of fiscal year 2003. In figure 1, as well as the other resource figures that follow, we have added a line developed by using linear regression techniques to show the general trend for the period. It is important to note that while such lines depict the trend in resource hours to date, they should not be taken as a prediction of future values. Other activity indicators, such as sea marshal boardings, also demonstrate an increased level of emphasis. Before September 11th, such boardings were not done; but there were over 550 boardings during the first quarter of 2003. Similarly, vessel operational control actions have risen by 85 percent since the fourth quarter of fiscal year 2001.
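The trend lines described above come from ordinary least-squares linear regression over the quarterly resource-hour series. A minimal sketch of that computation follows; the quarterly hours used here are made-up values for illustration, not the actual Coast Guard data.

```python
# Fit a least-squares trend line to a quarterly resource-hour series,
# mirroring the regression trend lines in the report's figures.
# The hours below are illustrative values, not actual Coast Guard data.
def linear_trend(ys):
    """Return fitted trend values for y observed at x = 0, 1, ..., n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
             / sum((x - mean_x) ** 2 for x in range(n)))
    intercept = mean_y - slope * mean_x
    return [slope * x + intercept for x in range(n)]

hours = [2400, 5000, 9000, 20000, 91000, 60000, 45000, 37000]
trend = linear_trend(hours)
# The fitted line summarizes the overall direction of the series to date;
# as the report cautions, it is not a prediction of future values.
print(f"trend runs from {trend[0]:.0f} to {trend[-1]:.0f} hours")
```

Note that a series can end well below its peak (as coastal security hours did) while the fitted line still slopes upward, because the regression weighs the whole period, not just the most recent quarters.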
Given the emphasis on homeland security, it is not surprising that efforts to interdict illegal immigrants have also increased. For example, during the first quarter of 2003, the level of effort in this area was 28 percent higher than it was for the comparable period in 1998. Some of the Coast Guard’s traditional missions, such as providing aids to navigation and search and rescue, have been the least affected by the increased emphasis on security. (See fig. 2.) While resource hours for both of these missions have declined somewhat since the first quarter of fiscal year 1998, the overall pattern of resource use over the past 5 years has remained consistent. Although search and rescue boats and buoy tenders were used to perform homeland security functions immediately after September 11th, these activities did not materially affect the Coast Guard’s ability to carry out its search and rescue or aids to navigation missions. Search and rescue boats were initially redeployed for harbor patrols after the September 11th terrorist attacks, but the impact on the mission was minimal because the deployments occurred during the off-season for recreational boating. Similarly, some boats that normally serve as buoy tenders—an aids to navigation function—were used for security purposes instead; but they were among the first to be returned to their former missions. For the first quarter of fiscal year 2003, the number of resource hours spent on these missions was very close to the number spent during the comparable quarter of fiscal year 1998. Performance measurement data further demonstrate the relatively minimal impact on these missions resulting from the Coast Guard’s emphasis on homeland security. For example, for search and rescue, the Coast Guard was within about half a percentage point of meeting its target for saving mariners in distress in 2002.
Likewise, data show that with respect to its aids to navigation mission, in 2002 the Coast Guard was about 1 percent from its goal of navigational aid availability. A number of missions have experienced declines in resource hours from pre-September 11th levels, including drug interdiction, fisheries enforcement (domestic and foreign), marine environmental protection, and marine safety. In particular, drug interdiction and fisheries enforcement have experienced significant declines. Compared with the first quarter of fiscal year 1998, resource hours for the first quarter of fiscal year 2003 represent declines of 60 percent for drug interdiction and 38 percent for fisheries enforcement. (See fig. 3.) In fact, resource hours for these areas were declining even before the events of September 11th, and while they briefly rebounded in early 2002, they have since continued to decline. Coast Guard officials said the recent decline in both drug interdiction and fisheries enforcement can be attributed to the heightened security around July 4, 2002, and the anniversary of the September 11th terrorist attacks, as well as the deployment of resources for military operations. They said the decline will likely not be reversed during the second quarter of fiscal year 2003 because of the diversion of Coast Guard cutters to the Middle East and the heightened security alert that occurred in February and March 2003. The reduction in resource hours over the last several years in drug enforcement is particularly telling. In the first quarter of fiscal year 1998, the Coast Guard was expending nearly 34,000 resource hours on drug enforcement; as of the first quarter of fiscal year 2003, that figure had declined to almost 14,000 hours—a reduction of about 60 percent. Also, both the number of boardings to identify illegal drugs and the amount of illegal drugs seized declined from the first quarter of fiscal year 2000. The Coast Guard’s goal of reducing the flow of illegal drugs based on the seizure rate for cocaine has not been met since 1999.
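The drug-interdiction decline cited above is simple percent-change arithmetic; a quick check with the rounded figures quoted in the text (actual Coast Guard data will differ slightly) confirms the characterization.

```python
# Percent decline in drug-enforcement resource hours, using the rounded
# figures GAO cites ("nearly 34,000" down to "almost 14,000").
before = 34_000   # first quarter of fiscal year 1998
after = 14_000    # first quarter of fiscal year 2003

decline = (before - after) / before
print(f"decline: {decline:.0%}")   # prints "decline: 59%"
```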
During our conversations with Coast Guard officials, they explained that the Office of National Drug Control Policy set this performance goal in 1997 and that although they recognize the Coast Guard is obligated to meet these goals, they believe the goals should be revised. Our review of the Coast Guard’s activity levels in domestic fishing shows that U.S. fishing vessel boardings and significant violations identified are both down since 2000. Similarly, the Coast Guard interdicted only 19 percent as many foreign vessels as it did in 2000. The reduced level of effort dedicated to these two missions is likely a factor in the Coast Guard’s inability to meet its performance goals in these two areas. For instance, in 2002 the Coast Guard did not meet its goal of detecting foreign fishing vessel incursions, and while there is no target for domestic fishing violations, there were fewer boardings and fewer violations detected in 2002 than in 2000. Recently, the Coast Guard Commandant stated that the Coast Guard intends to return the level of resources directed to law enforcement missions (drug interdiction, migrant interdiction, and fisheries enforcement) to 93 percent of pre-September 11th levels (using a baseline of the 8 quarters prior to September 11, 2001) by the end of 2003 and 95 percent by the end of 2004. However, in the environment of heightened security and the continued deployment of resources to the Middle East, these goals will likely not be achieved, especially for drug interdiction and fisheries enforcement, which are currently far below previous activity levels. The Coast Guard’s budget request for fiscal year 2004 does not contain initiatives or proposals that would substantially alter the current levels of effort among missions. The request for $6.8 billion represents an increase of about $592 million, or about 9.6 percent in nominal dollars, over the enacted budget for fiscal year 2003.
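The Commandant's restoration targets can be expressed against the 8-quarter pre-September 11th baseline described above. The sketch below shows the computation; the quarterly law-enforcement hours are invented for illustration and are not Coast Guard figures.

```python
# Hypothetical sketch of the restoration targets: average the 8 quarters
# of law-enforcement resource hours before September 11, 2001, then take
# 93 percent (end of 2003) and 95 percent (end of 2004) of that baseline.
pre_911_quarters = [33_000, 31_500, 30_800, 32_200,
                    29_900, 28_700, 27_500, 26_400]  # made-up hours

baseline = sum(pre_911_quarters) / len(pre_911_quarters)   # 30,000 hours
goal_2003 = 0.93 * baseline
goal_2004 = 0.95 * baseline
print(f"baseline {baseline:,.0f}; 2003 goal {goal_2003:,.0f}; 2004 goal {goal_2004:,.0f}")
```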
The majority of this increase covers pay increases for current or retired employees or continues certain programs already under way, such as upgrades to information technology. About $168.5 million of the increase would fund new initiatives, most of which relate either to homeland security or to search and rescue. As such, these initiatives do not represent substantial shifts in current levels of effort among missions. However, the 2004 budget request does address a long-standing congressional concern about the Coast Guard’s search and rescue mission. The search and rescue initiative is part of a multiyear effort to address shortcomings in search and rescue stations and command centers. In September 2001, the Department of Transportation Office of the Inspector General reported that readiness at search and rescue stations was deteriorating. For example, staff shortages at most stations required crews to work an average of 84 hours per week, well above the standard (68 hours) established to limit fatigue and stress among personnel. The initiative seeks to provide appropriate staffing and training to meet the standards of a 12-hour watch and a 68-hour work week. The Congress appropriated $14.5 million in fiscal year 2002 and $21.7 million in fiscal year 2003 for this initiative. The increased amount requested for fiscal year 2004 ($26.3 million) for search and rescue would pay for an additional 390 full-time search and rescue station personnel and for 28 additional instructors at the Coast Guard’s motor lifeboat and boatswain’s mate schools. The Coast Guard faces fundamental challenges in balancing resource use among its missions and accomplishing everything that has come to be expected of it. We have already described how the Coast Guard has not been able, in its current environment, to both assimilate its new homeland security responsibilities and restore levels of effort for all other missions. 
Several other challenges further threaten the Coast Guard’s ability to balance these diverse missions. For example, the Coast Guard’s Deepwater Project has already experienced delays in delivery of key assets and could face additional delays if future funding falls behind what the Coast Guard had planned. Such delays could seriously jeopardize the Coast Guard’s ability to carry out a number of security and nonsecurity missions. Similarly, for the foreseeable future, the Coast Guard must absorb the cost of implementing a variety of newly mandated homeland security tasks by taking resources from ongoing activities. Funding for these tasks is not provided in the fiscal year 2004 budget request. The Coast Guard also faces the constant possibility that future terror alerts, terrorist attacks, or military actions will require it to shift additional resources to homeland security missions. Finally, the Coast Guard’s transition to DHS brings additional challenges, particularly with respect to establishing effective communication links and building partnerships both within DHS and with external agencies. Such challenges raise serious concerns about the Coast Guard’s ability to accomplish all of its responsibilities and balance the level of effort among all missions in an environment where it strives to be “all things to all people,” and attempts to do so as one of many agencies in a cabinet department whose primary mission is homeland security. In past work, we have pointed to several steps that the Coast Guard needs to take in such an environment. These include continuing to address opportunities for operational efficiency, especially through more partnering, and developing a comprehensive blueprint or strategy for balancing and monitoring resource use across all of its missions. Under current funding plans, the Coast Guard faces significant potential delays and cost increases in its $17 billion Integrated Deepwater Project.
This project is designed to modernize the Coast Guard’s entire fleet of cutters, patrol boats, and aircraft over a 20-year period. Given the way the Coast Guard elected to carry out this project, its success is heavily dependent on receiving full funding every year. So far, that funding has not materialized as planned. Delays in the project, which have already occurred, could jeopardize the Coast Guard’s future ability to effectively and efficiently carry out its missions, and its law enforcement activities— that is, drug and migrant interdiction and fisheries enforcement—would likely be affected the most, since they involve extensive use of deepwater cutters and aircraft. Under the project’s contracting approach, the responsibility for Deepwater’s success lies with a single systems integrator and its contractors for a period of 20 years or more. Under this approach, the Coast Guard has started on a course potentially expensive to alter. It is based on having a steady, predictable, annual funding stream of $500 million in 1998 dollars over the next 2 to 3 decades. Already the funding provided for the project is less than the amount the Coast Guard planned for. The fiscal year 2002 appropriation for the project was about $28 million below the planned level, and the fiscal year 2003 appropriated level was about $90 million below the planning estimate. Further, the President’s fiscal year 2004 budget request for the Coast Guard is not consistent with the Coast Guard’s deepwater funding plan. If the requested amount of $500 million for fiscal year 2004 is appropriated, it would represent another shortfall of $83 million, making the cumulative shortfall about $202 million in the project’s first 3 years, according to Coast Guard data. If appropriations hold steady at $500 million (in nominal dollars) through fiscal year 2008, the Coast Guard estimates that the cumulative shortfall will reach $626 million. 
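The cumulative Deepwater shortfall follows from simple addition of the per-year gaps between planned and appropriated funding. The sketch below reproduces the first three years using the amounts quoted in the text.

```python
# Deepwater funding shortfalls, per the figures in the text
# (in millions of dollars).
shortfalls = {
    2002: 28,   # appropriation about $28 million below the planned level
    2003: 90,   # about $90 million below the planning estimate
    2004: 83,   # the requested $500 million would fall $83 million short
}

cumulative = sum(shortfalls.values())
print(f"cumulative shortfall through fiscal year 2004: ${cumulative} million")
# -> $201 million; the Coast Guard's "about $202 million" reflects
#    unrounded per-year amounts.
```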
The shortfalls in the last 2 fiscal years (2002 and 2003) and their potential persistence could have serious consequences. The main impact is that it would take longer and cost more in the long run to fully implement the deepwater system. For example, due to funding shortfalls experienced to date, the Coast Guard has delayed the introduction of the new maritime patrol aircraft by 19 months and slowed the conversion and upgrade program for its 110-foot patrol boats. According to the Coast Guard, if the agency continues to receive funding at levels less than planned, new asset introductions—and the associated retirement of costly, less capable Coast Guard resources—will continue to be deferred. The cost of these delays will be exacerbated by the accompanying need to invest additional funds in maintaining current assets beyond their planned retirement date because of the delayed introduction of replacement capabilities and assets, according to the Coast Guard. For example, delaying the maritime patrol aircraft will likely require some level of incremental investment to continue safe operation of the current HU-25 jet aircraft. Similarly, a significant delay in the scheduled replacement for the existing 270-foot medium endurance cutter fleet could require an unplanned and expensive renovation for this fleet. System performance—and the Coast Guard’s capability to effectively carry out its mission responsibilities—would also likely be impacted if funding for the Deepwater Project does not keep pace with planning estimates. For example, Coast Guard officials told us that conversions and upgrades for its 110-foot patrol boats would extend its operating hours from about 1,800 to 2,500 per year. Once accomplished, this would extend the time these boats could devote to both security and nonsecurity missions. 
As with the maritime patrol aircraft, reductions in funding levels for the project have slowed the conversions and upgrades for these vessels, which, in turn, has prevented enhancements in mission performance that newer vessels would bring. Coast Guard officials also said that with significant, continuing funding shortfalls delaying new asset introductions, at some point the Coast Guard would be forced to retire some cutters and aircraft—even as demand for those assets continues to grow. For example, in 2002, two major cutters and several aircraft were decommissioned ahead of schedule due to their deteriorated condition and high maintenance costs. The Coast Guard has also been tasked with a myriad of new homeland security requirements, but funding to implement them is not provided in either the enacted fiscal year 2003 budget or the fiscal year 2004 budget request. As a result, the Coast Guard will have to meet many of these requirements by pulling resources from other activities. Under the Maritime Transportation Security Act (MTSA), signed into law in November 2002, the Coast Guard must accomplish a number of security-related tasks within a matter of months and sustain them over the long term. For example, MTSA requires the Coast Guard to be the lead agency in conducting security assessments, developing plans, and enforcing specific security measures for ports, vessels, and facilities. In the near term, the Coast Guard must prepare detailed vulnerability assessments of vessels and facilities it identifies to be at high risk of terrorist attack. It must also prepare a National Maritime Transportation Security Plan that assigns duties among federal departments and agencies and specifies coordination with state and local officials—an activity that will require substantial work by Coast Guard officials at the port level.
The Coast Guard must also establish plans for responding to security incidents, including notifying and coordinating with local, state, and federal authorities. Because the fiscal year 2004 budget request was prepared before MTSA was enacted, it does not specifically devote funding to most of these port security responsibilities. Coast Guard officials said that they will have to absorb costs related to developing, reviewing, and approving plans, including the costs of training staff to monitor compliance, within their general budget. Coast Guard officials expect that the fiscal year 2005 budget request will contain funding to address all MTSA requirements. In the meantime, officials said that the Coast Guard would have to perform most of its new port security duties without additional appropriation, and that the funds for these duties would come from its current operations budget. The costs of these new responsibilities, as well as the extent to which they will affect resources for other missions, are not known. Security alerts, as well as actions needed in the event of an actual terrorist attack, can also affect the extent to which the Coast Guard can devote resources to missions not directly related to homeland security. For example, Coast Guard officials told us that in the days around September 11, 2002, when the Office of Homeland Security raised the national threat level from “elevated” to “high risk,” the Coast Guard reassigned cutters and patrol boats in response. In February 2003, when the Office of Homeland Security again raised the national threat level to high risk, the Coast Guard repositioned some of its assets involved in offshore law enforcement missions, using aircraft patrols in place of some cutters that were redeployed to respond to security-related needs elsewhere. 
While these responses testify to the tremendous flexibility of a multi-mission agency, they also highlight what we found in our analysis of activity-level trends—when the Coast Guard responds to immediate security needs, fewer resources are available for other missions. The Coast Guard’s involvement in the military buildup for Operation Enduring Freedom in the Middle East further illustrates how such contingencies can affect the availability of resources for other missions. As part of the buildup, the Coast Guard has deployed eight 110-foot boats, two high-endurance cutters, four port security units, and one buoy tender to the Persian Gulf. These resources have come from seven different Coast Guard districts. For example, officials from the First District told us they sent four 110-foot patrol boats and three crews to the Middle East. These boats are multi-mission assets used for fisheries enforcement, law enforcement, search and rescue, and homeland security operations. In their absence, officials reported, the First District is using other boats previously devoted to other tasks. For instance, buoy tenders have taken on some search and rescue functions, and buoy tenders and harbor tug/icebreakers are escorting high-interest vessels. Officials told us that these assets do not have capabilities equivalent to the patrol boats but have been able to perform the assigned mission responsibilities to date. The creation of DHS is one of the largest, most complex restructurings ever undertaken, and the Coast Guard, as one of many agencies joining the department, faces numerous challenges, including organizational, human capital, acquisition, process, and technology issues. One particularly formidable challenge involves establishing effective communication links and building partnerships both within DHS and with external organizations.
While most of the 22 agencies transferred to DHS report to under secretaries for the department’s various directorates, the Coast Guard remains a separate entity reporting directly to the Secretary of DHS. According to Coast Guard officials, the Coast Guard has important functions that will require coordination and communication with all of these directorates, particularly the Border and Transportation Security Directorate. For example, the Coast Guard plays a vital role with Customs, the Immigration and Naturalization Service, the Transportation Security Administration, and other agencies that are organized in the Directorate of Border and Transportation Security. Because the Coast Guard’s homeland security activities require interface with these and a diverse set of other agencies organized within several DHS directorates, communication, coordination, and collaboration with these agencies are paramount to achieving department-wide results. Effective communication and coordination with agencies outside the department are also critical to achieving homeland security objectives, and the Coast Guard must maintain numerous relationships with other public and private sector organizations outside DHS. For example, according to Coast Guard officials, the Coast Guard will remain an important participant in the Department of Transportation’s (DOT) strategic planning process, since the Coast Guard is a key agency in helping to maintain the maritime transportation system. Also, the Coast Guard maintains navigation systems used by DOT agencies such as the Federal Aviation Administration. In the homeland security area, coordination efforts will extend well beyond our borders to include international agencies of various kinds. For example, the Coast Guard, through its former parent agency, DOT, has been spearheading U.S. involvement in the International Maritime Organization.
This is the organization that, following the September 11th attacks, began determining the new international regulations needed to enhance ship and port security. Also, our work assessing efforts to enhance our nation’s port security has underscored the formidable challenges that exist in forging partnerships and coordination among the myriad of public and private sector and international stakeholders. In previous work, we have examined some of the implications of the Coast Guard’s new operating environment on the agency’s ability to fulfill its various missions. This work, like our testimony today, has pointed to the difficulty the Coast Guard faces in devoting additional resources to nonsecurity missions, despite the additional funding and personnel the agency has received. In particular, we have recommended that two actions be taken as a more candid acknowledgment of the difficulty involved. First, opportunities for increased operational efficiency need to be explored. Over the past decade, we and other outside organizations, along with the Coast Guard, have studied Coast Guard operations to determine where greater efficiencies might be found. These studies have produced a number of recommendations, such as shifting some responsibilities to other agencies. One particular area that has come to the forefront since September 11th is the Coast Guard’s potential ability to partner with other port stakeholders to help accomplish various security and nonsecurity activities involved in port operations. Some effective partnerships have been established, but the overall effort has been affected by variations in local stakeholder networks and limited information-sharing among ports. Second, a comprehensive blueprint or strategy is needed for setting and assessing levels of effort and mission performance.
One important effort that has received relatively little attention, while the Coast Guard has understandably put its homeland security responsibilities in place, is the development of a plan that proactively addresses how the Coast Guard should manage its various missions in light of its new operating reality. The Coast Guard’s adjustment to its new post-September 11th environment is still largely in process, and sorting out how traditional missions will be fully carried out alongside new security responsibilities will likely take several years. But it is important to complete this plan and address in it key elements and issues so that it is both comprehensive and useful to decision makers who must make difficult policy and budget choices. Without such a blueprint, the Coast Guard also runs the risk of continuing to communicate that it will try to be “all things to all people” when, in fact, it has little chance of actually being able to do so. The Coast Guard has acknowledged the need to pursue such a planning effort, and the Congress has directed it to do so. Coast Guard officials told us that as part of the agency’s transition to DHS, they are updating the agency’s strategic plan, including plans to distribute all resources in a way that can sustain a return to previous levels of effort for traditional missions. In addition, the Congress placed a requirement in MTSA for the Coast Guard to submit a report identifying mission targets, and steps to achieve them, for all Coast Guard missions for fiscal years 2003 to 2005. However, this mandate is not specific about the elements that the Coast Guard should address in the report. To be meaningful, this mandate should be addressed with thoroughness and rigor and in a manner consistent with our recent recommendations; it requires a comprehensive blueprint that embodies the key steps and critical practices of performance management. 
Specifically, in our November 2002 report on the progress made by the Coast Guard in restoring activity levels for its key missions, we recommended an approach consisting of a long-term strategy outlining how the Coast Guard sees its resources—cutters, boats, aircraft, and personnel—being distributed across its various missions; a time frame for achieving this desired balance; and reports with sufficient information to keep the Congress apprised not only of how resources were being used, but of what was being accomplished. The Coast Guard agreed that a comprehensive strategy was needed and believes that it is beginning the process of developing one. Table 2 provides a greater explanation of what this approach or blueprint would entail. The events of recent months heighten the need for such an approach. During this time, the budgetary outlook has continued to worsen, further emphasizing the need to look carefully at the results being produced by the nation’s large investment in homeland security. The Coast Guard must be fully accountable for investments in its homeland security missions and able to demonstrate what these security expenditures are buying and their value to the nation. At the same time, recent events also demonstrate the extent to which highly unpredictable homeland security events, such as heightened security alerts, continue to influence the amount of resources available for performing other missions. The Coast Guard needs a plan that will help the agency, the Congress, and the public understand and effectively deal with trade-offs and their potential impacts in such circumstances.

The Coast Guard is one of 22 agencies being placed in the new Department of Homeland Security (DHS). With its key roles in the nation's ports, waterways, and coastlines, the Coast Guard is an important part of enhanced homeland security efforts.
But it also has important nonsecurity missions, such as search and rescue, fisheries and environmental protection, and drug and migrant interdiction. GAO has conducted a number of reviews of the Coast Guard's missions and was asked to testify about the Coast Guard's most recent level of effort for its various missions and the major operational and organizational challenges facing the agency during its transition into the newly created DHS. Data on the most recent levels of effort for the Coast Guard's various missions show clearly the dramatic shifts that have occurred among its missions since the September 11, 2001, attacks. Predictably, levels of effort related to homeland security remain at much higher levels than before September 11th. Other missions, such as search and rescue, have remained at essentially the same levels. In contrast, several other missions--most notably fisheries enforcement and drug interdiction--dropped sharply after September 11th and remain substantially below historical levels. Continued homeland security and military demands make it unlikely that the agency, in the short run, can increase efforts in the missions that have declined. Further, the fiscal year 2004 budget request contains little that would substantially alter the existing levels of effort among missions. The Coast Guard faces fundamental and daunting challenges during its transition to the new department. Delays in the planned modernization of cutters and other equipment, responsibility for new security-related tasks as directed under the Maritime Transportation Security Act (MTSA), and mandatory responses to unexpected events, such as terrorist attacks or extended terror alerts, will have an impact on the Coast Guard's ability to meet its new security-related responsibilities while rebuilding its capacity in other missions. 
Also, as one of the agencies being merged into the new department, the Coast Guard must deal with a myriad of organizational, human capital, acquisition, and technology issues. The enormity of these challenges requires the development of a comprehensive blueprint or strategy that addresses how the Coast Guard should balance and monitor resource use among its various missions in light of its new operating reality.
B-2 operational requirements specify that the weapon system have “low-observable” (stealth) characteristics and sufficient range and payload capability to deliver precision-guided conventional or nuclear weapons anywhere in the world with enhanced survivability. The B-2 combines conventional and state-of-the-art technology, such as special shaping and radar-absorbing materials, to achieve low-observability characteristics, high aerodynamic efficiency, and a large payload capacity. The blending of these technologies makes the aircraft complex and costly to develop, produce, and in some respects maintain. In the early 1990s, the number of B-2s to be acquired was reduced from 132 to 20 operational aircraft. The 20 aircraft include 15 production aircraft and 5 of 6 test aircraft that are to be modified to a fully capable operational configuration. In March 1996, the President directed that the one remaining test aircraft be upgraded to a fully capable operational configuration, bringing the total operational B-2s to be acquired to 21. B-2 development started in 1981. Production of long lead-time aircraft components began in 1986, and flight testing was initiated in 1989. The lengthy development and test program, which has been implemented concurrently with the production program for about 11 years, required the Air Force to devise a mechanism for initially accepting partially capable aircraft until their full capability could be demonstrated in the test program. Therefore, the Air Force agreed to accept the 15 production aircraft in 3 configurations—10 in a training configuration, 3 in an interim configuration, and 2 in the fully capable configuration known as block 30. The block 30 configuration is planned to be the first fully capable configuration that would meet all the essential employment capabilities defined by the Air Force.
All aircraft not delivered in the block 30 configuration, including test aircraft, have to be modified extensively to make them fully capable. Some of the aircraft in a training configuration have been modified to an interim configuration. The modification efforts began in 1995 and are scheduled to be complete in July 2000. The total production period for the 21 aircraft, including modifications to bring all B-2s into the fully capable configuration, is expected to be about 14 years. Flight testing was planned to take 4 years but has taken about 8 years and is not yet completed. The Air Force extended the estimated completion of flight testing from July 1997 to March 1998. The Air Force’s estimate of the total program cost for the B-2 program has changed less than 1 percent since 1994; however, the estimate has been affected by changes made by both Congress and the Air Force. Table 1.1 shows the Air Force’s 1994-96 cost estimates for the development, procurement, and military construction of the B-2 as reported in annual selected acquisition reports. Through fiscal year 1997, the Air Force was appropriated $43,178 million, or 96 percent, of the $44,754 million total program estimate. This leaves $1,576 million to be appropriated for fiscal years 1998-2004. The December 1996 estimate included costs to complete the program for 20 operational B-2s and other changes in the program. In the last 3 fiscal years, Congress added $734 million to the B-2 program—$125 million to preserve the B-2 industrial base, $493 million to upgrade the first test aircraft to operational status, and $116 million to enhance the block 30 capabilities. Enhanced capabilities include making the B-2 capable of launching the Joint Stand Off Weapon and a near-precision conventional penetrating bomb. The Air Force changes decreased various elements of the estimated development and procurement costs. 
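The appropriation figures above are internally consistent, as a quick arithmetic check shows.

```python
# Consistency check of the B-2 appropriation figures quoted in the text
# (amounts in millions of dollars).
total_estimate = 44_754   # total program estimate
appropriated = 43_178     # appropriated through fiscal year 1997

remaining = total_estimate - appropriated
share = appropriated / total_estimate
print(f"remaining: ${remaining} million; share appropriated: {share:.0%}")
# -> remaining: $1576 million; share appropriated: 96%
```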
Those decreases exceeded congressional additions, resulting in the overall net reduction in the total B-2 cost estimate. For example, between fiscal year 1997 and 1998, estimates for B-2 spares, support, and nonrecurring air vehicle cost decreased over $900 million. Spare parts estimates were reduced by $358 million because the Air Force now plans to fly fewer and shorter aircraft sorties and because the methodology for computing spare parts requirements changed. Interim contractor support estimates were also reduced by $142 million because parts reliability, according to the Air Force, has been better than anticipated. Other support decreases totaling $170 million covered peculiar support equipment, data, and training items. In addition to changes in the B-2 support estimate, the Air Force decreased its estimate for nonrecurring air vehicle cost by $237 million. According to the Air Force, the estimate of development cost reported in the December 1996 B-2 Selected Acquisition Report (included in table 1.1) and the fiscal year 1998 President’s budget (app. I) is understated by $89 million. Air Force officials said that without these funds, two of the test aircraft would not be upgraded to fully capable aircraft, leaving only 19 fully capable B-2s. B-2 estimated costs could increase if (1) the flight test program is extended beyond March 1998, (2) more B-2 performance deficiencies are identified during the time remaining in the acquisition program, and (3) additional development and procurement activities are initiated to better maintain the low-observable features of the B-2s. The flight test program was not fully completed as scheduled on July 1, 1997, and the Air Force plans to extend flight testing with one test aircraft through March 1998. The Air Force is currently defining detailed testing that will be required and has included $28 million in the fiscal year 1998 budget to cover the extended flight test program. 
Any additional extension of testing, however, could increase the estimated B-2 cost. Some of the areas to be further tested are terrain-following/terrain-avoidance radar performance in the rain, mission effectiveness tests of the low-observable features, ground and flight tests of the environmental control system and auxiliary power unit, and certain tests of the defensive management system. We plan to report on the results of B-2 testing after the Air Force issues reports scheduled for late 1997 and early 1998. Working on flight tests, aircraft production, and modifications concurrently has created the need for further corrections of deficiencies after fully capable aircraft are delivered and could cause development costs to increase. As of May 1997, Air Force officials had identified 13 corrections that cannot be incorporated into up to nine aircraft during production or during the modification process. They estimate that another 60 deficiencies could be identified that could affect the B-2s. These officials added that new corrections that cannot be incorporated during the modification process would be incorporated by retrofitting the aircraft at some future time. The cost estimate for production includes over $500 million in reserves (fiscal year 1993 and prior year funds) that are available for cost overruns and other anticipated costs. However, the development estimate includes only $12 million in reserves to correct deficiencies in the test aircraft. Air Force officials said that if significantly more or costly deficiencies are identified, development costs could increase. The Air Force has concluded it could not effectively deploy B-2s to forward operating locations without sheltering the aircraft to preserve and maintain their low-observable features.
Accordingly, if permanent or temporary shelters must be developed and built at selected forward operating locations or additional support equipment must be acquired to meet deployment and maintenance requirements, additional costs will be incurred. According to the Air Force, the B-2 achieved initial operational capability on April 1, 1997, with interim aircraft capable of flying nuclear and limited conventional missions. The interim B-2 is supposed to be capable of participating in nuclear or conventional warfare either from its main operating base at Whiteman Air Force Base, Missouri, or from a forward operating location outside the continental United States. While the B-2’s performance met requirements for initial operations, the aircraft are unable to meet intended deployment requirements because some low-observable features require substantial maintenance and the aircraft are more sensitive to climate and moisture than expected. As a result, the Air Force has eliminated the deployment requirement for interim aircraft and is evaluating potential actions to allow deployment when fully capable aircraft are delivered. Full operational capability of the B-2 is planned to be achieved in 1999. The Air Force demonstrated that interim B-2 aircraft can carry and deliver unguided Mk 84 bombs or the precision-guided Global Positioning System (GPS) aided munition (GAM) in the conventional role or B-83/B-61 nuclear weapons in the nuclear role. Reports of flight tests and demonstrations indicated the GAM to be an effective all-weather weapon in attacking fixed targets with near-precision accuracy. In one demonstration, 3 B-2s destroyed 16 targets using 16 GAMs dropped from over 40,000 feet. In addition, the interim aircraft have automatic terrain-following capability as low as 600 feet and some of the capabilities of the planned defensive management system.
According to Air Force officials, the demonstrated capabilities are more than adequate to perform the mission defined for the interim configuration when operating from Whiteman Air Force Base, the B-2’s main operating base. The Air Force decided it was unrealistic to deploy the B-2 without shelters, as planned, because some low-observable materials are not as durable as expected and require lengthy maintenance, some in an environmentally controlled shelter after each flight. In addition, B-2s must be kept in shelters because of their sensitivity to moisture, water, and other severe climatic conditions. Air Force operational requirements for the B-2 intended for both the interim and fully capable B-2s to be capable of deploying to forward operating locations, without shelters, in all types of weather and climates. The Air Force is reviewing specific B-2 deployment requirements and working to resolve deployment-related problems by the time the B-2s are scheduled to be fully capable in 1999. The operational test report for the interim aircraft stated the aircraft need frequent and lengthy maintenance and are sensitive to extreme climates and moisture. Tests showed that some low-observable materials on the aircraft were damaged each time the aircraft flew and that repair of those materials accounted for 39 percent of the 80 maintenance man-hours per flight hour experienced by the B-2 during flight testing. This is about three times greater than the next largest contributor to maintenance man-hours, which was aircraft structures. The current goal for total maintenance man-hours per flying hour is 60 hours, and the ultimate goal is 50 hours. The actual B-2 maintenance man-hours per flying hour at Whiteman Air Force Base averaged 124 hours over 12 months ending in March 1997. A major factor in maintenance of low-observable materials is the long time required to repair the damaged materials and aircraft surfaces. 
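The man-hour figures cited above are internally consistent, as a quick arithmetic sketch shows (the "next largest contributor" value is inferred from the "about three times greater" comparison in the text):

```python
# Maintenance figures from the report.
total_mmh_per_flight_hour = 80   # B-2 maintenance man-hours per flight hour in testing
lo_repair_share = 0.39           # share attributed to low-observable materials repair

lo_repair_hours = lo_repair_share * total_mmh_per_flight_hour
print(round(lo_repair_hours, 1))   # 31.2 man-hours per flight hour

# "About three times greater than the next largest contributor" implies
# aircraft structures accounted for roughly a third of that figure.
print(round(lo_repair_hours / 3, 1))   # 10.4

# Actual experience at Whiteman AFB (124 hours) versus the current goal (60 hours).
print(round(124 / 60, 1))   # 2.1, i.e., about double the goal
```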
During operational testing of the interim configuration, low-observable materials took from 30 to 80 hours to repair and cure, and the processes require a shelter with a temperature and humidity controlled environment for proper curing. Problems with low-observable materials have also affected the percentage of time the B-2 was partially or fully capable of completing a mission, which was significantly less when low observability was considered. When low observability was not considered, the mission-capable rate was 66 percent for a 12-month period ending March 1997. However, when low-observability problems were considered for the same period of time, the rate dropped significantly to 26 percent. Testing indicated that B-2s are also sensitive to extreme climates, water, and humidity—exposure to water or moisture can damage some of the low-observable enhancing surfaces on the aircraft. Further, exposure to water or moisture that causes water to accumulate in aircraft compartments, ducts, and valves can cause systems to malfunction. If accumulated water freezes, it can take up to 24 hours to thaw and drain. Air Force officials said it is unlikely that the aircraft’s sensitivity to moisture and climates or the need for controlled environments to fix low-observability problems will ever be fully resolved, even with improved materials and repair processes. Therefore, if B-2s are to be deployed, some form of aircraft sheltering at a forward operating location will likely become a requirement in the future. Air Force test officials stated that maintenance of low-observable features is an issue that requires significant further study and that the percentage of maintenance hours required to repair low-observable materials would increase even more before there are reductions. They said technological improvements in materials and repair processes will be required. 
Air Combat Command considers low-observable maintainability to be its number one supportability issue, and the Air Force has efforts underway to develop new materials, procedures, and support equipment. It is currently changing some of the materials on the aircraft to improve durability and reduce repair times. It has also established procedures to monitor conditions of low-observable materials on the operational aircraft and developed a model that characterizes the operational impacts of material degradations so that repairs can be prioritized relative to the operational requirements of the B-2s. In commenting on a draft of this report, the Department of Defense generally agreed with the report. The Department’s comments are presented in their entirety in appendix II, along with our evaluation of them. To identify cost issues, we reviewed annual cost and budgetary estimates, financial and management reports, contract cost reports, program schedules and plans, and other documents. We compared annual estimates from 1995 to 1997, identifying increases and decreases and the basis for the changes. We interviewed Air Force, Defense Contract Management Command, and contractor financial and technical managers to obtain explanations and information on cost issues and risks remaining in the B-2 program that were not included in the official reports and documents reviewed. To identify operational issues, we reviewed Air Force B-2 contract and operational requirements documents and operational test reports. We discussed deficiencies and planned development and corrective actions with Air Force B-2 Program, Test, and Operational Command officials to determine the nature and extent of problems, the impact of problems on operations, and schedules for achieving full capability. We performed our review from November 1996 through July 1997 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Secretaries of Defense and the Air Force, the Director of the Office of Management and Budget, and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. The following are GAO’s comments on the Department of Defense’s (DOD) letter, dated July 14, 1997. 1. DOD officials told us they plan to address the funding shortfall during the fiscal year 1999 DOD planning and budgeting process, which is incomplete at this time. 2. Cost growth risks will continue until the extent of changes needed as a result of the remaining test effort has been defined by the Air Force and it has some assurance that needed changes can be completed with existing program resources. 3. DOD’s comments addressed 13 deficiencies already identified but did not address the potential impact of an additional 60 deficiencies that DOD suggested could occur. Although the cost estimate for development and production includes some provisions for correcting deficiencies that have not yet been defined, the amounts included are intended to accommodate corrections of deficiencies that are relatively minor. If the Air Force identifies any deficiencies that involve significant costs to correct, cost estimates could increase. 4. Design requirements for the B-2 include provisions for the B-2 aircraft to be deployed, without shelters, in all types of temperatures and climates. The operational test report for the interim B-2 concluded the B-2 must be sheltered or exposed only to the most benign environments (low humidity, no precipitation, moderate temperatures). According to B-2 Combined Test Force officials, permanent shelters at deployed locations are required.
Therefore, while DOD commented that it is possible to deploy the B-2, it appears that effective operations from a forward operating location will require additional facilities and equipment not included in the original plan. The Air Force is still working to identify these additional requirements. B-2 Bomber: Status of Efforts to Acquire 21 Operational Aircraft (GAO/NSIAD-97-11, Oct. 2, 1996). B-2 Bomber: Status of Cost, Development, and Production (GAO/NSIAD-95-164, Aug. 4, 1995). B-2 Bomber: Cost to Complete 20 Aircraft Is Uncertain (GAO/NSIAD-94-217, Sept. 8, 1994). | Pursuant to a legislative requirement, GAO reviewed the B-2 bomber, focusing on the current status of cost and operational issues. 
GAO noted that: (1) the total cost of the B-2 appears to have stabilized; (2) the Air Force has reported that the total estimated B-2 acquisition costs (development, procurement, and military construction) decreased from $44,946 million in early 1995 to $44,754 million in early 1997; (3) the estimated cost declined even though Congress added new requirements to the B-2 program and provided additional funds of $734 million in fiscal years 1995, 1996, and 1997; (4) Air Force officials advised GAO that the $44,754 million in cost reported to Congress was understated by $89 million; (5) they said that the impact of the understatement would be that two of the test aircraft would not be fully upgraded to block 30, making them less than fully capable; (6) through fiscal year 1997, Congress appropriated funds for about 96 percent of the estimated total cost of $44,754 million; (7) although the cost estimate has not changed substantially since 1995, costs could increase if: (a) the flight test program is extended beyond March 1998; (b) more performance deficiencies than predicted are identified during the remaining portions of the acquisition program; and (c) unplanned development and procurement activities become necessary to better maintain the low-observable features of the B-2s; (8) the Air Force declared on April 1, 1997, that the B-2s in an interim configuration had achieved initial operational capability; (9) however, the Air Force decided it was unrealistic to plan on deploying the interim aircraft to forward operating locations because of difficulties being experienced in maintaining low-observable characteristics at the B-2's main operating base; and (10) the Air Force is reviewing specific B-2 deployment requirements and working to resolve deployment-related problems by the time the B-2s are scheduled to be fully capable in 1999. |
CMS’s quality bonus payment demonstration includes several key changes from the quality bonus system established by PPACA. Specifically, PPACA required CMS to provide quality bonus payments to MA plans that achieve 4, 4.5, or 5 stars on a 5-star quality rating system developed by CMS. In contrast, the demonstration significantly increases the number of plans eligible for a bonus, enlarges the size of payments for some plans, and accelerates payment phase-in. In announcing the demonstration, CMS stated that the demonstration’s research goal is to test whether scaling bonus payments to the number of stars MA plans receive under the quality rating system leads to larger and faster annual quality improvement for plans at various star rating levels compared with what would have occurred under PPACA. In March 2012, we reported that CMS’s Office of the Actuary (OACT) estimated that the demonstration would cost $8.35 billion over 10 years—an amount that is at least seven times larger than that of any other Medicare demonstration conducted since 1995 and greater than the combined budgetary effect of all those demonstrations. The cost is largely for quality bonus payments more generous than those prescribed in PPACA. Plans are required to use these payments to provide their enrollees enhanced benefits, lower premiums, or reduced cost-sharing. We also found that the additional Medicare spending will mainly benefit average-performing plans—those receiving 3 and 3.5-star ratings—and that about 90 percent of MA enrollees in 2012 and 2013 would be in plans eligible for a bonus payment. As we noted in our report, while a reduction in MA payments was projected to occur as a result of PPACA’s payment reforms, OACT estimated that the demonstration would offset more than 70 percent of these payment reductions projected for 2012 alone and more than one-third of the reductions for 2012 through 2014.
Our March 2012 report also identified several shortcomings of the demonstration’s design that preclude a credible evaluation of its effectiveness in achieving CMS’s stated research goal. Notably, the bonus payments are based largely on plan performance that predates the demonstration. In particular, all of the performance data used to determine the 2012 bonus payments and nearly all of the data used to determine the 2013 bonus payments were collected before the demonstration’s final specifications were published. In addition, under the demonstration’s design, the bonus percentages are not continuously scaled. For example, in 2014, plans with 4, 4.5, and 5 stars will all receive the same bonus percentage. Finally, since all plans may participate in the demonstration, there is no adequate comparison group for determining whether the demonstration’s bonus structure provided better incentives for improving quality than PPACA’s bonus structure. We therefore concluded that it is unlikely that the demonstration will produce meaningful results. Given the findings from our program review of the demonstration’s features, we recommended in our March 2012 report that the Secretary of Health and Human Services (HHS), who heads the agency of which CMS is a part, cancel the demonstration and allow the MA quality bonus payment system authorized by PPACA to take effect. We further recommended that if that bonus payment system does not adequately promote quality improvement, HHS should determine ways to modify it, which could include conducting an appropriately designed demonstration. HHS did not agree. It stated that, in contrast to PPACA, the demonstration establishes immediate incentives for quality improvement throughout the range of quality ratings. Regarding its proposed evaluation of the demonstration, HHS did not consider the timing of data collection to be a problem and said that the comparison group it would use would enable it to determine the demonstration’s impact.
We continue to believe that, given the problems we cited, the demonstration should be canceled. In addition to our March 2012 report, we sent a letter on July 11, 2012, to HHS regarding CMS’s authority to conduct the demonstration. In our letter, we stated that CMS had not established that the demonstration met the criteria set forth in the Social Security Amendments of 1967, as amended—the statute under which CMS is conducting the demonstration. Specifically, the statute authorizes the Secretary to conduct demonstration projects to determine whether changes in payment methods would increase the efficiency and economy of Medicare services through the creation of additional incentives, without adversely affecting quality. However, features of the demonstration, particularly those regarding the timing of data collection for plan star ratings, call into question whether the demonstration includes additional incentives to increase the efficiency and economy of Medicare services and raise concerns about the agency’s ability to determine whether the payment changes under the demonstration result in increased efficiency and economy compared to the payment methods in place under PPACA. In 2003, Congress authorized the establishment of three types of MA coordinated care plans for individuals with special needs: dual-eligible special needs plans (D-SNP), which are exclusively for beneficiaries eligible for both Medicare and Medicaid; institutional special needs plans, for individuals in nursing homes; and chronic condition special needs plans, for individuals with severe or disabling chronic conditions. Of the three types of SNPs, D-SNPs are by far the most common, accounting for about 80 percent of SNP enrollment as of September 2012.
The approximately 9 million dual-eligible beneficiaries are particularly costly to both Medicare and Medicaid in part because they are more likely than other Medicare beneficiaries to be disabled, report poor health status, and have limitations in activities of daily living. Furthermore, their care must be coordinated across Medicare and Medicaid, and each program has its own set of covered services and requirements. In September 2012, we reported that the 2012 D-SNP contracts with state Medicaid agencies that we reviewed varied considerably in their provisions for integration of benefits. Two-thirds of the 124 contracts between D-SNPs and state Medicaid agencies that were submitted to CMS for 2012 did not expressly provide for the integration of any benefits. To carry out the requirement in the Medicare Improvements for Patients and Providers Act of 2008 that each D-SNP contract provide or arrange for Medicaid benefits to be provided, CMS guidance required that, at a minimum, contracts list the Medicaid benefits that dual-eligible beneficiaries could receive directly from the state Medicaid agency or the state’s Medicaid managed care contractor(s). Like other MA plans, D-SNPs must cover all Medicare fee-for-service benefits, with the exception of hospice, and may offer supplemental benefits, such as vision and dental care. In addition, they must develop a model of care that describes their approach to caring for their enrollees. The model of care describes how the plan will address 11 elements, including tracking measurable goals, performing health risk assessments, providing care management for the most vulnerable beneficiaries, and measuring plan performance and outcomes; and D-SNPs must offer the benefits that allow them to actualize these elements. In our September 2012 report, we examined the supplemental benefits offered by D-SNPs and found that D-SNPs provided fewer supplemental benefits than other MA plans.
However, the individual services covered under vision and dental benefits were generally more comprehensive than in other MA plans. Despite offering these supplemental benefits somewhat less often than other MA plans, D-SNPs allocated a larger percentage of their rebates—additional Medicare payments received by many plans—to these benefits than other MA plans. They were able to do so largely because they allocated a smaller percentage of rebates to reducing cost-sharing. We could not report on the extent to which benefits specific to D-SNPs and described in the model of care were actually provided to beneficiaries because CMS did not collect the information. For the 15 models of care we reviewed, most did not report—and were not required by CMS to report—the number of beneficiaries who received a risk assessment, for example, or the number or proportion of beneficiaries who would be targeted as “most vulnerable.” However, of the models of care we reviewed, past completion rates for risk assessment varied widely among the 4 plans that provided this information. None of the models of care we reviewed reported the number of beneficiaries that were expected to receive add-on services, such as social support services, that were intended for the most-vulnerable beneficiaries. We found that plans do not use standardized performance measures in their models of care, limiting the amount of comparable information available to CMS. Although the D-SNPs are required to report how they intend to evaluate their performance and measure outcomes, CMS does not stipulate the use of standard outcome or performance measures, making it difficult to use any data it might collect to compare D-SNPs’ effectiveness or evaluate how well they have done in meeting their goals. Furthermore, without standard measures, it would not be possible for CMS to fully evaluate the relative performance of D-SNPs. 
We concluded that there was little evidence available on how well D-SNPs are meeting their goals of helping dual-eligible beneficiaries to navigate two different health care systems and receive services that meet their individual needs. Consequently, we recommended in our September 2012 report that CMS require D-SNPs to state explicitly in their models of care the extent of services they expect to provide, require D-SNPs to collect and report to CMS standard performance and outcome measures, systematically analyze these data and make the results routinely available to the public, and conduct an evaluation of the extent to which D-SNPs have provided sufficient and appropriate care to their enrollees. HHS agreed with our recommendations and, in its comments on a draft of our report, said that it plans to obtain more information from D-SNPs. CMS is embarking on a new demonstration in up to 26 states with as many as 2 million beneficiaries to financially realign Medicare and Medicaid services so as to serve dual-eligible beneficiaries more effectively. CMS has approved one state demonstration—Massachusetts—and continues to work with other states. If CMS systematically evaluates D-SNP performance, it can use information from the evaluation to inform the implementation and reporting requirements of this major new initiative. In contrast to MA plans, which have a financial incentive to control their costs, a small number of Medicare private health plans—called cost plans—are paid on the basis of their reasonable costs incurred delivering Medicare-covered services. Medicare cost plans also differ structurally from MA plans in several ways. For example, cost plans, unlike MA plans, allow beneficiaries to disenroll at any time.
Although cost plans account for less than 3 percent of Medicare private health plan enrollment, industry representatives stated that cost plans fill a unique niche by providing a Medicare private health plan option in rural and other areas that traditionally have had few or no MA plans. Under current law, new cost contracts are not being entered into, and contracts with existing cost plans cannot be extended or renewed after January 1, 2013, if sufficient MA competition exists in the service area. Additionally, in general, organizations that offer cost plans and MA plans in the same area must close their cost plans to new enrollment. These provisions were intended to streamline plan offerings by eliminating potentially duplicative plans and those with low enrollment. As part of our 2009 report on cost plans, we also described the concerns of officials from Medicare cost plans about converting to MA plans. We found that the most common concerns cited by these officials from organizations that offered Medicare cost plans were potential future changes to MA payments that may then necessitate closing the plan, difficulty assuming financial risk given their small enrollment, and potential disruption to beneficiaries during the transition. For further information regarding this testimony, please contact James Cosgrove at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Phyllis Thorburn, Assistant Director; Alison Binkowski; Krister Friday; Gregory Giusto; and Eric Wedum. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
| As of August 2012, approximately 13.6 million Medicare beneficiaries were enrolled in MA plans or Medicare cost plans—two private health plan alternatives to the original Medicare fee-for-service program. This testimony discusses work GAO has done that may help inform the Congress as it examines the status of the MA program and the private health plans that serve Medicare beneficiaries. It is based on key background and findings from three previously issued GAO reports on (1) the MA quality bonus payment demonstration, (2) D-SNPs, and (3) Medicare cost plans. This information on cost plans was updated, based on information supplied by CMS, to reflect the status of cost plans in March 2012. In March 2012, GAO issued a report on the Centers for Medicare & Medicaid Services’ (CMS) Medicare Advantage (MA) quality bonus payment demonstration—a demonstration CMS initiated rather than implementing the quality bonus program established under the Patient Protection and Affordable Care Act (PPACA). Compared to the PPACA quality bonus program, CMS’s demonstration increases the number of plans eligible for a bonus, enlarges the size of payments for some plans, and accelerates payment phase-in. CMS stated that the demonstration’s research goal is to test whether scaling bonus payments to quality scores MA plans receive increases the speed and degree of annual quality improvements for plans compared with what would have occurred under PPACA. GAO reported that CMS’s Office of the Actuary estimated that the demonstration would cost $8.35 billion over 10 years—an amount greater than the combined budgetary impact of all Medicare demonstrations conducted since 1995. In addition, GAO found several shortcomings of the demonstration design that preclude a credible evaluation of its effectiveness in achieving CMS’s stated research goal.
In July 2012, GAO sent a letter to the Secretary of Health and Human Services (HHS), the head of the agency of which CMS is a part, stating that CMS had not established that its demonstration met the criteria in the Social Security Amendments of 1967, as amended, under which the demonstration is being performed. In September 2012, GAO issued a report on Medicare dual-eligible special needs plans (D-SNP), a type of MA plan exclusively for beneficiaries who are eligible for both Medicare and Medicaid. Dual-eligible beneficiaries are costly to Medicare and Medicaid in part because they are more likely than other beneficiaries to be disabled, report poor health status, and have limitations in activities of daily living. GAO found that two-thirds of 2012 D-SNP contracts with state Medicaid agencies that it reviewed did not expressly provide for the integration of Medicare and Medicaid benefits. Additionally, GAO found that compared to other MA plans, D-SNPs provided fewer, but more comprehensive, supplemental benefits, such as vision, and were less likely to use rebates—additional Medicare payments received by many MA plans—for reducing beneficiary cost-sharing. GAO could not report on the extent to which benefits specific to D-SNPs were actually provided to beneficiaries because CMS did not collect the information. GAO also found that plans did not use standardized performance measures, limiting the amount of comparable information available to CMS. In December 2009, GAO issued a report on Medicare cost plans, which, unlike MA plans, are paid based on their reasonable costs incurred delivering Medicare-covered services and allow beneficiaries to disenroll at any time. GAO found that the approximately 288,000 Medicare beneficiaries enrolled in cost plans as of June 2009 had multiple MA options available to them.
GAO updated this work using March 2012 data and found that enrollment in cost plans had increased to approximately 392,000 and that 99 percent of Medicare beneficiaries enrolled in cost plans had at least one MA option available to them, although generally fewer options than in 2009. In a March 2012 report on the MA quality bonus payment demonstration, GAO recommended that HHS cancel the MA quality bonus demonstration. HHS did not concur with this recommendation. In a September 2012 report on D-SNPs, GAO recommended that D-SNPs improve their reporting of services provided to beneficiaries and that this information be made public. HHS agreed with these recommendations. |
DOD defines an MWD as any canine bred, procured, or acquired to meet DOD’s requirements to support operations in the protection of installations, resources, and personnel. These requirements include explosive and illegal narcotic detection, patrol, tracking, and other capabilities. As part of their duties, MWDs can be deployed to assist in operations outside of their assigned military installation. MWDs are removed from service when they can no longer perform their duties due to medical or behavioral problems, when they are no longer needed by the military, or in other circumstances, such as when a handler dies in action. In 2000, a law commonly known as “Robby’s Law” was enacted to promote the adoption of MWDs after their military service. According to this law, the military shall make an MWD that is suitable for adoption available for adoption at the end of the dog’s “useful life” or when the dog is no longer needed by the department. Robby’s Law has been amended a number of times since first enacted. Most recently, the NDAA for FY 2016 established priorities among the authorized recipients of MWDs that are removed from service. The amendment generally requires that MWDs be made available first to former handlers, who care for and train the MWDs. The amendment gives second priority to others capable of humanely caring for the MWD, and, finally, it gives the lowest priority to law enforcement agencies. After an MWD is adopted, Robby’s Law provides that “the United States shall not be liable for any veterinary expense associated with [an adopted MWD] for a condition of the military animal before transfer” regardless of whether the condition is known at the time of adoption. While DOD is authorized to establish and maintain a veterinary care system for adopted MWDs, no federal funds may be used for this purpose. DOD uses the term “disposition” to describe the process of removing MWDs from service. 
Disposition of MWDs can be initiated at any military location that has an MWD program. All the military services follow the same process outlined in Air Force Instruction 31-126, which includes the policies and procedures for the MWD program. (See fig. 1.) All decisions regarding the removal of MWDs from service are made by a review board, which includes the Commander of the Air Force’s 341st Training Squadron, a representative of the 341st Training Squadron or designee, a Veterinary Corps Officer (Army veterinarian), and a Veterinary Corps Officer behavioral representative (an Army veterinarian who is trained in animal behavior). Air Force officials told us that the review board may also consult with the Kennel Master, who manages the kennel at the military installation where the MWD is located, as well as the veterinary staff at Joint Base San Antonio, Texas, when making decisions about removing an MWD from service. Air Force officials told us that handlers who are interested in adopting an MWD must communicate their interest to the Kennel Master where the MWD is located. The Kennel Master is responsible for annotating WDMS to show the handler’s interest in adoption, including adding the handler’s name and contact information. The handler is responsible for maintaining contact with the Kennel Master and updating this contact information, if needed. In the event that multiple handlers are interested in adopting the MWD, the Unit Commander of the entity that owns the MWD is responsible for determining which handler’s adoption would be in the best interest of the MWD. Air Force officials told us that in these cases, the most recent handler would typically adopt the MWD. Air Force officials told us that they are in the process of updating their adoption policy. For example, the new policy outlines a method for recording whether the MWD was adopted by a former handler. 
They also told us they plan to update the MWD service record to include a checkbox to indicate whether the MWD was adopted by a handler, and that these service records will be scanned into WDMS. Officials have told us that these procedures will be implemented when the updated Air Force Instruction becomes effective, likely in the spring of 2017. The Army Veterinary Service has the lead responsibility for the medical care of all DOD-owned animals, including MWDs. Specifically, the Army provides medical care for MWDs through its Public Health Command Regions and Activities and the DOD MWD Veterinary Service at Joint Base San Antonio, Texas. During the MWD disposition process, Army Veterinary Corps Officers are responsible for providing a recommendation letter and a consultation/referral form that describes each MWD’s medical condition and suitability for adoption. The Army also maintains a veterinary care system that provides medical care to privately owned animals of individuals with access to medical services at a military installation, including adopted MWDs. The Army charges individuals with privately owned animals for the medical care of their pets. According to Army officials, the charges for veterinary care were developed based on a review of supply costs, estimated manpower costs, historical costs for services, and recommended guidance on cost considerations established by the American Animal Hospital Association. DOD uses three systems to track different types of information about MWDs, including information related to their removal from service. The number of MWDs that have been adopted, transferred, or euthanized has varied over the past 5 years. Officials from the Air Force and Army use three separate systems to track information on MWDs. Two of the systems—WDMS and the Central Repository—are maintained by the Air Force, while ROVR, the electronic medical record system, is maintained by the Army. (See table 1.) 
Each of these systems has a different role in documenting information related to an MWD’s removal from service. WDMS documents the MWD’s status when it is removed from service, including whether the MWD is adopted, transferred, or euthanized. The MWD’s status can be verified using documents maintained in the Central Repository, which is used to store copies of records for MWDs that have been removed from service—most of which are not contained in WDMS. Lastly, ROVR is used to provide medical information for consideration of an MWD’s removal from service and to document an MWD’s euthanization, if needed. Based on our review of data from these systems and related documentation, the number of MWDs adopted or transferred during 2011 through 2015 varied, with the highest numbers in 2012 and 2013. An Air Force official explained that these higher numbers of adoptions and transfers in 2012 and 2013 were due to a decreased need for MWDs during deployments. The number of euthanized MWDs varied to a lesser extent. (See figure 2.) Some of the adopted MWDs included in these data were likely never deployed outside of their assigned military installations. According to Air Force officials, some MWDs may have been acquired by the military but then did not qualify for enrollment in the MWD program due to performance or medical reasons. Other MWDs were enrolled in the program but were removed from service for similar reasons before they were 3 years old. According to Air Force officials, these dogs were also likely never deployed into service. (See table 2.) Available data for 55 percent of the MWDs adopted in 2014 and 2015 indicate that prevalent medical conditions included skin, dental, and musculoskeletal issues. The potential costs for treating these prevalent medical conditions are difficult to determine due to variations in potential courses of treatment and other factors. 
However, we did obtain information on recommended preventative care and estimated costs for older dogs of breeds used by the MWD program from the chief of staff of a network of private veterinary hospitals. Based on our analysis of electronic medical records with master problem lists—available for approximately 55 percent (421 of 772) of the MWDs adopted in 2014 and 2015—we found that the most prevalent medical conditions were as follows: (1) skin conditions or ear infections, (2) dental disease or injury, (3) arthritis or degenerative joint disease, (4) degenerative lumbo-sacral stenosis, and (5) lameness. Some MWDs had more than one medical condition, and as a result, they may have been included in more than one category. (See table 3.) An Army veterinarian told us that “skin conditions or ear infections” and “dental disease or injury”—the two most prevalent medical conditions we identified—are unlikely to result in removal from service as these conditions generally can be treated or resolved. (See prevalent medical conditions 1 and 2 in table 3.) The remaining three prevalent medical conditions we identified are associated with musculoskeletal issues and are more likely to result in MWDs’ removal from service. (See prevalent medical conditions 3, 4, and 5 in table 3.) According to an Army veterinarian, these conditions are common in breeds maintained by the MWD program. For example, degenerative lumbo-sacral stenosis is common in German Shepherd dogs, one of the preferred breeds for the MWD program. The potential costs for treating these prevalent medical conditions may vary based on a number of factors, including the course of treatment, the underlying cause for the condition, and geographic location. According to an Army official and representatives from a national network of private veterinary hospitals, there are no standardized medical treatment protocols for animals that would dictate particular courses of treatment for specific medical conditions. Therefore, costs for these conditions would vary. 
Furthermore, the chief of staff of a network of private veterinary hospitals in New Jersey, which provides free specialty care to adopted MWDs in its area, told us that it would be difficult to estimate treatment costs because some of the prevalent health conditions we identified for MWDs could have different underlying causes, which would serve as the basis for treatment options and costs. For example, lameness could have different root causes, so it would be difficult to estimate treatment costs for this condition without knowing the contributing factors. Adopted MWDs need preventative care regardless of their medical conditions. Based on our analysis, the average age of the MWDs that had electronic medical records with master problem lists in ROVR and were adopted during 2014 and 2015 was about 9 years old, with a range from 1 to 14 years. The chief of staff of a private network of veterinary hospitals in New Jersey provided us with the types of preventative care they recommend for 9-year-old Labrador Retrievers, Belgian Malinois, and German Shepherd dogs—the most common breeds used by the MWD program. The chief of staff also provided estimated costs for these procedures, which are specific to this private network of veterinary hospitals. (See table 4.) An Army veterinarian reviewed the information provided by the chief of staff and concurred that the identified procedures and costs were reasonable. Although owners of adopted MWDs are responsible for the costs of their care, some assistance with privately provided veterinary care is available through nonprofit organizations. Individuals with access to DOD medical care may also purchase care for their adopted MWDs at military installations. However, the types of available veterinary services vary by military installation, and some installations do not offer veterinary services. Owners of adopted MWDs may obtain assistance with privately provided veterinary care through nonprofit organizations. 
Assistance for adopted MWDs is primarily available through the U.S. War Dogs Association, an organization that offers (1) a prescription drug program (free prescription drugs for registered MWDs), (2) free specialty care through Red Bank Veterinary Hospital in New Jersey, and (3) financial assistance of up to $500 for emergency care and up to $100 for euthanasia. About 400 former MWDs were registered with the association as of August 2016, according to the association’s president. In addition to assistance with medical care, the association also finds new homes for adopted MWDs when the owners are no longer able to take care of them. According to Air Force officials, individuals who adopt MWDs receive information about the U.S. War Dogs Association at the time of adoption. These officials told us that this is the only nonprofit organization’s information they provide to individuals adopting MWDs. Other nonprofit organizations that inquire about adopted MWDs are directed to contact the U.S. War Dogs Association. Some assistance with privately provided medical care is also available through other organizations, such as the American Humane organization, which helps cover some medical costs for adopted MWDs when their owners are unable to pay for their care. Officials from this organization told us they currently cover medical care costs for about 21 former MWDs. Information about potential services provided by the American Humane organization is available on its website. Owners of adopted MWDs may purchase veterinary services through DOD if they have access to medical services at military installations. According to an Army official, access to medical care is generally available for active duty servicemembers, their dependents, retirees and their dependents, as well as reservists on active orders. However, the types of veterinary services offered vary by military installation, and some installations do not offer any veterinary services. (See table 5.) 
The Army’s Public Health Center maintains an interactive map on its website that provides information about the types of veterinary services that are available at military installations. According to an Army official, the link for this interactive map is listed on all veterinary service newsletters, brochures, and posters. This website has also been publicized in an Army newsletter for retired soldiers, surviving spouses, and family. We provided a draft of this report to DOD for comment. DOD concurred with the report and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committee and the Secretaries of Defense, the Air Force, and the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in the appendix. In addition to the contact named above, Bonnie Anderson, Assistant Director; Danielle Bernstein, Analyst-in-Charge; Jennie Apter; and Kenisha Cantrell made key contributions to this report. Also contributing were Jennifer Rudisill and Mary Denigan-Macauley.

DOD has used MWDs since World War II to assist and protect servicemembers at installations within the United States and at deployment sites worldwide. As of October 2016, about 1,800 MWDs were in service. The Air Force is responsible for procuring and assigning all MWDs for the military. The Army is responsible for the medical care of all military animals, including MWDs. Questions have been raised as to whether MWDs' experiences during deployment may result in conditions that pose future health challenges. 
Based on those questions, a House Report accompanying the proposed version of the National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to assess end-of-service veterinary care for MWDs. This report examines (1) how DOD tracks information about MWDs, and how many MWDs were adopted, transferred, or euthanized over the past 5 years (2011-2015); (2) prevalent medical conditions of adopted MWDs for 2014 and 2015; and (3) what assistance is available for individuals who adopt MWDs. GAO obtained and analyzed data from the three systems used to track information on MWDs, observed system demonstrations, interviewed Air Force and Army officials, and reviewed related documentation. GAO also interviewed relevant nonprofit organizations that provide assistance to individuals who adopt MWDs. DOD concurred with the report and provided technical comments, which GAO incorporated as appropriate. The Department of Defense (DOD) uses three systems to track information about Military Working Dogs (MWDs), including information related to their removal from service at which time they can be put up for adoption, transferred to a law enforcement agency, or euthanized for health or behavioral reasons. According to an Air Force official, the number of MWDs adopted or transferred over the past 5 years (2011 through 2015) varied based on changes in deployment needs. The number of euthanized MWDs varied to a lesser extent. Based on medical data available for 421 of 772 MWDs adopted during 2014 and 2015, GAO found that the most prevalent medical conditions included skin and dental issues. An Army veterinarian told GAO that these medical conditions are unlikely to result in MWDs' removal from service as these conditions generally can be treated or resolved. Other prevalent medical conditions, such as arthritis, are associated with musculoskeletal issues, which are more likely to result in MWDs' removal from service. 
The veterinarian told GAO these types of musculoskeletal issues are common in breeds maintained by the MWD program, which include Labrador Retrievers, Belgian Malinois, and German Shepherd dogs. While owners of adopted MWDs are responsible for the costs of veterinary care, some assistance with these costs is available through nonprofit organizations, such as the U.S. War Dogs Association. Individuals with access to DOD medical care—such as active-duty servicemembers and their dependents—may also purchase care for their adopted MWDs at veterinary clinics located at military installations. However, the types of veterinary services vary by installation, and some installations do not offer any veterinary services.
In the early 1990s, the Department of Defense (DOD) conducted two major defense reviews—the 1991 Base Force Review and the 1993 Bottom-Up Review—to assess military force structure requirements in the post-Cold War era. Following these reviews, Congress established the Commission on Roles and Missions of the Armed Forces to determine the appropriateness of current allocations of roles, missions, and functions among the armed forces and make recommendations for changes. Among its recommendations, the Commission called for DOD to conduct a comprehensive strategy and force review at the start of each administration, or every 4 years, to examine an array of force mixes, budget levels, and missions to identify the best force mix. In August 1995, the Secretary of Defense endorsed performing a quadrennial review of the defense program. He expected to complete the first such review in 1997. Congress, noting the Secretary’s intention to complete a Quadrennial Defense Review (QDR) in 1997, identified specific reporting requirements for the review in the National Defense Authorization Act for Fiscal Year 1997. Congress expected the QDR to review the defense strategy of the United States and identify the force structure best suited to implement the strategy. Specifically, the law required a comprehensive examination of defense strategy; active, guard, and reserve component force structure; force modernization plans; infrastructure; budget plans; and other elements of the defense program. The law also required DOD to identify how the force structure would be affected by new technologies anticipated to be available by 2005 and by the changes in doctrine and operational concepts that would result from such technologies. DOD issued its report on the QDR in May 1997. 
The law also established an independent, nonpartisan panel comprising national security experts from the private sector, known as the National Defense Panel, to review the results of the 1997 QDR and conduct a subsequent study of force alternatives. Congress noted that it was important to provide for an independent review of force structure that extends beyond the time frame of the QDR and explores innovative and forward-thinking ways of meeting emerging challenges. The National Defense Panel issued its report in December 1997 as required by the statute. DOD began the QDR in November 1996 after the presidential election. Although the President was reelected, the QDR was underway for approximately 2 months before a new Secretary of Defense was confirmed in January 1997. Following his confirmation, the Secretary provided guidance to DOD officials concerning the defense strategy and budget assumptions for the QDR. The QDR included participation by the Office of the Secretary of Defense (OSD), the Joint Staff, the services, and the commanders in chief of the combatant commands. DOD organized officials into three tiers that ultimately reported to the Secretary of Defense (see fig. 1.1). The first tier consisted of seven panels that were tasked to conduct analyses between November 1996 and February 1997. The second tier, an Integration Group led by senior OSD and Joint Staff officials, was designed to integrate the seven panels’ results and produce a set of options to implement the defense strategy. The third tier, the Senior Steering Group, cochaired by the Deputy Secretary of Defense and Vice Chairman of the Joint Chiefs of Staff, was to oversee the QDR process and make recommendations to the Secretary of Defense. 
To assess force structure requirements, DOD’s force structure panel (1) conducted an assessment by modeling two major, overlapping wars on the Korean peninsula and in Southwest Asia in 2006; (2) examined the results of an assessment, led by the Joint Staff, of smaller-scale contingency operations; and (3) led an assessment of the capabilities of U.S. forces against a notional regional great power in 2014. DOD also conducted an analysis of overseas presence and several individual service assessments of some issues not specifically addressed in the other assessments. The modernization panel established task forces to review a number of major planned modernization programs. Its goal was to ensure that future U.S. forces will have equipment that leverages new technologies and supports the modern, joint capabilities cited in Joint Vision 2010, the Chairman of the Joint Chiefs of Staff’s vision for transforming U.S. military capabilities for the future. DOD’s QDR report states that although the threat of global war has receded, the United States will likely face a number of significant challenges between now and 2015. First, the United States will continue to confront regional dangers, including the threat of large-scale, cross-border aggression against allies in key regions by hostile states with significant military power. Moreover, adversaries may use asymmetric means—avoiding conventional military contact—to attack U.S. forces and interests overseas and Americans at home. In addition, failing states may create instability, internal conflict, and humanitarian crises. DOD also concluded that the proliferation of advanced weapons and technologies could increase the number of potential adversaries with significant military capabilities and potentially change the character of military challenges. 
Of particular concern are the spread of nuclear, biological, and chemical weapons; information warfare capabilities; advanced conventional weapons; stealth capabilities; unmanned aerial vehicles; and capabilities to access or deny access to space. Moreover, U.S. interests will be challenged by a variety of transnational dangers, such as terrorism, illegal drug trade, international organized crime, and the uncontrolled flow of migrants. Finally, the United States will face threats to the homeland from strategic arsenals, intercontinental ballistic missiles, and weapons of mass destruction. According to intelligence sources, it is unlikely that a “global peer competitor” will emerge by 2015 with capabilities that could challenge the United States as the Soviet Union did during the Cold War. Furthermore, it is likely that no regional power or coalition will amass sufficient conventional military strength in the next 10 to 15 years to defeat U.S. forces. However, it is possible that a regional great power or global peer competitor, such as Russia or China, may emerge after 2015. On the basis of DOD’s assessment of the global security environment through 2015, the QDR report cited a defense strategy consisting of three key elements: shape, respond, and prepare. The strategy states that the United States must continue to shape the strategic environment by promoting U.S. interests through a variety of means, including the deployment of forces permanently, rotationally, and temporarily overseas. The United States must also maintain the capability to respond to a full spectrum of military operations ranging from deterring aggression and conducting concurrent smaller-scale contingency operations to fighting and winning two major theater wars nearly simultaneously. 
The strategy also cited the need to prepare for a future that may include the emergence of new threats and/or a regional great power or global peer competitor by investing now in force modernization, exploiting the potential of advanced technologies, and reengineering DOD’s infrastructure and support activities. According to DOD, the force structure proposed by the QDR sustains the forces and capabilities needed to meet the demands of the strategy in the near term while also beginning to transform the force for the future. The QDR endorsed a force structure very similar to, although slightly smaller than, that proposed by the Bottom-Up Review. The Secretary of Defense also concluded that DOD should increase procurement funding to $60 billion a year by 2001. To achieve this goal and stay within a $250 billion projected defense budget in constant 1997 dollars, the Secretary directed a reduction of DOD’s infrastructure, cutting almost 200,000 active, reserve, and civilian personnel, and a reduction in funding for some modernization programs, such as the Joint Surveillance and Target Attack Radar System and F-22, F/A-18E/F, Joint Strike Fighter, and MV-22 aircraft. In December 1997, the National Defense Panel reported that the challenges of the twenty-first century will require fundamental changes to national security institutions, military strategy, and defense posture by 2020. To make these changes, the Panel stated that the United States must move more quickly to transform its military and national security structures, operational concepts, equipment, and business practices. Specifically, the Panel stated that DOD placed too much emphasis on preparing for the unlikely probability of two major theater wars because it serves as a means to justify the current force structure. The Panel noted that funds now spent on preserving forces could be better spent on preparing for the future, thereby reducing the risk to long-term security. 
The Panel also said that some of the services’ procurement plans did not advance the transformation of current capability to that needed in the future. It said the procurement budgets of the services remain focused on systems that will be at risk in 2010 to 2020 instead of emphasizing experimentation with a variety of military systems, operational concepts, and force structures. The Panel estimated that $5 billion to $10 billion annually is needed for initiatives in intelligence, space, urban warfare, joint experimentation, and information operations. According to the Panel, these funds should come from acquisition reform and cutting excess infrastructure. However, if these reforms do not materialize, the funds may need to come from reduced operating levels, a smaller force structure, or cancellation of some procurement programs. In response to requests from the Chairman and Ranking Minority Member of the Senate Armed Services Committee and the Chairman of the House Budget Committee, we assessed whether (1) the QDR’s force structure and modernization assessments examined alternatives to the planned force and (2) opportunities exist to improve the structure and methodology of future QDRs. Although we did not evaluate the rationale for the defense strategy cited in the QDR report, we obtained briefings and had discussions with officials in the Office of the Assistant Secretary of Defense for Strategy and Requirements and the Joint Staff about its development and content. We also reviewed reports and interviewed officials in the Defense Intelligence Agency and National Intelligence Council about near-and long-term threats relevant to the strategy. To evaluate the extent to which DOD’s three principal force structure assessments—the two major theater wars, smaller scale contingencies, and future regional great power—analyzed alternatives, we obtained briefings, reviewed documents, and interviewed officials in OSD, the Joint Staff, the services, the U.S. 
Atlantic Command, and the U.S. Central Command. We also obtained and analyzed key assumptions used in these force assessments, such as assumptions about warning time and level of allied participation, and compared these assumptions with those used by the Bottom-Up Review. Moreover, we discussed the rationale for the assumptions with OSD, Joint Staff, and service officials. To evaluate the reliability of computer-generated data produced by the two campaign models used to assess forces during the QDR—the Tactical Warfare Model (TACWAR) for the two major theater war assessment and the Joint Integrated Contingency Model (JICM) for the war with a regional great power—we examined the process DOD uses to validate the models and the data DOD used as model inputs. We reviewed documents on the TACWAR model from the U.S. Army Training and Doctrine Command Analysis Center as well as documents related to JICM. We also reviewed Defense Modeling and Simulation Office documents and interviewed an Office official on DOD’s process of model verification, validation, and accreditation. In addition, we observed TACWAR demonstrations so that we could better understand how the outputs are generated. Although we did not review or validate the actual computer-generated data used as input to the two models, we reviewed various estimates and conclusions that flowed from that data. More specifically, we interviewed OSD officials about the Joint Data Support System as well as DOD and RAND officials about their verification and validation process and means for maintaining data entered into TACWAR and JICM. Also, we evaluated the steps taken by DOD to ensure the quality of data extracted from a major TACWAR data source, the Deep Attack Weapons Mix Study, as well as other sources that served as input. 
We believe this to be a reasonable approach to identifying the strengths and limitations of these models and the data because (1) there are credible sources within the defense community such as the TACWAR users group, RAND, Defense Modeling Simulation Office, and Coleman Research that evaluate the models and (2) running test data through the models was not feasible for time and cost reasons. To evaluate the extent to which the modernization review evaluated alternatives, we obtained briefings and interviewed the cochairs of the Modernization Panel from the offices of the Under Secretary of Defense for Acquisition and Technology, Director for Strategic and Tactical Systems, and the Joint Chiefs of Staff. We also interviewed OSD, Joint Staff, and service officials who supported the Modernization Panel, and we were briefed on and reviewed documents related to the results of 7 of DOD’s 17 modernization task forces. Specifically, we reviewed results for theater ballistic missile defense, the Joint Surveillance and Target Attack Radar System, national missile defense, tactical aircraft, ship acquisition, Marine Corps ground forces, and Marine Corps rotary wing forces. OSD officials and panel representatives did not maintain data on the total modernization funding associated with each of the 17 task forces. To determine whether opportunities exist to improve the structure and methodology of future QDRs, we reviewed documents and interviewed officials from the Office of the Assistant Secretary of Defense for Strategy and Requirements and the Director, Program Analysis and Evaluation, concerning the 1997 QDR process. We drew on our analysis of the process and implementation of the force assessment and modernization reviews to identify and summarize factors that hampered DOD’s 1997 QDR process. We also obtained information on studies initiated by DOD following the QDR’s completion and on DOD’s plans to develop a new joint campaign model. 
We discussed our observations with officials in OSD, the Joint Staff, and the services and obtained their views on the design and implementation of the QDR and ways to improve it. We conducted our review from July 1997 to April 1998 in accordance with generally accepted government auditing standards. The QDR’s major theater war assessment, smaller-scale contingency war game series, and future regional great power assessment used some analytical tools different from those used in the Bottom-Up Review to analyze a broader range of military operations and conduct greater analysis of some key assumptions. These assessments concluded that the current force structure was sufficient to meet the U.S. defense strategy. However, only one—the major theater war force assessment—evaluated any alternative force structures, and those alternatives were limited. Furthermore, none of the assessments fully examined the impact of evolving technologies and operational concepts on future force size and structure. As a result, senior DOD officials recommended a force structure without examining some alternatives that would have provided greater assurance that DOD complied with congressional guidance to identify the best-suited force. According to the U.S. defense strategy, the United States must be able to fight and win two overlapping major theater wars, preferably in concert with regional allies. As part of the QDR force assessment analysis, DOD modeled the sufficiency of U.S. forces to fulfill this requirement. This effort was more extensive than the analysis done during the Bottom-Up Review in that DOD modeled enemy use of chemical weapons, shorter warning time, and some level of initial engagement in peacetime operations. However, other than the current force, the only force structures modeled were those resulting from 10-, 20-, and 30-percent cuts equally proportioned to each service’s forces, according to Joint Staff and OSD officials.
OSD’s Office of Program Analysis and Evaluation and the Warfighting Analysis Division of the Joint Staff’s Director for Force Structure, Resources, and Assessment performed the two major theater war assessment using the TACWAR model and data from the Deep Attack Weapons Mix Study. TACWAR is a theater-level model that assesses force structures and resource allocations within the context of a joint campaign. The model ran on a 12-hour battle cycle, and operators, using their military judgment, could make periodic adjustments to the scenario to correct or revise any results that appeared unrealistic. For example, the model allowed units in a sector to move at their own speed. However, in a realistic situation, units would travel together to protect each other’s flanks. The operators could adjust the speed of the units to ensure that they moved in concert. The results were then weighed against measures of effectiveness drawn from the war game for the Bottom-Up Review. The Deep Attack Weapons Mix Study data came from a recent DOD effort to assess deep attack requirements across the services. A key objective of the study was to analyze weapon mix requirements for DOD’s planned force in 1998, 2006, and 2014 and determine the impact of force structure changes on the weapons mix. TACWAR was developed in the 1970s and has been revised several times. While officials agreed that TACWAR is the best campaign model available at this time, they also acknowledged that it has limitations. For example, it models the ground campaign better than the air or naval campaigns. Also, the model provides an aggregated look at the battlefield, which means it is not very useful for identifying details of the impact of particular weapon systems or force structure changes on the battle or the impact of some new technologies and emerging operational concepts. 
DOD officials used Deep Attack Weapons Mix Study data because they concluded it was the most current and complete information available on force structure, movement into theater, weapon system capabilities, and target locations. Also, according to officials, given the short time frame available to complete the assessment, it was important that the data was in the necessary format for TACWAR and ready to use. The recently completed study, according to one service official, was the most detailed and comprehensive force and weapon mix analysis conducted by the defense community. During the study, the services repeatedly reviewed and revised the data to ensure its accuracy. As a result, while the services did not participate directly in TACWAR’s major theater war assessment, OSD and Joint Staff officials stated they were satisfied the services had sufficient input to the data used in the analysis. To run the major theater war force assessment, OSD and the Joint Staff made assumptions regarding the threat, battle scenario, and other factors. The threat was based on the Defense Intelligence Agency’s projection of Iraq and North Korea as aggressors in 2006. The scenario was taken from defense guidance. It featured the first major theater war starting after a warning period, followed by the second, overlapping major theater war. Defense guidance also provided many of the operational assumptions for the scenario such as warning times, separation times between the two wars, equipment prepositioned in theater, call-up of reserve forces, allied participation, access to overseas bases, and port and transportation availability. However, other assumptions came from the war game analysis used in the Bottom-Up Review. These included assumptions about the readiness of U.S., allied and aggressor forces; that some forces from the first major theater war would be available for the second war’s counteroffensive; and that some forces were already deployed overseas. 
Since TACWAR cannot model command, control, communications, computers, intelligence, surveillance, and reconnaissance effectively, the model was adjusted to degrade munitions effectiveness to represent these projected capabilities, according to Joint Staff officials. The success of U.S. forces in the major theater wars was determined by assessing the risk associated with each phase of the battle and the overall campaigns. OSD and the Joint Staff identified several specific tasks as measures of effectiveness in achieving the operational objectives for each war. These tasks included minimizing allied losses, holding battle lines, and affecting important targets. Operators measured the extent to which these tasks were accomplished during each battle phase and for the war in each model run. The operators were also able to gain insights about critical requirements for battle success, operational abilities of each force, and problems that may be encountered in each war. Once the base-case two major theater war scenario was established, OSD modeled the sufficiency of DOD’s planned forces for 2006, including the new or modernized weapons planned for purchase by that time, according to OSD and Joint Staff officials. It also modeled several excursions based on equally proportioned 10-, 20-, and 30-percent reductions to the forces. For example, a 10-percent force reduction meant the elimination of one Navy carrier battle group, one Army active division, and two Air Force fighter wings, along with some Marine Corps and support units. The 20-percent reduction meant the Army and Navy would lose two units each and the Air Force would lose four wings. With the 30-percent reduction, the Army and Navy would lose three units each and the Air Force would lose six wings. There would also be commensurate reductions in Marine Corps and support units. OSD and the Joint Staff also modeled other excursions from the base-case two major theater war scenario. 
They included shorter warning time, the enemy’s use of chemical weapons in both wars, and a combination of both short warning and the use of chemical weapons. Each of these excursions required DOD to make more assumptions in addition to those already made. The shorter warning excursion assumed the U.S. forces were given fewer days’ notice in advance of the start of the second war than in the base-case scenario. According to a Joint Staff official, the chemical excursion modeled a realistic scenario for the U.S. force and allies, which was neither a best nor worst case situation. This included assumptions about weather conditions, the number and type of weapons, and delivery methods. Information for this scenario was drawn from Defense Intelligence Agency data on the type and number of weapons in the enemies’ inventories and how the enemies would deliver those weapons. Information such as dispersion rates and lethality of chemical agents modeled came from the Army Chemical School. In many of the excursions, OSD and the Joint Staff also modeled the impact of U.S. forces being engaged in various types of operations around the world, such as humanitarian assistance or peacekeeping operations, when the first major theater war started. The Joint Staff was responsible for modeling these excursions, analyzing the results of the battles, and determining the risk levels to assign to the battle based on the accomplishment of the specified tasks. As shown in table 2.1, excursions were run for each of the different force levels—the current projected force and 10-, 20-, and 30-percent reductions—using the base-case two major theater war scenario. However, not all force levels were modeled against all variables because, according to officials, the resulting risks for some force levels would be too high. U.S. 
forces won the two wars in every excursion modeled, but their effectiveness in achieving all the specific tasks varied to the point that the risks associated with some excursions were unacceptable, according to OSD and the Joint Staff. As a result, DOD officials concluded that a force close in size and structure to the current one would be needed to win two, nearly simultaneous major theater wars in concert with regional allies. However, the analysis also showed that a slightly smaller force would be able to win without a significant increase in risk in the base-case scenario. When chemical weapons or shorter warning times were involved, the current force was necessary to conduct these operations with an acceptable level of risk. Although the analysis showed that a slightly smaller force was able to meet many of the two-war requirements without a significant increase in risk, OSD did not refine the analysis to model other force reductions, such as 5 or 15 percent, to see if they would produce viable force options. OSD also did not model alternatives that would have affected the services’ forces unequally, such as a small reduction to one service’s forces paired with no reduction, or even a slight increase, to the other services’ forces. An OSD official stated that, given the time available to perform this assessment, OSD would not have been able to obtain consensus among the services on what smaller force reductions should look like or how unequal force reductions should be taken. Also, OSD and Joint Staff officials stated that the TACWAR model is not sensitive enough to effectively model slight changes in forces. As a result, information on potential alternatives to the current force was not available to the Secretary of Defense for determining the best-suited force to carry out the strategy.
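The equally proportioned reductions described above scale linearly with the size of the cut. The short sketch below is illustrative only: the function name and data structure are ours, the unit counts per 10-percent increment come from the text, and the commensurate Marine Corps and support cuts are omitted.

```python
# Illustrative sketch of the equally proportioned force-reduction
# excursions described above. Per the text, each 10-percent increment
# removed one Navy carrier battle group, one Army active division, and
# two Air Force fighter wings (Marine Corps and support cuts omitted).

CUTS_PER_10_PERCENT = {
    "Navy carrier battle groups": 1,
    "Army active divisions": 1,
    "Air Force fighter wings": 2,
}

def force_cuts(reduction_percent: int) -> dict:
    """Units eliminated under a 10-, 20-, or 30-percent proportional cut."""
    if reduction_percent % 10 != 0:
        raise ValueError("excursions were modeled only in 10-percent steps")
    increments = reduction_percent // 10
    return {unit: count * increments for unit, count in CUTS_PER_10_PERCENT.items()}

for pct in (10, 20, 30):
    print(f"{pct}-percent reduction: {force_cuts(pct)}")
```

The sketch simply makes explicit why each excursion’s cuts were fixed multiples of the 10-percent slice; as the text notes, intermediate steps such as 5 or 15 percent and unequal cuts across the services were never modeled.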
While the major theater war assessment modeled the modernized force planned for 2006, which includes such things as stealth technology and precision-guided missiles, DOD did not fully examine how new technologies might affect future operational concepts or force structure. For example, as a result of its Army Force XXI initiative, the Army plans to begin fielding units that will have an enhanced situational awareness of the battlefield through digital technology by 2006. Also, the Air Force has proposed an alternative concept of operations using massive air strikes at the beginning of a war, with more munitions than currently planned, to rapidly halt the enemy’s advance and provide more time for a ground buildup. Yet, neither was modeled during the major theater war analysis. OSD and Joint Staff officials stated that they did not analyze the effects of new technologies or concepts because the TACWAR model is not sensitive enough to do so. They also stated that the services are not far enough along in their understanding of how new technologies and concepts will affect war-fighting doctrine. According to the U.S. defense strategy, the U.S. military must be prepared to successfully conduct multiple, concurrent smaller-scale contingency operations worldwide in any environment, including one in which an adversary uses nuclear, chemical, or biological weapons. The QDR’s primary assessment of the ability of U.S. forces to respond to such operations was the Dynamic Commitment war game series. This series of conferences and war games was designed to evaluate whether the planned force was sufficient to meet the demands of the full range of military operations from 1997 to 2005 and how engagement in smaller-scale contingencies might affect the forces’ ability to respond to major theater wars. 
While this assessment provided several insights into how forces were allocated to a wide range of operations, it did not evaluate alternative force structures to identify the force best suited to meet the demands of the defense strategy. During the QDR, DOD expanded on the Bottom-Up Review’s examination of force requirements for smaller-scale contingencies. Smaller-scale contingency operations encompass the full range of military operations other than peacetime engagement activities but short of a major theater war. These operations include peacekeeping, humanitarian assistance, noncombatant evacuations, limited strikes, and disaster relief. DOD expects the demand for such operations will remain high over the next 15 to 20 years and that these operations will pose the most frequent challenge to U.S. forces through 2015. According to the QDR, U.S. forces must also be able to withdraw from these contingencies, reconstitute, and then deploy to a major theater war within the required time. The Joint Staff developed the Dynamic Commitment war game series to test whether the currently planned force structure was sufficient to execute the range of potential military operations. The Joint Staff also designed the series to help the services identify stress points—forces that sustained high operating tempo in conducting multiple contingency operations. Dynamic Commitment was not designed to evaluate the forces’ effectiveness, according to OSD officials. The forces were assumed to be ready when called upon and effective in meeting operational requirements. Two major theater wars were incorporated in the war game series to test the forces’ ability to sufficiently respond when some forces were already deployed to smaller-scale contingencies. 
During Dynamic Commitment, participants from the Joint Staff, combatant commands (geographical and special operations), and service staffs (including reserve components and the Coast Guard), allocated forces to multiple, overlapping smaller-scale contingencies and major theater wars forecasted over 9 years. Nearly 50 notional smaller-scale contingencies were developed to illustrate the full spectrum of potential U.S. military operations short of a war. The contingencies consisted of interventions, shows-of-force, no-fly zone enforcement, maritime sanction enforcement, disaster relief, peacekeeping, noncombatant evacuations, and humanitarian assistance. The contingencies were based on the type, duration, and general frequency of such operations since 1991. Scenarios were developed using defense guidance and combatant command operational plans. Prior to the game, a concept of operations and list of associated forces for each operation were approved by game participants from OSD, the Joint Staff, combatant commands, and the services. During the game, participants—primarily combatant command and service planners—allocated forces to these sequential and sometimes simultaneous military operations, considering the world situation and the need to reserve forces to respond to other potential crises, including major theater wars. While the participants generally allocated forces to a contingency using the previously developed force list, they could change the forces based on military judgment or sometimes their availability. For example, in one case, U.S. forces were deployed to a large-scale intervention when events in two other areas of the world became concerns. Rather than send an Army air assault brigade to one of the two areas as a show of force as originally planned, participants decided to deploy Air Force fighters and a Navy aircraft carrier and hold the Army’s one remaining uncommitted air assault brigade in reserve. 
DOD officials had differing views about whether the force allocation process in Dynamic Commitment resulted in the appropriate size and mix of forces being allocated to military operations. According to some game participants, there was a perceived need for each service to maximize the allocation of its forces to justify them and avoid force reductions. As a result, more forces than necessary may have been allocated to some operations. However, Joint Staff officials asserted that the force allocations during the game were appropriate, since they were generally consistent with those used in actual deployments and each service was there to ensure that others were not over-allocating their forces. As a result of the Dynamic Commitment war game series, DOD officials concluded that the projected U.S. force is sufficient in size, though stressed, to execute the defense strategy and that some forces already known to be stressed would continue to be so. Another significant insight was that sequential deployments to smaller-scale contingencies may have a cumulative, negative impact on the all-volunteer force. The series confirmed that high operating tempo remains an issue for previously identified “low density/high demand” assets—those major platforms, weapon systems, units, and personnel that are in continual high demand to support worldwide joint military operations and that are available in relatively small numbers. The series also identified other forces that were in high demand, such as military police and Army signal units. According to DOD, the series helped identify forces that services should not cut and provided valuable insights into managing the force and the challenges of responding to multiple, overlapping smaller-scale contingency operations. 
Some service assets, identified as “low density/high demand” assets, are managed by the global military force policy, which establishes peacetime prioritization guidelines to assist senior leaders in allocating these assets for crises, contingency operations, and long-term operations. These assets include the Airborne Warning and Control System; the EA-6B electronic warfare aircraft; and civil affairs units. According to Joint Staff officials, the Dynamic Commitment series affirmed their value and gave the services insights into managing them. The series also identified issues critical to ensuring that U.S. forces can transition from smaller-scale contingencies to wars. For example, it found that in the case of mobilization for a major theater war, the logistics of redeploying forces already committed in various regions around the world would be difficult and could seriously strain mobility and support forces. Although they did not summarize the results to make force structure recommendations or decisions based on the series, Joint Staff officials said the analysis provided insights into which forces should not be cut. It also made clear that there is much work still to be done in assessing the impact and managing the demands of smaller-scale contingencies. Participants also discussed the potential impact of weapons of mass destruction and the consequences of limited theater access during the series. According to Joint Staff officials, the pace of force deployment slowed when chemical weapons were introduced. Also, the use of these weapons raised awareness of force protection and the advantage of forces operating at a distance from the battle. While the Dynamic Commitment series did yield some insights, DOD did not use it to identify or analyze any changes to DOD’s current force structure.
Evaluating alternatives might have led DOD to consider reducing some combat or war-fighting capabilities and adding others more suitable to the specialized needs of smaller-scale contingencies. Such alternatives could help alleviate operating tempo problems while maintaining forces capable of winning two major theater wars with acceptable risk. Moreover, the services’ analyses of the Dynamic Commitment data generally confirmed that certain parts of their forces were sustaining a high operating tempo. Had the Joint Staff or OSD centrally analyzed the data, they might have gained insights on how to better balance requirements for smaller-scale contingencies and wars across all services or identified excess or low-utility capabilities that could be reduced. To test the U.S. ability to defeat a regional great power in the 2010-2015 time frame, DOD officials believed it was important to analyze an aggressor with greater capabilities than are currently anticipated for Iran, Iraq, or North Korea. The regional great power assessment attempted to examine this potential by modeling projected U.S. weapons and forces modernized at various levels against a notional enemy. However, this assessment did not analyze alternatives that varied the mix of DOD’s planned modernization programs to help identify the most cost-effective investments. Also, it did not fully assess the potential impact of new technologies on future operational concepts and force structure. Even though the services are exploring new doctrine arising from advanced weapons, DOD officials believe that these efforts cannot be modeled yet. OSD considered using TACWAR to model the conflict between the invading enemy nation and allied forces. However, much of the baseline data needed for TACWAR to perform this assessment was not available in the level of detail needed and would have taken 6 months to prepare. 
As a result, OSD decided to use JICM, a multiple theater combat model developed by RAND, because it requires less definitive data to model campaigns. The scenario for the regional great power assessment involved an air/land military conflict on a hypothetical continent in 2014. A large and technologically advanced regional great power had invaded its weaker neighbor to prevent its entrance into a fictional alliance. The United States was allied with a medium-sized power that bordered the weaker nation. The U.S. objective was to repel the aggressor nation’s forces and push them back to the pre-war border. OSD officials told us that they used this scenario because they did not want to identify any particular country as the focus of U.S. threat planning. Developing the scenario required assembling large amounts of data that were not readily available. OSD constructed the hypothetical scenario using primarily Defense Intelligence Agency information regarding terrain, forecasted orders of battle, and weapon systems of current major powers. The enemy nation’s capabilities were extrapolated from intelligence data on a major power after examining projected data for several potential adversaries. Its capabilities included large numbers of armored vehicles that were moderately technologically advanced. The intelligence community’s projection of the threat data assumed a moderate level of economic growth for the enemy nation. The United States committed 75 percent of its forces to this effort. U.S. forces consisted of those projected for 2014, reflecting the services’ 1997 force structure and modernization projections. The total number of U.S. and allied ground and air forces employed were about 80 percent of the enemy’s, but U.S. and allied forces possessed more advanced air and ground forces than the enemy nation. (See fig. 2.1.) According to OSD officials, several key assumptions were made for the regional great power assessment. 
JICM assumed that each side had equal intelligence on the activities of the other. In addition, it assumed that projected mobility forces were available and in working order and that support forces were ready and available. Success in a war with a regional great power was based on assessing the extent to which U.S. and allied forces accomplished specific tasks, such as minimizing allied losses, moving battle lines, and returning the enemy to its pre-war border. Service officials criticized the regional great power scenario for not representing a full range of threats that would require a broader range of joint war-fighting capabilities. For example, Navy officials told us that main combat actions in the scenario occurred too far inland for naval aviation to make an effective contribution to the war or for amphibious landings to be modeled at all. In general, maritime warfare was depicted only in a separate, supporting mobility analysis. An Air Force official stated that the proximity of the hypothetical continent to the United States was favorable to airlift capabilities. Like TACWAR, JICM is an aggregate model and not sensitive enough to show the impact of anything other than major changes in force structure, according to OSD officials. Also, service officials told us that JICM did not simulate their forces’ capabilities well. For example, Army officials complained that the theater-level focus of JICM modeled aircraft and air-delivered weapons more accurately than ground forces. Therefore, the contribution of different ground forces is not as clearly discernible as various types of air power. An Air Force official said the use of the Air Force’s space assets also could not be modeled with JICM. According to OSD officials, the results of the assessment reassured them that the 1997 modernization program was the correct one to follow for the foreseeable future. They ran numerous excursions with varying levels of modernization, warning time, and ballistic missile threat.
In no excursion were the United States and its allies in danger of losing the war. However, DOD concluded that some excursions caused unacceptable levels of risk that the United States and its ally would not achieve their specific tasks. The regional great power assessment modeled four levels of modernization: the 1997 force; the 1997 force extended to 2014; and one-third and two-thirds of the 1997 extended force. The results showed that the more modernized the force, the faster the adversary was defeated, with less risk. In addition, the results showed that most of the benefits gained by modernization were achieved by the one-third modernized force. Increased levels of modernization did not significantly affect the final outcome of the war but did further reduce the risks. JICM’s other excursions also provided insights, according to DOD officials. Warning time before invasion of the victim nation by the adversary was varied in several excursions. The results showed that the shorter the warning time, the longer it took U.S. and allied forces to evict the adversary. Although the enemy possessed a missile threat in all excursions, some excursions examined U.S. capabilities against an enemy with a substantially increased missile threat. Officials viewed this robust tactical ballistic missile threat as comparable to chemical weapons employment. The results showed that enemy missile attacks delayed but did not prevent the eventual allied victory. DOD’s regional great power assessment did not examine alternatives to the mix of modernization programs reflected in DOD’s 1997 program. Moreover, neither force structure options nor the final modernization decisions in the QDR report were analyzed in the regional great power assessment. As in the major theater war assessment, OSD considered analyzing reductions to the force by 10, 20, and 30 percent, but these were not pursued for three reasons.
First, OSD could not reach consensus with the services on the nature of the reductions because the scenario took place so far into the future. OSD officials told us that imposing reductions to the projected force without agreement would strain the credibility of this assessment with the services. Second, JICM models the campaign at too aggregate a level to show how changes in the force structure may make a difference in a conflict. Third, OSD officials decided to focus on modernization rather than force structure because they thought senior officials could benefit more from knowing the potential impacts of modernization on future wars. Finally, despite the time frame for the regional great power assessment, no innovations in doctrine or operational concepts were modeled. OSD officials told us that the services’ exploration of new doctrine arising from advanced weaponry was not mature enough to be modeled. U.S. forces were modeled in large, proportional modernization slices, that is, one-third, two-thirds, and full. There was no attempt to analyze varied mixes of air, ground, and maritime modernization to test their effectiveness. Although these slices were based on modernization plans, varying the mix might have provided more insight into modernization trade-offs. Although the QDR modernization assessment was finished before the end of the regional great power assessment, OSD did not model the modernization decisions, saying that there was little interaction between the two assessment processes and that they had insufficient time to develop the data needed to model the results. DOD’s modernization review examined some variations of the services’ planned modernization programs but did not reflect a thorough, mission-oriented approach to assessing the mix of capabilities the United States will need to counter future threats. 
The Modernization Panel’s assessments were divided into 17 topics, such as theater air and missile defense, tactical aircraft, and ground systems, and did not include formal analyses of trade-offs among the topics. While DOD officials said they considered Joint Vision 2010 capabilities, the review did not provide adequate assurance that the decisions reached represent the best mix of capabilities needed for a future in which emerging threats could generate requirements that differ significantly from the current mix of U.S. capabilities. Rather, the Panel’s work consisted mostly of developing options to restructure some programs to provide a plan that DOD believes can be implemented within an expected procurement budget of $60 billion annually. Further, the Modernization Panel’s analyses were not fully integrated with the work of the Force Assessment Panel. As a result, the QDR did not sufficiently examine linkages and trade-offs between force structure and modernization decisions. In November 1996, DOD formed the Modernization Panel cochaired by senior officials from OSD and the Joint Staff. The Panel was instructed by OSD to evaluate the services’ modernization programs by looking at what is needed to sustain the force with modern equipment and superior technology. It identified 17 topics, grouped into three broad categories: cross-cutting issues, equipment-focused issues, and technology and acquisition issues. The topics and some of the systems examined are included in table 3.1. A separate task force of service, OSD, and Joint Staff officials was assigned to analyze each topic and arrive at a set of options. The objective of each task force, according to DOD officials, was to propose affordable plans for procuring systems that would modernize equipment and technology based on their view of capabilities for Joint Vision 2010, maximize jointness, and minimize the time to develop them.
According to Panel officials, affordable meant that DOD assumed its procurement budget would increase to and then remain at about $60 billion a year by 2000. As a result, task forces were asked to examine the projected funding for systems beyond the Future Years Defense Program to 2015, based on then-current procurement plans, and determine whether systems or groups of related systems were affordable, that is, whether they represented an appropriate share of the procurement budget, given procurement plans for other types of systems. For example, the tactical aircraft task force developed options to reduce out-year funding requirements for tactical aircraft systems because then-current procurement plans for the Joint Strike Fighter, F-22, and F/A-18E/F would require a significantly larger share of procurement funds than was allocated to tactical aircraft in 1998. The task force examining the Navy’s ship acquisition program also explored options to reduce out-year funding requirements. Allowing these programs to go forward as planned would have required senior DOD officials to decrease funding for other types of systems to maintain overall procurement spending at $60 billion annually. DOD was not able to provide the amount of planned funding for each of the 17 topics, but officials estimated that total annual procurement plans for the systems amounted to approximately $40 billion, or about two-thirds of DOD’s planned annual procurement budget. The task forces did not review some planned modernization efforts, such as antisubmarine and electronic warfare or minor procurement. The Panel directed the task forces to assess the acquisition plans reflected in the fiscal year 1998 Future Years Defense Program and to consider increasing or decreasing funding allocated to each group of systems by up to 10 percent as a means of encouraging them to develop options to modify planned programs.
According to DOD officials, the task forces began briefing their options to the Modernization Panel and to senior DOD officials in February 1997. Neither the Panel nor the task forces made recommendations; each only proposed options. Soon thereafter, the Senior Steering Group directed the task forces to identify adjustments to the fiscal year 1998-2003 budget based on the options; the programmatic risk associated with each option; how the option would affect the military’s capability to implement the defense strategy; the impact of the option on the industrial base; and the statutory, regulatory, and other external barriers to implementing the option. In general, DOD’s modernization decisions modified, but did not cancel, service procurement plans. The Secretary of Defense described the modernization decisions in the QDR as a modest reduction in some of the programs to ensure that the total program is realistic and executable within the budget. Some decisions decreased the number and delayed the procurement of some systems, reducing associated funding. For example, to sustain procurement of tactical aircraft systems at an affordable rate, DOD reduced the Air Force’s plan to buy F-22s from 438 to 339 and delayed its full production time line. The Navy’s plan to buy 1,000 F/A-18E/Fs was reduced to 785 with a provision to buy only 548, depending on the timely success of the Joint Strike Fighter. The number of Joint Strike Fighters was also reduced. In total, these changes reduced the services’ $270 billion funding estimate for these aircraft by over $30 billion, or more than 10 percent. Another task force examined the Navy’s shipbuilding program. The Navy had planned to build up to 10 ships a year between 2004 and 2015, but that would increase annual spending in those years to over $12 billion, well above the fiscal year 2001-2004 average of $7.9 billion.
After examining the number of ships planned for 2015 and the associated annual shipbuilding costs, the task force presented an option to reduce the 334 ships planned for 2003 to 303 and thereby reduce the annual shipbuilding estimate to between $8 billion and $8.8 billion. The task force suggested that the annual savings in operating and support costs associated with maintaining fewer ships could be used to increase the capabilities on new ships and modernize existing ones. Other modernization decisions proposed increases to investment in some areas. For example, DOD increased its investments in biological and chemical defense by approximately $1 billion and national missile defense by about $2 billion. Furthermore, DOD set aside $1 billion over the next 6 years for minor cost overruns and funding disruptions to ensure the stability of modernization programs, according to DOD officials. The Modernization Panel’s stovepipe approach to analyzing the services’ procurement plans may have helped the task forces provide senior DOD officials with budget-based options for changing planned system modernization, but it did not provide an integrated look at how the options or final decisions would affect joint war-fighting missions. For example, capabilities that might be used for close air support functions, such as helicopters, tactical aircraft, and C4ISR systems, were evaluated as separate topics by different task forces. We have previously reported on the benefits of looking at modernization from an integrated mission perspective. The Chairman of the Joint Chiefs of Staff’s Joint Vision 2010 also focuses on the need to achieve new levels of effectiveness in joint war-fighting. Noting today’s smaller forces, the Chairman stated: “Simply to retain our effectiveness with less redundancy, we will need to wring every ounce of capability from every available source.
That outcome can only be accomplished through a more seamless integration of Service capabilities.” Furthermore, he stated that technology trends will provide an order of magnitude improvement in lethality that clearly offers promise for reducing the number of platforms and the amount of ordnance required to destroy targets. Citing budget realities, he also stated that DOD needs to be selective in the technologies it chooses to invest in and will have to make hard choices to achieve the trade-offs that will bring the best balance, highest capability, and greatest interoperability for the least cost. According to Modernization Panel officials, neither their panel nor the task forces performed the type of integrated analyses of options across topics that could facilitate modernization trade-offs. Some said that such a perspective might have been provided by senior DOD officials at higher tiers of the QDR organization when they examined the different procurement options. Panel officials pointed to the senior officials’ decision to examine Army ground and Marine ground force systems together rather than separately as evidence that at least some task forces were asked to look across some topics. However, other officials did not think that anyone systematically looked across the options to see their impact on joint war-fighting missions. In September 1996, just prior to the QDR, we identified the benefits of evaluating modernization options from a joint perspective and the urgent need for such information, given the hundreds of billions of procurement dollars involved. In our report on combat air power, we concluded that DOD is proceeding with some major investments without clear evidence the programs are justified because of their marginal contribution to already formidable capabilities, the changed security environment, and less costly alternatives. 
In its comments on our report, DOD agreed that mission assessments can improve understanding of military capabilities and limitations and are important to decision-making, but asserted that it has mechanisms to provide that perspective. We recognized steps by DOD to improve the information available on combat requirements and capabilities through studies, the Joint Requirements Oversight Council, and its 10 supporting war-fighting capability assessment teams, but we noted that they had little impact on weighing alternative ways to recapitalize U.S. air power forces. We also reported that while the individual services conduct considerable analyses to identify mission needs and justify new weapon program proposals, these needs are not based on assessments of the aggregate capabilities of the services to perform war-fighting missions. Furthermore, DOD does not routinely review service modernization proposals from such a perspective. We believe that the QDR was such an opportunity and that information on recapitalization alternatives and redundancies in capabilities, developed from a joint war-fighting perspective, would have been invaluable to decisionmakers who must allocate defense resources among competing needs to achieve maximum force effectiveness. Without such mission analyses, it is not clear whether DOD’s QDR modernization decisions will simply replace current systems or buy the most effective mission mix of new systems to respond to future threats. The QDR independent force assessment and modernization reviews were both performed between November 1996 and February 1997 and, according to DOD officials, did not fully consider the results of each other’s work as bases for identifying potential trade-offs.
Although senior DOD officials considered broad trade-offs between force structure and modernization at the macro level in determining which of three paths to adopt to meet near- and long-term challenges, we believe that more in-depth analysis of the relationship between force structure and modernization issues would have enhanced the value of DOD’s review. Modernization Panel officials said that the Panel’s task forces did not consider changes in force structure in their deliberations. Furthermore, as noted in chapter 2, the regional great power force assessment, which evaluated the aggregate impact of modernization on force effectiveness in a future war, modeled DOD’s fiscal year 1997 modernization procurement plans. It did not model the QDR modernization decisions. Some Panel officials suggested that a better linking of the two assessments could improve the quality of the QDR, because changes in force structure could affect the size of some procurements. Moreover, as suggested in Joint Vision 2010, leveraging new technologies should increase defense capabilities and could thereby offer opportunities to affect force structure. For example, as part of its Army Force XXI future force transformation initiative, the Army is designing, testing, and fielding new potentially smaller division designs to capitalize on digital technology and give commanders and soldiers better capability to gather and share information. In written comments on a draft of this report, DOD asserted that we characterized the QDR’s modernization options as “budget driven” and based “solely” on a plus-and-minus 10-percent rule. While acknowledging that the overall modernization budget was a central concern of the QDR, DOD said that the primary factor influencing the modernization analyses was the capabilities of current and planned systems. 
We agree that the Panel’s guidance to the task forces in proposing alternatives based on budget parameters was not the task forces’ sole consideration when developing modernization options. In fact, our report specifically said that the task forces were directed to develop options that would consider the capabilities required for Joint Vision 2010, maximize jointness, and minimize the time needed to develop them. However, we continue to believe that the Panel’s methodology for the modernization review resulted in a primarily budget-driven focus rather than a mission-oriented approach. According to the Panel’s leadership and other participants, proposing budget parameters of plus-or-minus 10 percent was the means the Panel used to encourage the task forces to develop options for their specific group of systems. These budget parameters were further evident in the task forces’ options on tactical aircraft and other modernization topics. DOD cited the tactical aircraft decisions as an example where significant technical or other capability advantages of next-generation systems over current systems resulted in force structure-modernization trade-offs. However, while the task force analyses of the F-22 resulted in an option to reduce aircraft by nearly 100 (from 438 to 339), possibly changing the future mix of tactical aircraft, DOD did not examine other options, such as whether advanced technologies like stealth could reduce the Air Force’s 20 fighter wing force structure. Further, the reductions in F/A-18E/Fs and Joint Strike Fighters were generally based on a proposal that fewer aircraft would be sufficient to replace existing aircraft and affordable within the budget, not because the Navy expects to reduce its force structure by cutting the number of carrier fighter wings.
DOD can enhance the value of the next QDR by providing formal oversight of QDR preparation efforts, improving models and other analytical tools, and considering changes to the QDR’s structure and design. The Secretary of Defense has not yet established formal oversight at a senior level to facilitate preparation activities for the next QDR, including completion and coordination of follow-on studies to the 1997 QDR. Moreover, although DOD has an effort underway to improve its theater war models to overcome significant limitations in simulating intelligence and other capabilities, it has not determined how to improve its analyses of other types of military operations, such as smaller-scale contingencies and scenarios involving longer-term threats. Changing the timing of the panels’ work, building greater collaboration among some panels, and delaying the QDR until later in the new administration’s term may also provide a more thorough review. Finally, if Congress determines that a panel of experts should provide an independent view of defense requirements, it might require the panel to complete its work earlier so that DOD can consider the panel’s views when conducting the QDR. Although there is no current statutory requirement for another QDR and DOD has not taken formal steps to institutionalize a QDR process, the Secretary of Defense has endorsed the QDR as a continuing process. OSD officials who played a key role in DOD’s 1997 review stated that there is a widespread assumption throughout DOD that the Department will conduct another QDR following the 2000 election. DOD has some initiatives underway that could help it prepare for its next review. For example, DOD is working to improve some analytical tools and is performing some follow-up studies to the QDR. These efforts could equip DOD to perform valuable analyses of its planned force before the next QDR begins. 
However, DOD has not yet developed plans to improve other tools and analyses that could be important for the next QDR. Moreover, it has not ensured that its efforts will be coordinated and completed in time for the next review. DOD has efforts underway to improve some of the analytical tools used in the 1997 QDR. It is developing a new campaign model, called JWARS, and is looking at ways to improve others, such as TACWAR, as well as supporting data, to alleviate some of the current campaign modeling limitations. We did not identify comparable efforts by DOD to improve the analyses of smaller-scale contingencies or conflicts with future adversaries who have advanced technologies. Completing these efforts in a timely manner would enhance the potential for the next QDR to provide better analyses of alternatives. According to DOD officials, JWARS is expected to improve DOD’s ability to evaluate the forces’ effectiveness in combat operations. Documents provided by the JWARS Office note that current theater-level simulations, including TACWAR, have limitations that make them only “somewhat” or “poorly/not at all” capable of simulating a number of combat activities (see table 4.1). DOD expects that, based on the current development and funding schedule, which was planned to coincide with the next QDR, an initial version of JWARS should be available for the next review. DOD expects this version to be useful in analyzing the sufficiency of the force. Subsequent versions of JWARS are expected to be capable of analyzing force and capability trade-offs, force planning, and force structure design as well as system alternatives, system trade-offs, and operational concepts. DOD’s Joint Analytic Model Improvement Program is another effort that DOD has underway to improve its models. The objective of this program, which is directed by OSD’s Office of Program Analysis and Evaluation, is to determine how current models such as TACWAR should be improved.
The program is tracking and coordinating the models’ improvement schedules with JWARS’ introduction. Gathering and maintaining the large quantities of data needed to run the models is another challenge DOD faces. In the past, DOD lacked a central repository for data, forcing users to recreate data on threats, targets, and other factors whenever they began a new study. DOD officials told us that the Department has established the Joint Data Support System to centrally store and update this data. The system will include information on U.S., allied, and enemy orders of battle, terrain, and weapon systems’ capabilities, in addition to other data developed for the Deep Attack Weapons Mix Study. This system will be linked to JWARS and will be easier to update than current methods. Although DOD has several efforts underway that should improve the quality of its major theater war assessments for the next QDR, it has not determined how to improve its assessments of force requirements for smaller-scale contingencies. Although DOD officials saw the Dynamic Commitment war game series as a valuable exercise in examining the implications of a post-Cold War environment in which smaller-scale contingencies may occur frequently, DOD did not use the exercise to identify and examine force structure alternatives. As noted in chapter 2, the war game series was primarily an exercise in allocating planned forces to military operations based on participants’ military judgment. DOD does not have an effort underway to analyze how Dynamic Commitment could be improved for the next QDR or replaced by another analytical tool. Examining ways to improve the Dynamic Commitment war game so that it can be used to identify and examine force structure alternatives would be a valuable step in preparing for the next QDR.
DOD also needs to determine how it can improve its analysis of requirements for conflicts against future adversaries who may have access to advanced technologies or employ asymmetric concepts of warfare. At the same time, DOD will need to consider how to model new technologies such as digitization that are expected to be employed by U.S. forces in the future as well as the changes in operational concepts and doctrine that could result from such technologies. As noted in chapter 2, DOD’s regional great power assessment did not model changes in doctrine or operational concepts that could result from technological advances or place much emphasis on asymmetric warfare. In addition, DOD officials built the database for the regional great power analysis during the 3 to 4 months allocated for the QDR force assessments. According to OSD officials, this was a time-consuming process that reduced the time available to examine alternatives to the programmed force. Preparing for the next QDR by working with the intelligence community and other sources to develop a database containing detailed information on future enemy and allied capabilities, targets, and weapon performance could help DOD focus its QDR assessment on examining alternatives. As part of its preparation for the next QDR, DOD could run analyses of its existing forces that could serve as the basis for comparison to force alternatives caused by changes to strategy or other factors. During the 1997 QDR, DOD spent much of its time modeling the 1997 force’s ability to fight and win two major theater wars, meet the demands of smaller-scale contingencies, and fight a regional great power. Had these force assessments been done as part of DOD’s preparation for the QDR, the time could have been spent modeling alternative force structures, which might have provided insights into the best-suited force.
DOD has not established formal oversight at a senior level to coordinate the overall model improvements, follow-on studies, and other preparations for the next QDR. Several offices in DOD are improving models and databases and are performing follow-on studies to the QDR and the National Defense Panel report on topics such as requirements for strategic lift, active/reserve force mix, operations in a chemical environment, and information technology. However, DOD has not issued guidance establishing which office will monitor these efforts or determined how the results of these efforts will be coordinated and integrated in the next QDR. Such oversight might help to ensure that the efforts are completed in time. DOD could also provide direction on issues such as the types of analyses to be performed, the associated data requirements, who will provide the analytical support, how lessons learned will be gathered and shared, and time lines for completing the activities needed to support the next QDR. DOD also may be able to enhance the value of the next QDR by examining options for changing the process DOD established for the 1997 QDR and modifying the review’s timing. We developed the following observations about potential improvements to the QDR process based on discussions with DOD officials and our review of documentation on how the QDR process worked. Although DOD officials modified the force structure slightly as a result of the QDR, these decisions were not based on the three major force assessments. The QDR report identifies three paths that DOD considered and that included varying levels of modernization and force structure sizes. However, some defense experts have criticized this framework as being too simplistic in that two of the options—such as the option to maintain the current force structure but forego DOD’s goal of increasing procurement to $60 billion per year—were not options that DOD would seriously consider.
Moreover, DOD’s force structure and modernization panels completed their analyses separately and did not model trade-offs between modernization and force structure. For example, DOD’s regional great power analysis modeled DOD’s planned force with various levels of modernization but did not examine whether a more modernized but smaller force would be effective in defeating potential aggressors. According to some defense experts, technologies such as stealth aircraft, precision munitions, and digitized forces may enable the United States to reduce force structure in the long term. DOD has several options for ensuring better integration of modernization and force structure decisions. DOD could maintain separate panels but provide guidance to ensure that the panels collaborate and that trade-offs between force structure and modernization are examined. Alternatively, DOD could establish one panel to analyze force structure and modernization issues. DOD officials expressed different views on the need to alter the timing of the defense strategy review. DOD began developing the strategy early in the QDR process and provided a draft of the strategy in January 1997 but did not finalize it until March 1997, when the force structure and modernization panels had completed much of their work. Several DOD officials, including those responsible for drafting the strategy and OSD officials who were responsible for leading the force assessments, did not perceive the lack of an approved strategy as a problem because the strategy was provided in draft to panel chairs. However, some service officials and panel members stated that the draft strategy was not widely disseminated and that the lack of a final strategy led to confusion, particularly since the Secretary of Defense changed during the QDR and the new Secretary could have made significant changes to the strategy. 
The 1997 QDR began after the 1996 presidential election and was performed by a returning administration—although a change in Secretary of Defense occurred during the early months of the QDR. However, if the next QDR occurs following the 2000 presidential election, DOD will have to conduct its analysis while undergoing a change in administration. This may further complicate DOD’s efforts to perform the QDR because of the large turnover of senior DOD officials that may occur. Many DOD officials we spoke to characterized the 6-month time frame for conducting the 1997 QDR as being extremely tight given the complex nature and large number of issues, even with relatively little turnover among senior personnel. Officials also cited the short time frame as a key factor that limited the number and types of alternatives assessed. Delaying the QDR from the first to the second year of the presidential term is an option that would allow more time for an administration to put its key senior people, including the Secretary of Defense, in place; develop a defense strategy; prepare for the QDR; and conduct appropriate analyses. Such a delay in starting the QDR might be useful in providing a new administration with sufficient time to conduct a comprehensive strategy review and have a good analytical basis for making difficult choices among competing priorities. Delaying the process for a year may have some disadvantages. Several OSD officials stated they opposed a delay because it would postpone the administration’s ability to impact the defense budget until well into a president’s term. The current timing would allow QDR decisions made in 2001 to impact the president’s fiscal year 2003 defense budget. A QDR that concludes in 2002 would affect the 2004 defense budget. Even if the review were delayed, a new administration could still make some changes in the 2003 budget through the planning, programming, and budgeting system.
However, a completed QDR may enable an administration to make more fundamental changes. Congress has not enacted a permanent requirement for an independent panel of experts to supplement DOD’s analysis of future defense requirements. However, work by a congressionally chartered independent panel, if conducted prior to the QDR, could be used to encourage DOD to consider a wider range of strategy, force structure, and modernization options. Conducting a fundamental reassessment of defense requirements, as envisioned by the QDR, is extremely challenging for DOD, given that its culture rewards consensus-building and often makes it difficult to gain support for alternatives that challenge traditional ways of doing business. As evidenced by the 1997 QDR force and modernization assessments, DOD spent most of its analytical effort confirming that its current forces and initiatives were adequate to meet future defense requirements and restricting its analysis to “salami-slice” alternatives. By preceding DOD’s own efforts, an independent panel similar to the National Defense Panel could provide DOD with alternatives to analyze during the QDR. DOD could add value to the next QDR by establishing formal oversight, improving its analytical tools, and making changes to the QDR’s structure and design. Establishing formal oversight would reinforce the importance of the QDR as an ongoing tool for assessing force structure and modernization requirements and help to identify and establish priorities for key preparation tasks. It could also provide an impetus for improving DOD’s analytical tools to evaluate requirements for theater wars, smaller-scale contingencies, and future warfare, including the potential impact of advanced technology and new concepts of operations. In addition, summarizing lessons learned from the 1997 QDR could enable DOD to develop options to make the process more effective in the future. 
The Secretary of Defense has endorsed the concept of the quadrennial review of defense needs. To enhance the value of the next QDR, we recommend that the Secretary of Defense assign responsibility for overall oversight and coordination of DOD preparation efforts. Preparation tasks should include identifying the analytical tools and data needed to support force structure and modernization analyses, monitoring the status and funding for efforts to upgrade DOD’s models, summarizing lessons learned from the 1997 QDR, and considering the need to change the structure and timing of the QDR process. If Congress chooses to establish another panel of experts to provide an independent review of defense needs, it may wish to require the panel to complete its work prior to the next QDR. This approach could provide DOD with a broader set of options to examine in its review. In written comments on a draft of this report, DOD concurred with our recommendation that the Secretary of Defense assign responsibility for overall oversight and coordination of DOD preparation efforts for the next QDR. DOD stated that it is identifying the analytic tools needed for the next QDR and is improving existing tools where shortcomings have been identified. It also stated that it is examining areas of U.S. defense strategy and associated military capabilities not fully explored by the QDR or that were raised by the National Defense Panel, in addition to commissioning studies of internal and external lessons learned from the 1997 QDR. Moreover, it concurred with our conclusion that there is no central authority to ensure that follow-up efforts are integrated and that centralization could improve QDR preparation efforts. DOD also agreed that any mandated panel similar to the National Defense Panel should precede the QDR. DOD did not concur with our characterization of the QDR process in some areas and with our recommendation to consider changing the timing of the QDR. 
First, DOD stated that our draft was overly concerned with the benefit of having the QDR’s panels report sequentially. For example, DOD noted that the draft strategy had been briefed early in the QDR to the force assessment and modernization panels and that they were told to base their assumptions on this draft. DOD further stated that if panel members were confused as to the final shape of the strategy, it should not be blamed on the QDR process. Second, DOD wrote that our draft placed undue emphasis on the force assessment and modernization panels acting as “stovepipes.” DOD stated that the QDR’s structure allowed panels to focus on a tractable set of issues and that the Integration Panel ensured that all the various panel reports were combined into a coherent set of options. Finally, DOD wrote that beginning the QDR process later in a presidential administration would force the Secretary of Defense to wait two years before submitting a budget that reflects an administration’s strategy, priorities, and program. We believe that our characterization of the QDR process does not overly stress the benefits of having panels report sequentially. We acknowledge that DOD officials primarily responsible for drafting the strategy and leading the force assessments believed that providing the draft strategy in January 1997 and the final strategy in March 1997 did not pose a problem for the panels. However, some panel members perceived that the lack of a final strategy earlier in the process led to confusion. We note that the 1997 QDR was conducted under favorable conditions in that many senior DOD officials were in place prior to the November presidential election to begin work on the strategy and that major elements of the strategy remained the same. 
We believe that significant concurrency between the strategy review and force structure and modernization assessments could be more problematic for the next QDR, which will be conducted by a new administration, particularly if senior officials decide on a new strategy that alters key force planning assumptions. Therefore, we believe that DOD should consider the need to finalize the strategy earlier in evaluating changes to the QDR process. In addition, while we agree that senior officials combined the work of the panels into broad, macro level alternatives, the panels themselves lacked a high degree of integration. For example, more collaboration between the regional great power force assessment and modernization analysis, possibly as a single panel, might overcome challenges to the timely sharing of information and would have permitted DOD to explore force structure versus modernization trade-offs. We acknowledge the benefit of breaking down a task as large as the QDR into discrete issue panels. If the overarching Integration Panel is the best means available for combining those panels’ reports into coherent options, it could benefit from collaboration occurring at the lowest possible levels to make its work easier. Finally, while we recognize DOD’s concerns regarding changing the timing of the QDR to later in an administration’s term, we continue to believe that the 1997 QDR faced challenges from its tight time frame, despite the benefits of a returning administration and speedy appointment of a new Secretary of Defense. The next QDR will be performed by a new administration. If the next QDR is delayed, it would allow the new administration to appoint its senior defense leadership, develop a defense strategy, prepare for the QDR, and conduct appropriate analyses. Our observation does not seek to limit a new administration’s flexibility in determining how and when to conduct the next QDR.
Rather, it attempts to give a new administration the benefit of more time to perform a more rigorous review before reaching conclusions that will shape the future of DOD and its budgetary priorities. | Pursuant to a congressional request, GAO reviewed whether: (1) the Quadrennial Defense Review's (QDR) force structure and modernization assessments examined alternatives to the planned force; and (2) opportunities exist to improve the structure and methodology of future QDRs. GAO did not evaluate the rationale for the Department of Defense's (DOD) proposed defense strategy. GAO noted that: (1) QDR did not examine alternatives that would provide greater assurance that it identified the force structure that is best suited to implement the defense strategy; (2) the QDR's force assessments built on DOD's Bottom-Up Review analysis by examining requirements for a broader range of military operations beyond major theater wars, and by analyzing the potential impact of some key assumptions; (3) only one of the three major force assessments modeled any force structure alternatives; (4) the assessment did not examine alternatives that involved targeted changes because DOD officials foresaw problems in obtaining service consensus and DOD's models are not sensitive enough to assess the effects of some types of force structure changes; (5) although some technologies consistent with Joint Vision 2010 were modeled, none of the assessments fully examined the potential effects of new technologies and war-fighting concepts on DOD's planned force structure; (6) DOD's modernization review examined some variations of the services' procurement plans but did not include a thorough, mission-oriented review of the mix of capabilities the United States will need to counter future threats; (7) DOD divided responsibility for analyzing major procurement programs and investment issues among 17 task forces; (8) this approach did not always provide a mission focus that examined trade-offs or 
facilitated a fundamental reassessment of modernization needs in light of emerging threats and technological advances; (9) the modernization and force assessment panels conducted most of their work independently and concurrently, which hampered their ability to explore linkages and trade-offs between force structure and modernization alternatives; (10) DOD can provide a more thorough review of U.S. defense needs in the next QDR by preparing early, improving its analytical tools, and considering changes to the structure and design of the QDR process; (11) DOD has not yet developed a formal process to prepare for and coordinate activities related to the next QDR; and (12) delaying the start of the next QDR until later in the next Presidential administration may also facilitate a more thorough review. |
Section 550 of the DHS appropriations act for fiscal year 2007 requires DHS to issue regulations establishing risk-based performance standards for the security of facilities that the Secretary determines to present high levels of security risk, among other things. The CFATS rule was published in April 2007, and appendix A to the rule, published in November 2007, listed 322 chemicals of interest and the screening threshold quantities for each. ISCD has direct responsibility for implementing DHS’s CFATS rule, including assessing potential risks and identifying high-risk chemical facilities, promoting effective security planning, and ensuring that final high-risk facilities meet applicable standards through site security plans approved by DHS. From fiscal years 2007 through 2012, DHS dedicated about $442 million to the CFATS program. During fiscal year 2012, ISCD was authorized 242 full-time-equivalent positions. ISCD uses a risk assessment approach to develop risk scores to assign chemical facilities to one of four final tiers. Facilities placed in one of these tiers (tier 1, 2, 3, or 4) are considered to be high risk, with tier 1 facilities considered to be the highest risk. According to an ISCD document that describes how ISCD develops its CFATS risk score, the risk score is intended to be derived from estimates of consequence (the adverse effects of a successful attack), threat (the likelihood of an attack), and vulnerability (the likelihood of a successful attack, given an attempt). ISCD’s risk assessment approach is composed of three models, each based on a particular security issue: (1) release, (2) theft or diversion, and (3) sabotage, depending on the type of risk associated with the 322 chemicals. Once ISCD estimates a risk score based on these models, it assigns the facility to a final tier. In July 2007, ISCD began reviewing information submitted by the owners and operators of approximately 40,000 facilities.
By January 2013, ISCD had designated about 4,400 of the 40,000 facilities as high risk and thereby covered by the CFATS rule. ISCD had assigned about 3,500 of those facilities to a final tier, of which about 90 percent were tiered because of the risk of theft or diversion. The remaining 10 percent were tiered because of the risk of release or the risk of sabotage. Over the last 2 years, ISCD has identified problems with the way the release chemicals model assigns chemical facilities to tiers and has taken or begun to take action to address those problems. In February 2011, ISCD found that some chemical facilities had been placed in an incorrect final tier because this model included incorrect data about the release of high-risk chemicals of interest. In June 2011, ISCD officials adjusted the model, which resulted in lowering the tier for about 250 facilities, about 100 of which were subsequently removed from the CFATS program. In October 2012, ISCD officials stated that they had uncovered another defect that led the model to exclude population density calculations for about 150 facilities in states or U.S. territories outside the continental United States, including Alaska, Hawaii, Puerto Rico, and Guam. In February 2013, ISCD officials said that they had made adjustments to the model to resolve this issue and do not expect any facility’s tier to change as a result. Our preliminary analysis indicates that the tiering approach ISCD uses to assess risk and assign facilities to final tiers does not consider all of the elements of risk associated with a terrorist attack involving certain chemicals. According to the NIPP, which, among other things, establishes the framework for managing risk among the nation’s critical infrastructure, risk is a function of three components—consequence, threat, and vulnerability—and a risk assessment approach must assess each component for every defined risk scenario.
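The NIPP's three-component risk function can be illustrated with a minimal sketch. The multiplicative combination and the tier cutoffs below are our assumptions for illustration only; ISCD's actual scoring formulas and tier thresholds are not described in this statement.

```python
# Illustrative only: the NIPP treats risk as a function of consequence,
# threat, and vulnerability. The multiplicative form and the tier cutoffs
# here are assumptions for the sketch, not ISCD's actual model.

def risk_score(consequence: float, threat: float, vulnerability: float) -> float:
    """Combine the three NIPP components (each on a 0-1 scale) into one score."""
    return consequence * threat * vulnerability

def assign_tier(score: float, cutoffs=(0.5, 0.25, 0.1, 0.02)):
    """Map a score to tier 1 (highest risk) through 4; None means not high risk."""
    for tier, cutoff in enumerate(cutoffs, start=1):
        if score >= cutoff:
            return tier
    return None

# Treating vulnerability as 1.0 for every facility -- as the testimony notes
# ISCD's approach effectively does -- leaves only consequence and threat:
print(assign_tier(risk_score(0.6, 0.9, 1.0)))  # → 1 (tier 1)
```

A sketch like this also makes the testimony's criticism concrete: with vulnerability fixed at 1.0 and no economic consequence term, two of the NIPP's components contribute nothing to the final tier.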
Furthermore, the CFATS rule calls for ISCD to review consequence, threat, and vulnerability information in determining a facility’s final tier. However, ISCD’s risk assessment approach does not fully consider all of the core criteria or components of a risk assessment, as specified by the NIPP, nor does it comport with parts of the CFATS rule. Consequence. The NIPP states that, at a minimum, consequences should focus on the two most fundamental components—human consequences and the most relevant direct economic consequences. The CFATS rule states that chemical facilities covered by the rule are those that present a high risk of significant adverse consequences for human life or health, or critical economic assets, among other things, if subjected to terrorist attack, compromise, infiltration, or exploitation. Our review of ISCD’s risk assessment approach and discussions with ISCD officials show that the approach is currently limited to focusing on one component of consequences—human casualties associated with a terrorist attack involving a chemical of interest—and does not consider consequences associated with economic criticality. ISCD officials said that the economic consequences part of their risk-tiering approach will require additional work before it is ready to be introduced. In September 2012, ISCD officials stated that they had engaged Sandia National Laboratories to examine how ISCD could gather needed information and determine the risk associated with economic impact, but this effort is in the initial stages, with an expected completion date of June 2014. ISCD officials added that they are uncertain about how Sandia’s efforts will affect their risk assessment approach. Threat. ISCD’s risk assessment approach is also not consistent with the NIPP because it does not consider threat for the majority of regulated facilities.
According to the NIPP, risk assessments should estimate threat as the likelihood that the adversary would attempt a given attack method against the target. The CFATS rule requires that, as part of assessing site vulnerability, facilities conduct a threat assessment, which is to include a description of the internal, external, and internally assisted threats facing the facility, and that ISCD review site vulnerability as part of the final determination of a facility’s tier. Our review of the models and discussions with ISCD officials show that (1) ISCD is inconsistent in how it assesses threat using the different models because, while it considers threat for the 10 percent of facilities tiered because of the risk of release or sabotage, it does not consider threat for the approximately 90 percent of facilities that are tiered because of the risk of theft or diversion; and (2) ISCD does not use current threat data for the 10 percent of facilities tiered because of the risk of release or sabotage. ISCD did not have documentation to show why threat had not been factored into the formula for approximately 90 percent of facilities tiered because of the risk of theft or diversion. However, ISCD officials pointed out that the cost of adding a threat analysis for these facilities might outweigh the benefits of doing so. ISCD officials said that, given the complexity of assessing threat for theft or diversion, they are considering reexamining their approach. ISCD officials also said that they are exploring how they can use more current threat data for the 10 percent of facilities tiered because of the risk of release or sabotage. Vulnerability. ISCD’s risk assessment approach is also not consistent with the NIPP because it does not consider vulnerability when developing risk scores. According to the NIPP, risk assessments should identify vulnerabilities, describe all protective measures, and estimate the likelihood of an adversary’s success for each attack scenario.
Similar to the NIPP, the CFATS rule calls for ISCD to review facilities’ security vulnerability assessments as part of its risk-based tiering process. This assessment is to include the identification of potential security vulnerabilities and the identification of existing countermeasures and their level of effectiveness in both reducing identified vulnerabilities and meeting the aforementioned risk-based performance standards. Our review of the risk assessment approach and discussions with ISCD officials show that the security vulnerability assessment contains numerous questions aimed at assessing vulnerability and security measures in place, but the information is not used to assign facilities to risk-based tiers. ISCD officials said that they do not use the information because it is “self-reported” by facilities and they have observed that it tends to overstate or understate vulnerability. As a result, ISCD’s risk assessment approach treats every facility as equally vulnerable to a terrorist attack regardless of location and on-site security. ISCD officials told us that they consider facility vulnerability, but at the latter stages of the CFATS regulatory process, particularly with regard to the development and approval of the facility site security plan. Our preliminary work indicates that ISCD has begun to take some actions to examine how its risk assessment approach can be enhanced. For example, in addition to engaging Sandia National Laboratories to develop the framework for assessing economic consequences previously discussed, ISCD has commissioned a panel of subject matter experts to examine the strengths and weaknesses of its current risk assessment approach. ISCD officials stated that the panel’s work is intended to focus on whether ISCD is heading in the right direction, and they view it as a preliminary assessment.
According to ISCD’s task execution plan, the panel is to provide actionable recommendations on potential improvements to the CFATS models, but the panel is not to develop alternative CFATS models or formally validate or verify the current CFATS risk assessment approach—steps that would analyze the structure of the models and determine whether they calculate values correctly. In February 2013, after the panel was convened, ISCD officials stated that they provided information to the panel about various issues that they might want to consider, among them (1) how to address vulnerability in the models, given ISCD’s concerns about data quality, and (2) what variables, if any, would be appropriate for threats associated with theft or diversion, as discussed earlier. We believe that ISCD is moving in the right direction by commissioning the panel to identify the strengths and weaknesses of its risk assessment approach, and the results of the panel’s work could help ISCD identify issues for further review and recommendations for improvement. Given the critical nature of ISCD’s risk assessment approach in laying the foundation for further regulatory steps in improving facility security—such as the development and approval of facility site security plans—it is important that its approach for assigning facilities to tiers is complete within the NIPP risk management framework and the CFATS rule. Once ISCD develops a more complete approach for assessing risk, it would then be better positioned to commission an independent peer review. In our past work, we reported that peer reviews are a best practice in risk management and that independent expert review panels can provide objective reviews of complex issues. Furthermore, the National Research Council of the National Academies has recommended that DHS improve its risk analyses for infrastructure protection by validating the models and submitting them to external peer review.
As we have previously reported, independent peer reviews cannot ensure the success of a risk assessment approach, but they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process. We will continue to monitor and assess ISCD’s efforts to examine its risk assessment approach through our ongoing work and consider any recommendations needed to address these issues. Our preliminary work shows that ISCD has made various revisions to its security plan review process to address concerns expressed by ISCD managers about slow review times. Under the CFATS rule, once a facility is assigned a final tier, it is to submit a site security plan to describe security measures to be taken and how it plans to address applicable risk-based performance standards. The November 2011 internal memorandum that discussed various challenges facing the CFATS program noted that ISCD had not approved any security plans and stated that the process was overly complicated and created bottlenecks. The memorandum stated that revising the process was a top program priority because the initial security plan reviews were conducted using the risk-based standards as prescriptive criteria rather than as standards for developing an overall facility security strategy. According to ISCD officials, the first revision was called the interim review process, whereby individual reviewers were to consider how layers of security measures met the intent of each of the 18 standards. Under the interim review process, ISCD assigned portions of each facility’s plan to security specialists (e.g., cyber, chemical, and physical, among others) who reviewed plans in a sequential, linear fashion. Using this approach, plans were reviewed by different specialists at different times, culminating in a quality review.
ISCD officials told us that the interim review process was unsustainable, labor-intensive, and time-consuming, particularly when individual reviewers were looking at pieces of thousands of plans that funneled to one quality reviewer. In July 2012, ISCD stopped using the interim review process and began using the current revised process, which entails using contractors, teams of ISCD employees (physical, cyber, chemical, and policy specialists), and ISCD field office inspectors, who are to review plans simultaneously. ISCD officials said that they believe the revised process for reviewing security plans is a “quantum leap” forward, but they did not capture data that would enable them to measure how, if at all, the revised process is more efficient (i.e., less time-consuming) than the former processes. They said that, under the revised process, among other things, field inspectors are to work with facilities with the intent of resolving any deficiencies ISCD identifies in their site security plans. They added that this contrasts with past practices whereby ISCD would review the entire plan even when problems were identified early and not return the plan to the facility until the review was complete, resulting in longer reviews. Moving forward, ISCD officials said they intend to measure the time it takes to complete parts of the revised process and have recently implemented a plan to measure various aspects of the process. Specifically, ISCD’s Annual Operating Plan, published in December 2012, lists 63 performance measures designed to look at various aspects of the site security plan review process—from the point the plans are received by ISCD to the point where plans are reviewed and approved. Collecting data to measure performance about various aspects of the security plan review process is a step in the right direction, but it may take time before the process has matured to the point where ISCD is able to establish baselines and assess its progress. 
ISCD has taken action to improve its security plan review process, but based on our preliminary analysis, it could take years to review the plans of thousands of facilities that have already been assigned a final tier. ISCD hopes to address this by examining how it can further accelerate the review process. According to ISCD officials, between July 2012 and December 2012, ISCD had approved 18 security plans, with conditions. ISCD officials told us that, moving forward, they anticipate that the revised security plan review process could enable ISCD to approve security plans at a rate of about 30 to 40 a month. Using ISCD’s estimated approval rate of 30 to 40 plans a month, our preliminary analysis indicates that it could take anywhere from 7 to 9 years to complete reviews and approvals for the approximately 3,120 plans submitted by final-tiered facilities that ISCD has not yet begun to review. Figure 1 shows our estimate of the number of years it could take to approve all of the security plans for the approximately 3,120 facilities that, as of January 2013, had been final-tiered, assuming an approval rate of 30 to 40 plans a month. It is important to note that our 7- to 9-year preliminary estimate does not include other activities central to the CFATS mission, either related to or aside from the security plan review process. In addition, our estimate does not include developing and implementing the compliance inspection process, which occurs after security plans are approved and is intended to ensure that facilities that are covered by the CFATS rule are compliant with the rule, within the context of the 18 performance standards. According to ISCD officials, they are actively exploring ways to expedite the speed with which the backlog of security plans could be cleared, such as potentially leveraging alternative security programs, reprioritizing resources, and streamlining the inspection and review requirements.
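The 7- to 9-year range follows directly from the backlog and the estimated approval rate; a quick check of the arithmetic (rounding up to whole years is our assumption about how the bounds were derived):

```python
import math

backlog = 3120          # final-tiered plans awaiting review, as of January 2013
rates = (40, 30)        # estimated approvals per month: best case, worst case

for rate in rates:
    months = backlog / rate
    years = math.ceil(months / 12)   # assumption: round up to whole years
    print(f"{rate} plans/month -> {months:.0f} months (~{years} years)")
# 40 plans/month -> 78 months (~7 years)
# 30 plans/month -> 104 months (~9 years)
```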
ISCD officials added that they plan to complete authorization inspections and approve security plans for tier 1 facilities by the first quarter of fiscal year 2014 and for tier 2 facilities by the third quarter of fiscal year 2014. Our preliminary work shows that ISCD’s efforts to communicate and work with owners and operators to help them enhance security at their facilities have increased since the CFATS program’s inception in 2007, particularly in recent years. Since 2007, ISCD has taken various actions to communicate with facility owners and operators and various stakeholders—including officials representing state and local governments, private industry, and trade associations—to increase awareness about CFATS. From fiscal years 2007 through 2009, most of ISCD’s communication efforts entailed outreach with owners and operators and stakeholders through presentations to familiarize them with CFATS; field visits with federal, state, and local government and private industry officials; and compliance assistance visits at facilities that are intended to assist facilities with compliance or technical issues. By 2010 and in subsequent years, ISCD had revised its outreach efforts to focus on authorization inspections, during which inspectors visited facilities to verify that the information in their security plans was accurate and complete, as well as on other outreach activities, including stakeholder outreach. However, our analysis of industry trade associations’ responses to questions we sent them about the program shows mixed views about ISCD’s efforts to communicate with owners and operators through ISCD outreach efforts. For example, 3 of the 11 trade associations that responded to our questions indicated that ISCD’s outreach program was effective in general, 3 reported that the effectiveness of ISCD’s outreach was mixed, 4 reported that ISCD’s outreach was not effective, and 1 respondent reported that he did not know.
Our preliminary results indicate that ISCD seeks informal feedback on its outreach efforts but does not systematically solicit feedback on those activities and does not have a mechanism to measure their effectiveness. Trade association officials reported that, in general, ISCD seeks informal feedback on its outreach efforts and that members provide feedback to ISCD. Association officials further reported that, among other things, ISCD has encouraged association members to contact local ISCD inspectors and has hosted roundtable discussions and meetings where members of the regulated community provide feedback, suggest improvements, or make proposals regarding aspects of the CFATS program, such as site security plans, alternative security programs, and gasoline storage site risks. Furthermore, according to ISCD officials, while feedback is solicited from the regulated community generally on an informal basis, inspectors and other staff involved in ISCD’s outreach activities are not required to solicit feedback during meetings, presentations, and assistance visits, and inspectors are also not required to follow up with the facilities after compliance assistance visits to obtain their views on the effectiveness of the outreach. ISCD, as part of its annual operating plan, has established a priority for fiscal year 2013 to develop a strategic communications plan intended to address external communication needs, including industry outreach. We have previously reported on the benefits of soliciting systematic feedback. Specifically, our prior work on customer service efforts in the government indicates that systematic feedback from those receiving services can provide helpful information as to the kind and quality of services they want and their level of satisfaction with existing services.
We will continue to monitor and assess ISCD’s efforts to develop a systematic way to solicit feedback through our ongoing work and consider any recommendations needed to address this issue. Chairman Shimkus, Ranking Member Tonko, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement please contact Stephen L. Caldwell, at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions included John F. Mortin, Assistant Director; Chuck Bausell; Jose Cardenas; Michele Fejfar; Jeff Jensen; Tracey King; Marvin McGill; Jessica Orr; and Ellen Wolfe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Facilities that produce, store, or use hazardous chemicals could be of interest to terrorists intent on using toxic chemicals to inflict mass casualties in the United States. As required by statute, DHS issued regulations that establish standards for the security of high-risk chemical facilities. DHS established the CFATS program in 2007 to assess the risk posed by these facilities and inspect them to ensure compliance with DHS standards. ISCD, which manages the program, places high-risk facilities in risk-based tiers and is to conduct inspections after it approves facility security plans. A November 2011 ISCD internal memorandum raised concerns about ISCD's ability to fulfill its mission. 
This statement is based on GAO's ongoing work conducted for several congressional committees and subcommittees and provides preliminary observations regarding the extent to which DHS has (1) assigned chemical facilities to tiers and assessed its approach for doing so, (2) revised its process to review facility security plans, and (3) communicated and worked with owners and operators to improve security. To conduct this ongoing work, GAO reviewed DHS reports and plans on risk assessments, security plan reviews, and facility outreach and interviewed DHS officials. GAO received input from 11 trade associations representing chemical facilities about ISCD outreach. The results of this input are not generalizable but provide insights about DHS outreach efforts. Since 2007, the Department of Homeland Security's (DHS) Infrastructure Security Compliance Division (ISCD) has assigned about 3,500 high-risk chemical facilities to risk-based tiers under its Chemical Facilities Anti-Terrorism Standards (CFATS) program, but it has not fully assessed its approach for doing so. The approach ISCD used to assess risk and make decisions to place facilities in final tiers does not consider all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. For example, the risk assessment approach is based primarily on consequences arising from human casualties, but does not consider economic consequences, as called for by the National Infrastructure Protection Plan (NIPP) and the CFATS regulation, nor does it include vulnerability, consistent with the NIPP. ISCD has begun to take some actions to examine how its risk assessment approach can be enhanced.
Specifically, ISCD has, among other things, engaged Sandia National Laboratories to examine how economic consequences can be incorporated into ISCD's risk assessment approach and commissioned a panel of experts to assess the current approach, identify strengths and weaknesses, and recommend improvements. Given the critical nature of ISCD's risk assessment approach in laying the foundation for further regulatory steps in improving facility security, it is important that its approach for assigning facilities to tiers is complete within the NIPP risk management framework and the CFATS regulation. DHS's ISCD has revised its process for reviewing facilities' site security plans—which are to be approved by ISCD before it performs compliance inspections—but it did not track data on the prior process and so is unable to measure any improvements. The past process was considered by ISCD to be difficult to implement and caused bottlenecks in approving plans. ISCD views its revised process as a significant improvement because, among other things, teams of experts review parts of the plans simultaneously rather than sequentially, as occurred in the past. Moving forward, ISCD intends to measure the time it takes to complete reviews, but will not be able to do so until the process matures. Using ISCD's expected plan approval rate of 30 to 40 plans a month, GAO estimated that it could take another 7 to 9 years before ISCD is able to complete reviews on the approximately 3,120 plans in its queue. ISCD officials said that they are exploring ways to expedite the process, such as reprioritizing resources. DHS's ISCD has also taken various actions to work with facility owners and operators, including increasing the number of visits to facilities to discuss enhancing security plans, but trade associations that responded to GAO's query had mixed views on the effectiveness of ISCD's outreach.
ISCD solicits informal feedback from facility owners and operators on its efforts to communicate and work with them, but it does not have an approach for obtaining systematic feedback on its outreach activities. Prior GAO work on customer service efforts in the government indicates that systematic feedback from those receiving services can provide helpful information as to the kind and quality of services they want and their level of satisfaction with existing services. GAO will continue to assess ISCD's efforts in these areas and consider any recommendations needed to address these issues. GAO expects to issue a report on its results in April 2013. |
The arbitration of disputes first occurred at NYSE in the late nineteenth century but eventually became the practice within the securities industry in general. Arbitration was used to settle disputes over employee contracts, and in 1991 the U.S. Supreme Court ruled that an age discrimination claim brought forth by a securities industry employee could be subject to mandatory arbitration. Subsequent court decisions permitted the use of mandatory arbitration for resolving other employment discrimination disputes, including sexual harassment. Proponents of mandatory arbitration believe it is an efficient, cost-effective way to resolve conflicts between employers and their employees. Opponents of mandatory arbitration believe that it puts employees at a disadvantage. They argue that discovery, the process by which parties exchange documents and other information relevant to their case, is limited, hearings take place outside of public scrutiny, and arbitrators favor employers, who are more likely to be “repeat users” than employees. SROs include NYSE, which operates and regulates its market, as well as NASD, a private-sector provider of financial regulatory and dispute resolution services. Their responsibilities include overseeing the arbitration of claims brought in the securities industry by customers, firms, and employees as required by the Securities Exchange Act of 1934 (the Exchange Act). In 2000, NASD established a separate subsidiary to administer its arbitration program. The subsidiary is headquartered in Washington, D.C., and New York City, but also maintains staff in five regional offices. NYSE administers fewer arbitration cases than NASD, and its arbitration program, which is administered by its Department of Arbitration in New York, is much smaller. NASD’s subsidiary operates with a staff of about 200, while NYSE maintains a staff of approximately 18. In addition, NASD currently has approximately 7,000 arbitrators on its roster, while NYSE has 1,905.
Arbitrators play a key role in resolving disputes brought in the securities industry, and their performance has a direct bearing on the fairness of a hearing. Like judges, they oversee the administration of proceedings, including determining the number of hearing sessions a case requires and what evidence can be admitted. Unlike judges, arbitrators are not required to base their decisions on legal precedent or to provide any reasoning for their decisions. In addition, their decisions—unlike those rendered in court—can be appealed only on limited grounds. SEC is responsible for regulating securities market participants, including SROs such as NASD and NYSE. In addition to overseeing SROs through its inspections, SEC approves the rules they use to administer their arbitration programs to ensure they comply with the Exchange Act and other securities laws and rules. When SROs propose new rules or change existing rules, they are required to file them with SEC for approval. SEC then provides interested parties an opportunity to comment on proposed rules or rule changes. In general, SEC is to approve certain new or amended rules within 90 days after they are published or institute proceedings to determine whether they should be disapproved. In most employment disputes, arbitration is mandatory, although for discrimination cases NYSE rules strictly limit its use and NASD has instituted additional requirements. For both customer and employment disputes, both SROs require that all arbitrators have certain qualifications in order to be on their rosters of available arbitrators. However, neither SRO verifies the qualifications of all the arbitrators on its roster. Both SROs have procedures designed to ensure that the arbitrators selected to hear cases do not have conflicts and have procedures for evaluating arbitrator performance. Yet arbitrators who hear cases at both SROs may not be receiving evaluations on a routine basis. 
Prior to 1999, both NASD and NYSE rules required the mandatory arbitration of all employment-related disputes, including discrimination claims. In the 1990s, as discrimination claims filed at NASD rose, some Members of Congress challenged the use of mandatory arbitration for discrimination disputes. In 1997, the Equal Employment Opportunity Commission (EEOC), which is responsible for enforcing the nation’s employment discrimination laws, published a policy statement opposing the use of mandatory arbitration agreements for these disputes. Opposition to mandatory arbitration for these claims stemmed from concerns that arbitration eliminated the role the courts played in deterring discrimination and protecting employees. In addition, others believed that many arbitrators were unfamiliar with antidiscrimination laws and, therefore, could not provide a fair hearing on these claims. NASD and NYSE took different approaches in changing their rules to address these concerns. NYSE followed EEOC’s recommendation and arbitrates discrimination claims only when all parties agree to arbitration after the dispute occurs. NASD, on the other hand, no longer requires that employees arbitrate employment discrimination disputes, but will arbitrate these disputes based on agreements employees have made before or after the dispute occurs. The net result is that NASD will administer arbitration cases that include discrimination claims if the parties have entered into an agreement to do so, including policies employees sign as a condition of employment. 
According to NASD, in conjunction with this rule change, it assembled a working group to consider recommendations contained in a document known as “A Due Process Protocol for Mediation and Arbitration of Statutory Disputes Arising out of the Employment Relationship.” This Due Process Protocol was developed in 1995 by a committee of representatives from a range of organizations to provide arbitration procedures for statutory employment claims. Following its review of this protocol, NASD introduced additional requirements for these types of claims. Changes ranged from setting qualifications for arbitrators who chair arbitrator panels to specifying how arbitrators document their decisions. The arbitrator chairing a discrimination case at NASD must hold a law degree, have 10 years of legal experience, have substantial familiarity with employment law, and must not have primarily represented employers or employees in the last 5 years. In addition to special chair qualifications, all the arbitrators who hear cases with discrimination claims must also be classified as “public”—that is, individuals who are not affiliated with the securities industry either professionally or through their family relationships. For employment discrimination claims of $100,000 or less, a single public arbitrator is appointed, and for claims greater than this amount a panel of three public arbitrators is selected. In disputes subject to arbitration that arise out of the employment or termination of employment of an associated person, and that relate exclusively to disputes involving employment contracts, promissory notes, or receipt of commissions, a single “nonpublic” arbitrator—that is, someone affiliated with the securities industry—may hear only nondiscrimination claims of $50,000 or less. In similar cases with claims greater than $50,000, a panel composed of three nonpublic arbitrators is appointed. 
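The panel-composition rules described above amount to a simple decision table, which can be sketched as follows. This is an illustrative reading of the rules as summarized here, not NASD's rule text; the function name and types are our own, and the sketch resolves the overlapping $50,000 boundary in the text in favor of the single arbitrator.

```python
def panel_for_claim(discrimination: bool, amount: float) -> tuple[int, str]:
    """Illustrative reading of the NASD panel rules summarized above.

    Returns (number of arbitrators, required classification). "Public"
    arbitrators have no securities industry affiliation, professionally
    or through family; "nonpublic" arbitrators are industry-affiliated.
    """
    if discrimination:
        # Discrimination claims are always heard by public arbitrators:
        # one for claims of $100,000 or less, a panel of three above that.
        return (1, "public") if amount <= 100_000 else (3, "public")
    # Employment-contract, promissory-note, or commission disputes go to
    # nonpublic arbitrators: one for smaller claims, three otherwise.
    return (1, "nonpublic") if amount <= 50_000 else (3, "nonpublic")
```

Under this reading, for example, a $250,000 discrimination claim would draw a panel of three public arbitrators, while a $40,000 promissory-note dispute would be heard by a single nonpublic arbitrator.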
Currently, arbitrator chairs in cases without discrimination claims need the same qualifications as any arbitrator. At NYSE, in all employment disputes the employee is entitled, at his or her option, to a panel of three arbitrators, and a majority of the arbitrators cannot be from the industry unless the employee requests it. NASD rules, adopted in 2000, also made two changes to procedures concerning arbitrator decisions in cases with employment discrimination claims. First, the rules specifically state that arbitrators can award “reasonable” attorney’s fees for discrimination claims. This change also creates an incentive for attorneys to take discrimination cases because it provides greater assurance that they will be compensated for their work if they are successful. Second, NASD’s rule change requires arbitrators to document the disposition of discrimination claims, something not required for other claims. While this rule still does not require arbitrators to explain their decisions, it requires arbitrators to specify for the parties how they ruled on any statutory discrimination claim. Both SROs require that all applicants for the arbitrator roster provide information on their affiliation with the securities industry, have 5 years of work experience, supply two letters of recommendation, and complete training in basic arbitration procedures. Recommendation letters must include particular information about the person writing the letter and the prospective arbitrator, as well as an attestation as to the character and fitness of the nominee. NASD also requires that applicants take a multiple-choice examination and receive a passing score of at least 80 percent. (See fig. 1.) After receiving arbitrator applications from applicants who work or have worked in the securities industry, the SROs check the Central Registration Depository (CRD), a computerized database that contains the educational, work, and disciplinary history of persons currently or formerly registered in the securities industry. 
The CRD checks, therefore, cover only arbitrators classified as nonpublic. The SROs currently do not verify information from arbitrator applicants who have not been employed in the securities industry, but NASD is proposing a rule change that would require the verification of background information on all new arbitrators. NASD reported that verifying the background information on all new arbitrators would enhance the reputation of its arbitration program. If SEC approves its rule change, NASD will use an independent firm to conduct the background checks and will pass the cost of this process—expected to be between $60 and $85—on to the applicant. NYSE did not report any plans to change its procedures at this time. At NASD, once arbitrators’ applications are approved, they must take a half-day introductory training course, be evaluated by the trainer, and pass a 25-question multiple-choice examination on arbitration procedures. Once they pass the examination and the trainer’s evaluation, they are included on the NASD arbitrator roster. At NYSE, on the other hand, once an application is reviewed and approved by staff, the applicant may arbitrate any case after participating in one training course on arbitration procedures and conduct issues. Ongoing training at both SROs is limited. NYSE requires that arbitrators continue to attend at least one training course every 4 years. NASD does not have such a requirement but does offer chairperson training for those arbitrators wanting to chair cases. One SEC official raised concerns about mandating ongoing training for arbitrators, arguing that it may discourage the most experienced arbitrators from serving. Both SROs, recognizing that arbitrators are one of the key factors in ensuring a fair and efficient process, have developed procedures to help ensure that the selection of arbitrators for a case is unbiased. 
Prior to 1998, NASD staff selected arbitrators based on the issues in the case and the expertise the arbitrators held. In 1996, a NASD task force, organized to review the securities arbitration process, reported that claimants and their representatives were concerned that staff could be biased in selecting arbitrators. To address this concern, NASD changed how arbitrators were selected. Since 1998, NASD has allowed both parties involved in a dispute to choose the arbitrators, which limited NASD staff involvement in the selection process. NASD provides parties with a computer-generated list of up to 15 arbitrators with profiles for each arbitrator. An arbitrator’s profile includes a paragraph on the arbitrator’s background, a summary of the arbitrator’s education and work history, the arbitrator’s experience, the arbitrator’s disclosure and conflict information, and a list of all the publicly available award decisions that the arbitrator has rendered. Each party may peremptorily strike any arbitrator from the list and then rank the remaining arbitrators in order of preference. If the parties do not mutually agree on an acceptable number of arbitrators after striking and ranking, the list is extended by the computer and the parties are assigned the next available arbitrator(s) on the computerized roster. While this process reduces the potential for staff bias, some arbitrators have raised concerns that a computer-generated list may not contain arbitrators with substantial experience. In 2000, NYSE also began giving parties three options for selecting arbitrators: (1) choosing randomly from a list drawn from all available arbitrators; (2) choosing from a list the staff compiles; or (3) having NYSE staff attorneys select the arbitrators—the only procedure used prior to 2000. If all parties cannot agree on one of these options, staff attorneys determine who will arbitrate. 
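The strike-and-rank procedure NASD uses can be sketched in a few lines. This is an illustrative model of the process as described above, not NASD's actual software; the function name, the data shapes, and the choice to order mutually acceptable names by combined rank are our assumptions.

```python
def select_panel(candidates, strikes_a, ranks_a, strikes_b, ranks_b,
                 panel_size, roster):
    """Illustrative sketch of list selection: each party strikes and ranks,
    and any shortfall is filled from the next names on the roster."""
    # Keep only arbitrators struck by neither party.
    acceptable = [c for c in candidates
                  if c not in strikes_a and c not in strikes_b]
    # Order mutually acceptable arbitrators by combined preference
    # (lower rank numbers are better; unranked names sort last).
    worst = len(candidates) + 1
    acceptable.sort(key=lambda c: ranks_a.get(c, worst) + ranks_b.get(c, worst))
    panel = acceptable[:panel_size]
    # If too many names were struck, extend the list with the next
    # available arbitrators from the computerized roster.
    extras = (c for c in roster if c not in candidates)
    while len(panel) < panel_size:
        panel.append(next(extras))
    return panel
```

For instance, if each party strikes several of five listed arbitrators and only one name survives, the remaining seats on a three-member panel are filled from the roster, mirroring the computer-extended list described above.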
According to NYSE, staff selection has remained the most common method for selecting arbitrators, used in about 85 percent of the cases. Because this method is also the default when parties cannot agree, it is not possible to determine how often it was actually chosen by the parties rather than applied by default. At both NASD and NYSE, arbitrators selected to serve on cases are asked to review the case and determine whether they have any possible conflicts of interest. In addition, arbitrators must update their profile, which includes information on their employment history and affiliation with the securities industry. Both NYSE and NASD will remove arbitrators from their rosters if they misstate or fail to disclose information concerning conflicts of interest. Each SRO has developed three types of evaluations for arbitrators: (1) party evaluations, completed by either party or their attorneys; (2) peer evaluations, completed by the other arbitrators who hear the case; and (3) staff evaluations. Both SROs summarize evaluation results and enter them into a centralized arbitrator database. According to NASD officials, staff are required to summarize and enter only negative comments on an arbitrator, although SEC staff noted that in practice it also often sees positive comments from NASD staff recorded in the files. NYSE officials, on the other hand, reported recording a complete summary of the evaluations. NASD conducts quarterly audits that check whether staff members are consistently entering information in the centralized database and documenting actions taken concerning any evaluations. In addition, the audits review how complaint letters have been recorded, reviewed, and resolved. Both SROs reported that it has been difficult for them to get parties to return evaluations. 
Yet NYSE reported that response rates have increased since it began requiring that arbitrator chairs encourage parties to complete the evaluations and reiterate that the evaluations are confidential and will not affect the case outcome. NYSE said that peers are very responsive with evaluations. NYSE said it requires that staff observe new arbitrators during their first hearing at NYSE and that it sought to evaluate, at least once a year, all arbitrators who serve on a case that goes to a hearing. Although NYSE said that it had fulfilled this requirement in 2002, NYSE could not provide evaluation data showing that arbitrators had been observed. NASD could not report how often staff evaluate arbitrators. Officials from both SROs said that if no information is received about an arbitrator on a case, they assume the arbitrator performed adequately. To gain a better understanding of how often arbitrators were evaluated, we reviewed the records of 124 of the 494 arbitrators at NASD who had heard discrimination claims and/or other employment claims between January 2001 and June 2002. On the basis of this sample, we estimate that about 45 percent of arbitrators who heard cases during this time had received some type of evaluation and that, of those, only about 2 percent received all three types of evaluations—peer, party, and staff. (See fig. 2 for a breakdown of the types of evaluations arbitrators received.) Although NASD supplements its evaluations by rating arbitrators on a quarterly basis, our review showed that ratings are often based on little or no information. Every quarter NASD rates those arbitrators who have been active during that time, using a 3-point scale, with 1 being the lowest and 3 being the highest. Staff base the rating on evaluations and complaints received that quarter and any notes recorded during that time frame in the arbitrator database. 
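Sample-based estimates like the one above (about 45 percent of 494 arbitrators evaluated, based on a simple random sample of 124) can be reproduced with a standard proportion estimate and a finite population correction. The sketch below is illustrative only: the count of 56 evaluated arbitrators in the sample is an assumed figure chosen to match the reported 45 percent, not a number taken from the report.

```python
import math

def estimate_proportion(successes, n, population, z=1.96):
    """Point estimate and approximate 95 percent confidence interval for a
    population proportion from a simple random sample of size n, applying
    a finite population correction since the sample is a sizable share of
    the population."""
    p = successes / n
    fpc = (population - n) / (population - 1)  # finite population correction
    se = math.sqrt(p * (1 - p) / n * fpc)
    return p, (p - z * se, p + z * se)

# Illustrative: assume 56 of the 124 sampled arbitrators (from a roster
# of 494) had at least one evaluation on file.
p, (low, high) = estimate_proportion(56, 124, 494)  # p is about 0.45
```

Because 124 of 494 is about a quarter of the population, the correction meaningfully narrows the interval relative to an infinite-population formula, which is why such a sample can support statements like "about 45 percent" with a modest margin of error.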
In general, NASD reported that any arbitrator who did not have any evaluations during the quarter is likely to be rated adequate (“2”). We estimate that the majority of the arbitrators who were rated received an adequate rating of 2, whether or not they received any evaluations during this time, and that 57 percent of arbitrators with a 2 rating had not received any evaluations during this time frame. (See fig. 3.) Some arbitrators without evaluations during this time frame were also rated excellent, which could be a carryover of the rating from the prior quarter. Both NASD and NYSE have mechanisms in place to address poor performance by arbitrators. If NYSE or NASD receives either a poor arbitrator evaluation or a complaint about an arbitrator on a case, staff will take steps to respond. For example, the staff member assigned to the case may be asked to corroborate the complaint or to consult the other arbitrators assigned to the case to see if they support the allegation. A staff member who confirms the complaint may then speak to the arbitrator and suggest how he or she could improve his or her behavior. If the complaint suggests no corrective action is possible, both SROs reported that the arbitrator would be removed from the active roster immediately. All complaints are recorded in the arbitrator database, and both SROs reported that staff record how the complaint will be resolved. In reviewing the records of NASD arbitrators, we found that staff did not always document how they responded to poor evaluations and complaints. We estimate that 10 percent of all 494 NASD arbitrators who heard cases between January 2001 and June 2002 received some kind of complaint, whether from a staff member, a party, or another arbitrator. In our sample, 6 of the 16 arbitrators who received complaints were permanently dropped from NASD’s arbitrator list and 1 was temporarily made unavailable pending further review. 
One arbitrator, who had been permanently dropped in 2001, appeared to have complaints going back to 1993, yet the notes showed that no changes had been made to the adequate rating of 2. For another permanently dropped arbitrator, staff noted that they were concerned that no negative comments were recorded in the computer file even though other staff and arbitrators had complained about this arbitrator’s conduct. Of the 9 remaining arbitrators, information provided by NASD indicated that staff had followed up on the complaints raised for 5. Of the 1,546 employment cases decided by arbitrators at NASD and NYSE over the last 10 years, 261 (17 percent) included at least 1 discrimination claim. Cases with discrimination claims required more hearing sessions and took longer to complete than those with no discrimination claims. At the same time, the compensatory damages claimed in all cases were generally over $100,000, with claimed amounts generally higher at NYSE than at NASD. In over half of all employment cases, employees won some level of monetary compensation, although in cases with discrimination claims employees were generally less likely to win. In most cases, when employees won they received less than half of the compensatory damages they claimed, with over 50 percent of the awards over the last 10 years being $50,000 or less. When compensatory damages were awarded in cases involving discrimination, however, they tended to be higher than compensatory damages awarded in other employment cases, with just over 60 percent of discrimination cases receiving more than $50,000. Appendix I describes the reliability and limitations of these data. Employment cases arbitrated at NASD and NYSE can contain 1 or more claims, some of which might involve discrimination. Of all 1,546 employment cases heard at NASD and NYSE over the last 10 years (1,289 at NASD and 257 at NYSE), 261 (17 percent) included at least 1 type of discrimination claim. 
NASD arbitrated 202 of the cases that involved discrimination allegations; NYSE arbitrated the remaining 59. Given that some cases involved more than 1 type of discrimination claim, these 261 cases included a total of 324 discrimination claims. As shown in table 1, the majority of these 324 discrimination claims were either age-based (33 percent) or sex-based (32 percent). Over the last 10 years, the number of cases with discrimination claims has generally decreased at NYSE. In more recent years, this has also occurred at NASD, although prior to 2000 the number of cases at NASD involving discrimination fluctuated. (See fig. 4.) NASD and NYSE officials reported that the rule changes in 1999, which altered whether and how discrimination cases are arbitrated, might have reduced the arbitration of these types of cases. Over the last 10 years, the median number of hearing sessions in discrimination cases ranged from 5 to 10 at NASD (see fig. 5) and from 8 to 15 at NYSE (see fig. 6). The median number of hearing sessions in cases that did not involve discrimination ranged from 4 to 5 at NASD and 5 to 11 at NYSE. Not surprisingly, cases requiring more hearing sessions also took longer to complete. For example, cases requiring 1 to 2 hearing sessions took 438 days on average to complete, while those requiring 5 to 8 hearing sessions took 490 days on average. According to NASD, discrimination cases could require more hearing sessions and take longer to complete because they are more complex. In most cases arbitrated at NASD and NYSE over the last 10 years, employees sought more than $100,000 in compensatory damages, whether or not the case included a discrimination claim. (See fig. 7.) Overall, employees in NYSE cases sought higher compensatory damages than employees in NASD cases, with the average amount claimed at NYSE over $2 million and the average amount claimed at NASD under $1 million. 
These differences might reflect differences in the membership of the two SROs. For example, members of NYSE tend to include mostly the larger, more established broker-dealers, whose employees may seek higher compensatory damages in arbitration cases. In general, in more than 50 percent of cases at NASD and NYSE, employees were awarded some level of compensatory damages. (See fig. 8.) Employees in cases involving discrimination, however, were less likely to win some compensatory damages than employees in cases with no discrimination claims. (See fig. 9.) Employees won some level of compensatory damages in 48 percent of all NASD and NYSE cases over the last 10 years that included a discrimination claim, compared with 61 percent of cases with no discrimination claims. In cases where employees received a monetary award, over 60 percent of employees received less than half of the compensatory damages they claimed. In terms of the amount of compensatory damages awarded, awards in cases at NYSE tended to be higher. (See fig. 10.) At NASD, just over half of the winning cases had awards of $50,000 or less, while at NYSE 70 percent of awards were over $50,000. Compared with cases with no discrimination claims, employees in cases involving discrimination were more likely to receive larger awards. (See fig. 11.) Sixty-two percent of cases with discrimination claims that received monetary awards had an award amount over $50,000, compared with 48 percent of cases without discrimination claims. In addition to receiving monetary compensation, employees sometimes seek and receive nonmonetary awards. For example, an employee may want defamatory language removed from his or her record. In the employment cases that we analyzed, approximately 13 percent of employees won some type of nonmonetary award without any monetary award. To assess the arbitration programs at NASD and NYSE, SEC conducts periodic inspections and reviews complaint letters it receives. 
It has cited problems at one or both SROs in the procedures used to (1) ensure arbitrators are qualified and (2) track arbitrator performance. During its inspections, SEC generally reviews arbitration procedures, arbitrator profiles, disclosure reports, and closed cases and interviews staff. Although SEC officials indicated that complaint letters could affect the focus of an inspection, we found that few of the letters SEC receives focus on employment arbitration. In its most recent inspections, in addition to problems with the procedures both SROs used to ensure arbitrators are qualified, SEC found that one or both SROs did not record information on arbitrator performance in a central database or disqualify all arbitrators who were poor performers from hearing cases. Both SROs have taken some steps to address the problems. Since 1995, SEC has examined NASD’s and NYSE’s arbitration programs three times each and has routinely responded to complaint letters about the process. Most inspections have focused on either case processing or recruiting and maintaining arbitrators. In general, inspections also included reviewing problems raised in previous inspections to determine whether they had been resolved. (See fig. 12.) In conducting inspections, SEC reviews a variety of documents, summarizes findings, develops recommendations, and provides the SRO with the opportunity to comment on both its findings and recommendations. The documents SEC reviews generally include case files and arbitrator profiles and disclosure reports. Some of the case files are chosen randomly, while others are selected based on risk factors that suggest problems may exist, such as the length of time it took to complete a case. In addition to reviewing documents, SEC interviews SRO staff to better understand its operations. In its 2000 inspection of NASD, SEC reviewed 110 arbitrator profiles and disclosure reports and 89 arbitration case files. 
In its 2001 inspection of NYSE, SEC reviewed 200 arbitrator profiles and disclosure reports and 40 customer and employment cases, in addition to other documents. An SEC official noted that under the Exchange Act, SEC has a broad range of authority to address deficiencies found in an inspection. As a practical matter, SEC staff and SROs discuss deficiencies and document that necessary steps have been taken. In addition to carrying out inspections to oversee SRO arbitration programs, SEC reviews complaint letters from individuals employed in the securities industry and other interested parties regarding SRO-administered arbitration programs. Of all the complaint letters SEC receives, however, only a small percentage raise concerns about arbitration, and an even smaller percentage deal with employment cases. According to SEC’s complaint letter log, of the over 12,000 complaint letters SEC received from January 1992 through October 2002, approximately 500 contained a specific reference to arbitration. We reviewed a random sample of 100 of the letters that referred to arbitration and found 16 that discussed the arbitration of employment claims. Of the 16, 6 raised concerns about the use of mandatory arbitration to address employment or employment discrimination claims. The other 10 letters dealt with a variety of issues, including the amount of time allocated to address a claim, the scheduling of hearings, and a proposal to limit damages that can be claimed. An SEC official with the division that approves SRO rules said the division responds to all complaint letters it receives, which are tracked using the letter log database. The official indicated that when letters register general discontent with the arbitration process but do not contain a specific allegation, parties are provided general information about arbitration, including information on the narrow procedural mechanism for challenging awards. 
When letters contain specific allegations, SEC attorneys contact the SRO or use other means to investigate the allegation before providing a response. SEC attorneys may also forward a copy of the letter to the office that oversees periodic inspections, so it can assess the allegation in its inspection activities. For example, an SEC official reported that SEC had placed special emphasis in a recent inspection on reviewing updates SRO staff made to arbitrator profiles and disclosure reports in response to concerns raised in a complaint letter. In recent inspections, SEC staff identified a number of ways NASD and NYSE could improve their procedures for ensuring that arbitrators are qualified and for tracking arbitrator performance. For example, to ensure that arbitrators are qualified, SEC staff recommended that one or both SROs ensure that they consistently conduct CRD checks of all industry arbitrators and document those reviews in arbitrator profiles; ensure that all arbitrator profiles are complete and reflect new or updated information arbitrators submit about themselves; lengthen training courses for new arbitrators; include in arbitrator training manuals guidance on certain arbitration procedures and certain problems arbitrators are likely to encounter; and develop a policy on how often arbitrators must attend ongoing training, the circumstances under which it can be waived, and the documentation of reasons waivers are granted. On the basis of our review of SRO documents containing policies and standard procedures and interviews with SRO officials, we found that each SRO had taken steps to address SEC’s recommendations. One or both SROs now require that CRD checks be recorded in arbitrator profiles; have an online reporting form arbitrators can use to submit updated information about themselves; and have a basic training course for new arbitrators, more comprehensive training manuals, and a written policy regarding ongoing arbitrator training. 
In addition, in recent inspections, SEC staff found that the procedures in place to track arbitrator performance could be improved. For example, SEC staff recommended that one or both SROs ensure that all pertinent information on arbitrator performance, whether negative or positive, is recorded in a central database and do more to address complaints of poor arbitrator performance, including, if appropriate, removing arbitrators from the active pool and better documenting actions taken in response to complaints of poor performance. SEC staff reported that it appears from recent ongoing and completed inspections that the SROs have taken steps to address these recommendations. In general, to determine whether any issues raised in past inspections remain unresolved, SEC, at the beginning of each new inspection, reviews recommendations from prior inspections. SEC is currently inspecting NASD and will report on the results, including any unresolved issues, within the next year. NYSE will be reexamined beginning in 2003, at which time SEC will assess what additional steps, if any, NYSE has taken to address the issues reported here. SEC oversees NYSE and NASD, which regulate their member firms in the securities industry. All three are responsible for ensuring that the procedures for arbitrating discrimination and other employment disputes are fair and that the requirements of the Exchange Act are met. Although SEC’s approval of rules governing arbitration programs and its periodic inspections of these programs have resulted in improvements, there are aspects of these programs that deserve closer scrutiny. Currently, NASD and NYSE verify the qualifications only of those arbitrators who have worked in the securities industry; neither SRO verifies the information provided by nonindustry arbitrators. 
While we did not find instances where arbitrators provided false statements of qualifications, verifying the qualifications of all arbitrator applicants is an important step in ensuring that employees and employers receive accurate information on the arbitrators they select to hear their cases. Additionally, while SEC has reviewed both SROs’ procedures for evaluating arbitrator performance, we found evidence that arbitrators are not evaluated on a routine basis. Although NASD has procedures for peers, parties, and staff to evaluate arbitrators and identify poor performers, these evaluations are not always completed. While NYSE officials indicated that NYSE has similar procedures and reported that staff generally evaluate active arbitrators at least once a year, we were unable to confirm this information. Securities industry employees must use NASD and NYSE arbitration programs to resolve most employment disputes. Therefore, more effort should be made to verify that arbitrators meet the qualifications SROs require and to encourage parties, other arbitrators, and staff to submit evaluations more regularly, so that only arbitrators who perform adequately are maintained on SRO rosters. To help ensure that all NASD and NYSE arbitrators possess the qualifications required by their SRO, we recommend that the Chairman of SEC direct NASD and NYSE to verify basic background information of all new applicants for their arbitrator rosters. We also recommend that SEC continue to review the adequacy of procedures for evaluating arbitrator performance in its next inspections at NASD and NYSE. We provided a draft of this report to SEC, NASD, and NYSE for their review. Copies of their written comments are in appendixes II, III, and IV, respectively. SEC, NASD, and NYSE also provided technical comments on the draft report, which were incorporated as appropriate. SEC agreed with the focus of our recommendation concerning the verification of background information. 
However, SEC believed that in the absence of any indication that the falsification of information is a problem, it might not be necessary for NYSE, as a smaller arbitration forum than NASD, to add this cost to the arbitration process. As a result, SEC indicated that it should be up to NYSE to decide whether the independent verification of basic background information of arbitrator applicants is needed. NASD noted that although it has had no evidence that arbitrators ever falsified information, it is planning to verify the background information on all new applicants to increase party confidence in the accuracy of arbitrator records. NASD reported that a one-time fee for arbitrator applicants would cover the cost of this procedure. NYSE reported that since it has found no proof of anyone providing false information, there is insufficient justification for independently verifying application information and adding costs to the process. In addition, NYSE believes that it has already taken steps to ensure that its application procedures are adequate, such as having applicants affirm that the information they provide is correct and requiring two recommendation letters. NYSE also indicated that counsel for employees can and do take further actions to review the background of arbitrators. Despite concerns raised by SEC and NYSE, we continue to believe that verifying background information for all new arbitrators is an important part of ensuring the integrity of arbitration, a process required for most disputes. While adding costs to the process is a legitimate concern, NASD’s approach of instituting a one-time application fee of $80 would not increase the expense of arbitration for the parties involved. Additionally, the fact that lawyers representing parties are already sometimes verifying information suggests that verification is valued and further supports the need for it to be done independently and systematically for all new arbitrators. 
Moreover, although our report has focused on the arbitration of employment cases, a small percentage of all the cases arbitrated in the securities industry, our recommendation will benefit all parties, since NASD and NYSE arbitrators are available for both employment and customer cases. Concerning our recommendation that SEC continue to review evaluation procedures at SROs, SEC, NASD, and NYSE all indicated that they understand the importance of evaluating arbitrators. Specifically, SEC agreed that evaluating arbitrator performance is a fundamental element of the arbitration process and reported that it will continue to review the adequacy of procedures for evaluating arbitrator performance during its inspections of SRO arbitration programs. NASD noted that it would strive to provide better documentation of the actions it takes in response to complaints or evaluations. NYSE reported it has a new computer system that creates a centralized, easily accessible record of all feedback and comments from arbitrator evaluations, which will allow staff to have a more comprehensive view of an arbitrator’s performance. As arranged with your offices, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will provide copies of this report to the Chairman of SEC, the President of NASD, the Director of Arbitration for NYSE, and appropriate congressional committees. We will also make copies available to other interested parties upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9889. Other contacts and staff acknowledgments are listed in appendix V. 
This appendix provides a detailed description of the scope and methodology we used to determine (1) the characteristics and outcomes of arbitrated employment and employment discrimination disputes in the securities industry; (2) who evaluates arbitrators and what performance ratings they receive; and (3) how the Securities and Exchange Commission (SEC) responds to complaint letters it receives concerning arbitration of employment and employment discrimination cases. To determine the nature and outcomes of employment and employment discrimination disputes in the securities industry, we analyzed a database containing employment disputes in which arbitration decisions had been made by NASD or the New York Stock Exchange (NYSE) from January 1993 through June 2002. We obtained this database from Securities Arbitration Commentator, Inc. (SAC), Maplewood, New Jersey. SAC is a commercial research firm that maintains a database of information from publicly available records on decided cases from all self-regulatory organization (SRO) arbitration forums, as well as the American Arbitration Association. The SAC database contained information on arbitration awards that resulted from employee claims for damages against SRO member firms. By definition, this database did not include cases that were settled or withdrawn before an arbitration decision was reached. The 1,546 cases in the database included fields describing a range of variables, such as the name of the forum, the parties involved in the case, types of claims in the case, amounts of compensatory damages claimed, and amounts of compensatory damages awarded. Data on every variable we analyzed were not available for all 1,546 employment cases arbitrated at NASD and NYSE over the last 10 years. Our analyses of the median number of hearing sessions were based on 96 percent of the total 1,546 cases. The amounts claimed in discrimination and nondiscrimination cases, overall, were based on 84 percent of the 1,546 cases. 
All other analyses presented in this report were based on the total 1,546 employment cases arbitrated over the last 10 years, unless otherwise noted. To assess the reliability of the data we received from SAC, we reviewed 100 randomly sampled cases in the database, 50 with discrimination claims and 50 without discrimination claims. To verify the accuracy of the information for cases in the database, we compared this information with information in copies of the original awards for the same cases as issued by the forums or as reprinted by Lexis/Nexis. For most variables, data reliability was adequate for the analysis we conducted. We did not use any variables in the SAC database with high error rates. However, we were unable to verify that the SAC database included all cases decided by NASD or NYSE from January 1993 through June 2002. To determine who evaluates arbitrators and what performance ratings they receive, we first generated a list from the SAC data file of all NASD arbitrators who had decided at least 1 employment case that did not include a discrimination claim. We stratified this list of 494 arbitrators into two groups—those that had also decided at least 1 case involving discrimination during this time and those that had not decided any cases involving discrimination. We selected all 60 arbitrators from the group that had heard at least 1 discrimination case, selected a random sample of 64 of those that had not, and obtained NASD’s files containing evaluation and rating information for each of these 124 arbitrators. From the files associated with the sampled arbitrators, we extracted data on the number of evaluations, if any, these arbitrators received from the parties and/or other arbitrators in the cases they had decided and on performance ratings these arbitrators received. Each arbitrator in our study population of 494 had a nonzero probability of being selected for our sample. 
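The selection design just described implies a sampling weight for each arbitrator. A minimal sketch of that arithmetic follows; the stratum sizes come from the report, but the per-stratum evaluation counts are hypothetical, invented only to show how a weighted population estimate is formed.

```python
# Sketch of the stratified sample described above. Each sampled arbitrator
# carries a weight of N_h / n_h (stratum population over stratum sample),
# the inverse of its selection probability, so estimates statistically
# account for all 494 arbitrators in the study population.

strata = {
    # stratum name: (population N_h, sample n_h) -- figures from the report
    "heard_discrimination": (60, 60),   # certainty stratum: all 60 selected
    "no_discrimination":    (434, 64),  # random sample from the other 434
}

weights = {name: N / n for name, (N, n) in strata.items()}
print(weights["heard_discrimination"])          # 1.0 (selected with certainty)
print(round(weights["no_discrimination"], 3))   # 6.781

# Weighted population estimate, e.g. arbitrators with at least one completed
# evaluation; the per-stratum sample counts below are hypothetical.
sample_counts = {"heard_discrimination": 45, "no_discrimination": 40}
estimate = sum(weights[s] * sample_counts[s] for s in strata)
print(round(estimate))                          # 316 for these invented counts
```

The certainty stratum contributes its sample counts unweighted, while each sampled arbitrator in the other stratum stands in for roughly seven arbitrators who were not selected.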
In analyzing data about the arbitrators in our sample, we weighted each sampled arbitrator to account statistically for all arbitrators in the study population, including those who were not selected. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as 95 percent confidence intervals. These are intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true value in the study population. The width of a confidence interval is also referred to as the sampling error associated with the estimate. Sampling errors associated with estimates from our file review do not exceed plus or minus 15 percentage points. SEC tracks complaint letters in a computerized database and has logged over 12,000 from 1992 through October 2002. To determine how SEC responds to complaint letters it receives concerning arbitration of employment and employment discrimination cases, first we asked SEC staff to search its database and identify those letters that mention arbitration. SEC found that approximately 500 of the logged letters mentioned arbitration. We reviewed the content of a random sample of 100 of these letters to determine how many dealt specifically with arbitration of employment or employment discrimination claims. Out of the 100 letters, we found 16 that dealt with the arbitration of employment or employment discrimination claims. Twenty-five of the 100 letters in our sample were missing from SEC files, and the issues raised in the remaining 59 letters were either unclear or unrelated to employment cases. In addition to those named above, Susan S. Pachikara, Joan K. Vogel, and Sidney H. 
Schwartz made significant contributions to this report. | Employees in the securities industry must submit to binding arbitration in most employment disputes. The Securities and Exchange Commission (SEC) is responsible for overseeing these arbitration programs--the largest being run by NASD and the New York Stock Exchange (NYSE). The Congress asked GAO to examine (1) the circumstances under which NASD and NYSE will arbitrate employment and employment discrimination disputes, and their procedures for selecting and evaluating their arbitrators; (2) the characteristics and outcomes of arbitrated employment and employment discrimination disputes at NASD and NYSE over the last 10 years; and (3) how SEC oversees the arbitration programs at NASD and NYSE and the results of these oversight activities. Arbitration is generally required for most employment disputes, except those dealing with discrimination claims. NYSE will only arbitrate discrimination cases when parties involved agree to arbitrate after the dispute occurs. NASD will arbitrate employment discrimination cases based on agreements entered into between employees and firms before or after a dispute occurs. NASD has instituted additional requirements, however, for these cases, such as requiring that arbitrators not be affiliated with the securities industry. In addition, those chairing hearings for employment discrimination cases must hold a law degree, have 10 years of legal experience, have substantial familiarity with employment laws, and must not have primarily represented employers or employees in the last 5 years. To qualify to hear cases, NASD and NYSE require that arbitrators have at least 5 years of work experience, supply two letters of recommendation, and complete training in basic arbitration procedures. 
Arbitrators must also provide information on their complete employment history, including any affiliation with the securities industry, as well as information on whether they have any regulatory or criminal history. Neither organization independently verifies the qualifications for applicants not associated with the securities industry. In addition, NASD and NYSE have standard procedures for ensuring that arbitrators selected to hear cases do not have conflicts and for evaluating arbitrator performance. However, evaluations of arbitrators by staff, parties in disputes, and other arbitrators on cases are not always completed. Officials at NASD and NYSE noted that if they receive no information about an arbitrator's performance on a case, they assume that the arbitrator's performance was adequate. Over the last 10 years, 261 (17 percent) of the 1,546 employment disputes arbitrated at NASD or NYSE included a discrimination claim. Discrimination cases differed from cases that did not involve discrimination in the following ways: (1) discrimination cases required more hearing sessions; (2) employees won discrimination cases less often than cases not involving discrimination claims; and (3) in cases that employees won, the monetary award in discrimination cases was generally larger than in cases not involving discrimination. SEC periodically inspects NASD and NYSE arbitration programs. On the basis of its inspections, SEC has recommended improvements. In its most recent inspections of NASD and NYSE, SEC made various recommendations concerning procedures for ensuring that arbitrators are qualified. In addition, SEC recommended that one or both improve procedures for recording information on arbitrator performance in a central database and for disqualifying arbitrators who are poor performers. |
Under contract with CMS, states survey 13 types of health care facilities that participate in Medicare and Medicaid; in 2007, there were about 60,000 such facilities. State survey activities are primarily funded by the federal government. Other types of facilities that participate in Medicare and Medicaid are also subject to surveys, but the surveys are not always conducted by states or are not federally funded. For example, community mental health centers are surveyed by federal surveyors located in each of CMS’s 10 regional offices rather than by state surveyors. Four facility types—ambulatory surgical centers, home health agencies, hospices, and hospitals—can choose to be surveyed by accrediting organizations, such as the Joint Commission, instead of states. However, facilities that choose this option are charged fees and are subject to state validation surveys that assess how well the accreditation process detects deficiencies in compliance with Medicare quality standards. Clinical labs are unique in that CMS collects fees from the labs to cover the cost of state surveys and federal oversight, including state validation surveys of a sample of accredited labs. Survey frequencies for nursing homes, intermediate care facilities for the mentally retarded, and home health agencies are established by federal statute, range from about 1 to 3 years, and are defined as maximum time intervals between surveys. In contrast, CMS sets survey frequencies for the 10 other facility types that states survey as a matter of policy (see table 1). These frequencies are typically every 6 years or more, and they have generally been defined as the average across all facilities of the same type (see app. II). As a result of CMS’s reliance on averages, some facilities could be surveyed earlier and others later and still meet the agency’s frequency standard. 
CMS distinguishes, however, between (1) its policies on survey frequency, and (2) the survey frequencies that it holds states accountable for meeting each year in its state performance reviews (discussed below), which may be less frequent than those established by policy. Although its policies on survey frequency change infrequently, CMS officials told us that nonstatutory survey frequencies are resource driven and depend on each year’s funding level. For example, CMS policy for most nonstatutory survey frequencies has been about 6 years since fiscal year 2001; based on available resources, however, the survey frequencies for which CMS has held states accountable have ranged from 3.5 years to 10 years from fiscal years 2006 through 2008 (see app. II). In fiscal year 2003, CMS introduced a 4-tier structure for prioritizing surveys, with tier 1 being the highest priority—facilities with statutorily mandated survey frequencies—and tier 4 the lowest priority. CMS instructs states to ensure that tiers 1 and 2 will be completed as a prerequisite for planning surveys in subsequent tiers. States undertake a variety of survey activities, including standard and validation surveys, complaint investigations, revisits, and enforcement actions. Surveys and complaint investigations are conducted to determine facility compliance with federal quality and safety standards. The quality-of-care component of a survey focuses on assessing the facility’s compliance with all regulatory requirements, other than the requirements pertaining to protection from fire. It involves direct observation of the provision of care to a sample of patients or residents; interviews of a sample of patients or residents; and review of patient or resident medical records, as well as other facility documents. The safety component of a survey examines a facility’s compliance with federal fire safety standards. 
Complaint investigations allow state surveyors to intervene promptly if problems arise between standard surveys or at accredited facilities. Compared to surveys, complaint investigations are (1) more targeted because they focus on specific concerns, and (2) less predictable because they depend on the number and seriousness of the allegations. For example, some complaints involve potential immediate jeopardy to patient health and safety and must be investigated within 2 to 5 working days. Less serious complaints must be investigated promptly or, in the case of accredited facilities, within 45 calendar days. Moreover, when a complaint investigation identifies a serious deficiency at an accredited facility, an intermediate care facility for the mentally retarded, or a home health agency, a full or extended survey must be conducted. Deficiencies identified during a survey or complaint investigation are categorized according to their severity. States conduct revisits to ensure that facilities correct any serious deficiencies identified by state surveyors; revisits may also be conducted to determine when a nursing home has returned to compliance and an enforcement action known as a sanction may be ended. On the basis of state recommendations, CMS may implement a sanction when surveyors identify serious deficiencies in a facility’s compliance with federal standards. The nature of the care provided by a facility influences the type of expertise needed to conduct surveys. For example, nursing home survey teams primarily consist of registered nurses (RN) and social workers. Surveys of intermediate care facilities for the mentally retarded, on the other hand, call for the skills of a developmental disabilities specialist. In general, state survey activities are funded through a combination of Medicare, Medicaid, and non-Medicaid state funds. 
Typically, almost 60 percent of federal spending on survey activities comes from Medicare, with the remaining 40 percent funded by the federal Medicaid share. Salaries, particularly surveyor salaries, are the most significant cost component of state survey activities. Table 2 shows how the two programs fund survey activities for each type of facility. Nursing homes are the only facility type whose surveys are funded by both Medicare and Medicaid. Medicare. Medicare funding for survey activities is requested and provided as part of a lump sum appropriation for the CMS Program Management Account, which generally funds CMS operations. For each fiscal year, CMS develops a budget request for that account, including an amount for survey activities, giving priority to funding for statutory requirements. In determining the amount for survey activities, CMS considers three factors: the number of facilities; the number of surveys states need to conduct, as determined by the established survey frequencies; and the cost of surveys, using the number of hours to complete them as a proxy. The request is submitted to Congress as part of the President’s proposed budget. In the annual appropriations act for HHS, Congress authorizes the transfer of a specific amount from the Medicare Trust Funds to CMS’s Program Management Account, which limits the amount of money that CMS can use for operations, including survey activities. Typically, tables within the conference report identify amounts for survey and other activities funded through the Program Management Account. According to a CMS official, the agency generally allocates the amounts specified in the conference report tables to the relevant activities. Funding for survey activities covers (1) state survey operations; (2) direct federal surveys, such as those of community mental health centers; and (3) support contracts, such as for training surveyors, developing a new nursing home survey methodology, and surveying psychiatric hospitals. 
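The three budget factors described above can be combined into a simple workload-driven estimate. The sketch below is purely illustrative: the facility counts, intervals, hours per survey, and hourly rate are all invented assumptions, not CMS's actual data or budgeting method.

```python
# Hypothetical sketch of the three-factor budget estimate described above:
# facility counts, survey frequency (average years between surveys), and
# survey cost proxied by hours. All figures are invented for illustration
# and are not CMS's actual workload or cost data.

HOURLY_RATE = 65.0  # assumed blended cost per surveyor hour (hypothetical)

facility_types = [
    # (type, facilities, avg years between surveys, hours per survey)
    ("hospices",             3000, 6.0, 40),
    ("rural health clinics", 3600, 6.0, 24),
    ("end-stage renal",      4700, 3.5, 32),
]

total = 0.0
for name, count, interval, hours in facility_types:
    surveys_per_year = count / interval           # workload implied by frequency
    cost = surveys_per_year * hours * HOURLY_RATE
    total += cost
    print(f"{name}: {surveys_per_year:.0f} surveys/yr, ${cost:,.0f}")

print(f"estimated annual state survey budget: ${total:,.0f}")
```

The point of the structure is that lengthening the average interval between surveys directly lowers the implied annual workload and cost, which is the lever CMS adjusts when funding is tight.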
The costs of managing survey activities, such as salaries for staff of CMS’s Survey and Certification Group and federal surveyors in each of CMS’s 10 regional offices, are also funded through CMS’s Program Management Account, but not as part of the funds designated in the conference report for survey activities. Each August, CMS notifies states of their projected Medicare budget allocations for the federal fiscal year starting October 1, based on the President’s proposed budget. After enactment of the appropriations act, the agency notifies states of any changes in their Medicare allocations for survey activities. At the end of the federal fiscal year, CMS may provide supplemental funds to states that spent more than their initial Medicare allocations by redistributing funds from states that spent less than their allocations. Medicaid. For surveys of facilities funded by Medicaid, states generally pay 25 percent of the costs and the federal government pays the remaining 75 percent. The President’s budget proposal provides Congress with an estimate of Medicaid spending for survey activities based on projected workload. The annual appropriations act for HHS includes an amount for the federal share of Medicaid expenditures, including states’ expenditures for survey activities. Funds are provided to states based on claims submitted for survey activities or state estimates of activities to be conducted. Non-Medicaid state funding. While states contribute to survey activities by paying 25 percent of Medicaid-covered expenditures, states are also expected to contribute funds for (1) the benefit they derive from facilities meeting federal quality standards and (2) the survey costs associated with state licensing requirements. 
According to CMS guidance, if the survey of a Medicare facility covers 100 standards and the state has adopted 50 of them for licensing purposes, the state and Medicare would contribute equally to the survey costs of the 50 shared standards and Medicare would cover all the survey costs for the 50 Medicare-only standards. If state survey requirements are more stringent than federal requirements—for example, federal requirements call for a facility type to be surveyed every 3 years but a state mandates surveys every 18 months—the state is expected to pay for the additional surveys. Moreover, if a state has no licensing requirements for a facility type, the state still acquires a derived benefit from that facility’s having to adhere to federal standards because of its participation in Medicare and Medicaid. Through staff in its 10 regional offices, CMS oversees the extent to which states’ performance ensures that facilities participating in Medicare and Medicaid provide high-quality care in a safe environment. The agency’s primary oversight tools are annual performance reviews that measure states’ compliance with specific standards and statutorily required federal monitoring surveys of nursing homes to assess the adequacy of state surveys. CMS regional offices also monitor states’ use of federal funds provided for survey activities. State performance reviews. CMS established state performance reviews in fiscal year 2001. Annually, the agency’s regional offices use the reviews to determine whether states are meeting federal requirements—both statutory and nonstatutory—and to identify areas for improvement in state program management. The reviews assess states’ performance across 18 standards, which generally focus on the timeliness and quality of surveys, complaint investigations, and enforcement actions. Since establishing the performance standards, CMS has continued to refine and expand their scope. 
For example, the standards originally focused on state nursing home survey activities, but now include ambulatory surgical centers, comprehensive outpatient rehabilitation facilities, end-stage renal disease facilities, home health agencies, hospices, hospitals, intermediate care facilities for the mentally retarded, and rural health clinics. However, only the survey frequency standards—whether states are completing surveys within statutory time frames or CMS-established survey priorities—encompass all 13 facility types surveyed by states. In fiscal year 2006, CMS began penalizing states that did not complete their entire tier 1 workload by reducing the states’ Medicare funding allocation for the following year. Federal monitoring surveys of nursing homes. Regional office staff conduct statutorily required federal monitoring surveys annually in at least 5 percent of state-surveyed Medicare and Medicaid nursing homes in each state. Federal monitoring surveys, which can be either comparative or observational, provide an indication of the quality of state nursing home surveys. For a comparative survey, federal surveyors conduct an independent survey of a nursing home recently surveyed by a state in order to compare the findings. When federal surveyors identify a deficiency not cited by state surveyors, they assess whether the deficiency existed at the time of the state survey and should have been cited by state surveyors. In prior work, we used the results of federal comparative surveys as a benchmark for identifying when state surveys have failed to cite a deficiency altogether or cited a deficiency at too low a level. For observational surveys, federal surveyors accompany a state survey team to a nursing home to evaluate the team’s on-site survey performance and ability to document survey deficiencies. Observational surveys allow federal surveyors to provide more immediate feedback to state surveyors and to identify state surveyor training needs. 
In fiscal year 2007, 786 federal monitoring surveys were conducted, 170 of them comparative and 616 observational. States’ use of federal funds for survey activities. CMS regional offices are responsible for reviewing state spending. This oversight has two key aspects. First, regional office staff monitor states’ Medicare spending during the fiscal year and states’ adherence to CMS policies and guidelines. If states request supplemental Medicare funds, regional offices evaluate the states’ requests and make recommendations to the CMS central office. Second, according to CMS’s State Operations Manual, a state must allocate the costs of a survey to Medicare, Medicaid, and state licensure based on the extent to which each of these programs benefits from the survey. According to CMS central office officials, regional office staff are responsible for working with states to establish the amount of non-Medicaid state funds that states contribute to cover the costs associated with their derived benefit and their licensing requirements that differ from federal requirements. Federal funding for state surveys increased from fiscal years 2000 through 2002 but was nearly flat from fiscal years 2002 through 2007. In inflation-adjusted terms, funding fell 9 percent from fiscal years 2002 through 2007. CMS has taken incremental steps to address both the recent trend in funding levels and survey budget allocation weaknesses. CMS has placed a priority on funding state surveys at the expense of certain support contracts, such as the development of a new nursing home survey methodology. To ensure that states would have to conduct some surveys of every facility type each year, CMS distributed the survey requirements for several facility types across more than one tier, placing a higher priority on surveying the most problematic facilities. At the same time, it increased the average time between surveys for many facility types. 
Recognizing that its previous method for allocating Medicare funds for state survey activities resulted in over- and underfunding relative to state survey requirements, CMS developed a new budget analysis tool in 2005. However, use of the tool has been confined to making incremental adjustments, rather than baseline reallocations, to Medicare survey funding. In addition, the agency asked states to develop contingency plans to prepare for possible reductions in Medicare funding. Federal funding for state surveys increased from fiscal years 2000 through 2002 but was nearly flat from fiscal years 2002 through 2007. In inflation-adjusted terms, funding increased modestly by 4 percent over the entire 8 fiscal years, but fell 9 percent from fiscal years 2002 through 2007 (see app. III). In fiscal year 2008, Medicare funding for survey activities increased by about 7 percent after adjusting for inflation. Figure 1 compares overall federal funding for survey activities in actual and inflation-adjusted dollars for fiscal years 2000 through 2007. For about 3 months in calendar year 2007, CMS charged and retained fees for revisits from Medicare facilities. In fiscal year 2007, Congress required CMS to charge user fees for revisit surveys and to use those fees to cover the costs of these surveys. That authority was extended into part of fiscal year 2008 by a series of continuing resolutions. According to CMS, the agency sought this authority to encourage Congress to fund requested increases in the Medicare survey budget, breaking what it perceived to be a cycle of inadequate funding for survey activities. The agency billed facilities about $8 million during the 3 months that the revisit user fee program was in effect. Although this authority was requested in the President’s Budget for fiscal year 2008, Congress did not provide it. 
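The distinction drawn above between actual and inflation-adjusted funding can be illustrated with a short deflation calculation. The funding amounts and price-index values below are hypothetical, chosen only to show how a nominally flat series can fall in real terms; they are not the actual CMS figures behind the percentages in the report.

```python
# Minimal sketch of restating nominal funding in inflation-adjusted
# (constant-dollar) terms, the comparison drawn above. The funding amounts
# and price-index values are hypothetical, not the actual CMS figures.

nominal = {2002: 250.0, 2007: 255.0}     # $ millions (hypothetical)
deflator = {2002: 100.0, 2007: 112.1}    # price index (hypothetical)

# Deflate to constant 2002 dollars: real = nominal * (base index / year index)
real = {yr: amt * deflator[2002] / deflator[yr] for yr, amt in nominal.items()}

nominal_change = nominal[2007] / nominal[2002] - 1
real_change = real[2007] / real[2002] - 1
print(f"nominal change: {nominal_change:+.1%}")  # +2.0% -- nearly flat
print(f"real change:    {real_change:+.1%}")     # -9.0% -- fell in real terms
```

With these invented inputs, a 2 percent nominal rise over five years becomes roughly a 9 percent decline once prices are held constant, which mirrors the pattern the report describes.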
In response to a decline in inflation-adjusted funding since fiscal year 2002, CMS modestly increased the amount of Medicare funds targeted for state surveys in fiscal years 2005 through 2007 by tapping into its support contract funds. For example, in fiscal year 2007, CMS cut Medicare funding for support contracts by about 17 percent ($2.7 million) and correspondingly increased states’ Medicare allocation for conducting surveys by 1.3 percent, ranging from about $9,000 to about $368,000 a state (see table 3). A CMS official also told us that the agency has decreased funding for and thus slowed the refinement and implementation of the new nursing home Quality Indicator Survey (QIS)—a project funded through a support contract initiated about 10 years ago that is intended to improve the consistency and efficiency of state surveys and provide a more reliable assessment of quality. CMS had intended to significantly expand implementation from the 5 pilot states, but has added only 3 of the 13 states interested in transitioning to the QIS. As of May 2008, CMS projected that the QIS would not be fully implemented nationally until 2014, at an estimated cost of about $20 million. According to CMS officials, further reductions in support contracts would adversely affect the activities funded through the contracts. In fiscal year 2006, CMS adopted a risk-based approach for state survey requirements in response to declining inflation-adjusted funding since fiscal year 2002. This approach entailed distributing the survey requirements for several facility types across more than one tier, thus ensuring that states would have to conduct some surveys of every facility type each year. First, CMS required states to survey a targeted sample of the most problematic facilities as a tier 2 priority for many facility types. States select 5 percent or 10 percent of facilities, depending on the type, from a CMS list that identifies those most at risk of providing poor care. 
In addition, CMS moved the previous tier 3 requirement for many facility types to tier 4, effectively increasing the average time between surveys in tier 3—for example, from every 6 years to every 8 years—for nine facility types whose survey frequencies are not set by statute (see app. II). End-stage renal disease facilities illustrate this approach: survey requirements for these facilities—a tier 3 priority in fiscal year 2005—were spread across tiers 2, 3, and 4 in fiscal year 2006. States were required to survey a 10 percent sample of these facilities selected for tier 2, while surveying facilities for tiers 3 and 4 on an average of 3.5 and 3.0 years, respectively (see fig. 2). The 3-year average for tier 4 reflects CMS's policy for end-stage renal disease facility surveys, but CMS acknowledges that Medicare funding may not be sufficient for most states to accomplish tier 4 survey priorities. In fiscal year 2007, CMS further increased the average time between surveys in tier 3 for five facility types from 8 years to 10 years and for one facility type from 3.5 years to 4 years. Despite these changes, we found that from fiscal years 2000 to 2007 the increased intervals between surveys had almost no impact on states' required survey workload. Given the fiscal year 2008 increase in Medicare funding, CMS decreased the time between surveys for many facility types, returning them to approximately fiscal year 2000 levels (see app. II). Despite CMS's risk-based approach, some state-surveyed facilities have not been surveyed for many years. About 2,700 facilities (13 percent) whose survey frequencies are established by CMS had not been surveyed in 6 years or more as of September 30, 2007 (see table 4); about 900 (4 percent) had not been surveyed in 10 years or more.
Officials from both CMS and most of the states we contacted told us that the time between surveys for facilities without statutory survey frequencies was too long, which can increase the risk for quality problems. For example, officials from several states told us that they cite deficiencies more often, or the deficiencies are more serious, at facilities that are surveyed infrequently. Officials from one state said that facility administrators might become complacent about meeting federal quality standards during the lengthy periods between surveys. Officials from another state told us that, in 2006, surveyors of hospices in their state cited serious deficiencies on four out of eight surveys. Many state officials said that the interval between surveys for all facilities whose frequencies are not set by statute should be every 2 to 3 years. CMS's attempts to make the survey budget allocation process more effective have had a limited impact. In fiscal year 2005, it began using a budget analysis tool to more equitably distribute funding to states. It also asked states to develop contingency plans to deal with the uncertainty about state funding due to the timing of the Medicare budget allocation process. The budget analysis tool was designed to address funding inequities resulting from CMS's previous method for allocating Medicare funds for state survey activities, but its impact has been limited. Previously, CMS determined states' allocations based on their past spending, but this method did not guarantee that funding levels accurately reflected state workloads—some states received too much funding given their survey workloads, others too little. For example, regional office staff told us that a state hiring freeze in the 1990s caused severe understaffing for one state in their region. One year, this state spent significantly less than its Medicare allocation because it was unable to hire staff.
Consequently, the following year this state's Medicare funding increases reflected the previous year's low level of expenditures, and the relatively low level of Medicare funding was carried forward each year thereafter. CMS officials chose not to use the tool to recalculate states' base allocations to avoid shifts that could result in layoffs of trained staff. CMS officials anticipated that over time the use of the budget analysis tool would incrementally align state funding with workload. The budget analysis tool allows CMS to measure state survey workload against funding and compare the match of workload to funding across states. It uses both state-specific and national data to measure state survey requirements, such as hours needed to perform surveys, and states' costs for conducting survey activities, such as salaries, as well as fringe benefits for those staff, training, and travel. While state-specific data are used to calculate workloads for nursing homes, national averages are used for other facilities because they are surveyed so infrequently. The tool makes final adjustments based on regional office analysis and other factors, and then scores each state from 1 (less well funded, relative to other states) to 5 (better funded) given the state's workload (see fig. 3). In 2005, 10 states scored 1 and 15 states scored 4 or 5. In 2008, 7 states had a score of 1 while 14 had a score of 4 or 5. An agency official acknowledged that there are limitations in the tool's effectiveness. First, state scores do not account for state enforcement activity or the fixed costs associated with administering survey activities. Second, CMS officials told us that they did not know how long an efficient survey should take and could not assess whether the considerable interstate variation in the length of surveys was appropriate. Third, state-specific data are limited for most facility types other than nursing homes because they are surveyed less frequently.
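The kind of workload-versus-funding comparison the tool performs can be sketched in a few lines. CMS has not published the tool's actual formula, so the scoring rule, the cutoff ratios, and all figures below are assumptions for illustration only: funding is compared with an estimated cost of the required workload (hours times an hourly cost covering salaries, fringe benefits, training, and travel), and the ratio is bucketed into a 1-to-5 score.

```python
# Hedged sketch in the spirit of CMS's budget analysis tool.
# The scoring rule, cutoffs, and numbers are illustrative assumptions,
# not the tool's published methodology.

def funding_score(funding, required_hours, cost_per_hour,
                  cutoffs=(0.85, 0.95, 1.05, 1.15)):
    """Score a state from 1 (less well funded, relative to workload)
    to 5 (better funded) by comparing funding to estimated workload cost."""
    ratio = funding / (required_hours * cost_per_hour)
    score = 1
    for cutoff in cutoffs:
        if ratio >= cutoff:
            score += 1
    return score

# Two illustrative states with the same workload but different funding:
print(funding_score(funding=4_000_000, required_hours=60_000, cost_per_hour=80))
print(funding_score(funding=6_000_000, required_hours=60_000, cost_per_hour=80))
```

With these assumed cutoffs, the first state scores 1 and the second scores 5, mirroring how the tool distinguishes less well-funded from better-funded states given their workloads.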
CMS has used the budget analysis tool five times: twice to distribute annual increases in Medicare funds to states (after allocating a small across-the-board inflation increase), and three times to redistribute Medicare funds at the end of the fiscal year to states that spent more than their initial allocations by using state funds and had requested supplemental funding. In fiscal year 2005, CMS used the tool to distribute a 3 percent increase in survey funding, which translated into a 0.5 percent to 9.5 percent increase depending on the state. In fiscal year 2008, average increases to states ranged from about 10 percent for states that scored 1 to about 6.5 percent for states that scored 5. In fiscal years 2005 through 2007, CMS used the tool to redistribute year-end supplemental funding to states, but the amount to be redistributed has shrunk in recent years. For example, CMS had about $6 million available to redistribute in fiscal year 2005 but only about $2.5 million in fiscal year 2007. To help address uncertainty about federal Medicare funding levels, CMS asked states to develop a baseline budget and contingency plans for a specified reduction or increase to the baseline for fiscal years 2007 and 2008. In general, CMS communicates states' projected allocations in August, before the beginning of the next federal fiscal year. These projected allocations may be more or less than the final allocation and, for the past several years, CMS's budget for state survey activities has not been finalized until 6 to 8 months later. CMS acknowledged the uncertainty that resulted from states not knowing their final Medicare allocations until well into the state fiscal year, which for most states begins on July 1. Regional office and state officials identified several problems that can result from finalizing states' Medicare allocations late in the state fiscal year.
First, officials from one regional office told us that states had to conduct their survey work cautiously until they received their final Medicare allocations, which could be less than initially projected. In some cases, the uncertainty may cause states to defer some of their surveys until the end of the fiscal year, potentially causing them to spend less than their Medicare allocations. Second, officials from one state told us that if their Medicare allocation was less than initially projected, they would have to cut staff or other direct costs such as travel—all essential to completing their survey workload. Third, officials from another state said that the lag in receiving their final Medicare allocation further delayed hiring new staff. Only one state, Arkansas, was able to complete all surveys in tiers 1 through 3 in fiscal year 2006, but pinpointing the cause is difficult because (1) several factors such as workload, funding, staffing, and management could have had an impact and (2) distinguishing the extent to which each factor contributed to state completion rates and the quality of each state's surveys is challenging. Overall, we found that states' required survey workload—the workload that states would have to complete to meet statutory and CMS survey frequency requirements—decreased 4 percent from fiscal years 2000 to 2007. This decrease was due to a decline in the number of the most frequently surveyed facilities, which also require more time to survey than other facilities. This decline offset the workload increase from the overall growth in the number and type of other facilities subject to surveys over the same time period. It is unclear how states' complaint investigation workload changed over this same time period because CMS lacks reliable and consistent state data on complaints received and investigated.
According to CMS, the agency adjusts the survey priorities in tiers 1 through 3 so that the workload is feasible given Medicare funding levels, but the states we contacted disagree. However, some of the states we contacted that spent significantly more than their initial Medicare allocations still did not complete all surveys in those tiers. Sixteen of the 28 states we contacted were unable to spend their entire Medicare allocations, most indicating that this was due to high surveyor attrition rates and hiring freezes. Though many states believe that noncompetitive surveyor salaries contribute to attrition, states, not CMS, establish those salaries. In two states, CMS concluded that poor management of the survey process had compromised the quality of state surveys, but acknowledged that in one of the states staffing levels, salaries, and other issues may have been a contributing factor. CMS state performance reviews for fiscal years 2006 and 2007 found that few states were able to complete or nearly complete all surveys in tiers 1 through 3, despite decreases in the required survey workload from fiscal years 2000 to 2007. However, the impact of complaint investigations and revisits on state workloads during this time period is unclear because the data were not complete or reliable. Only one state, Arkansas, was able to complete its surveys in all three tiers in fiscal year 2006. Seventeen states did not complete their tier 1 surveys in fiscal year 2006 and, as a result, were assessed deductions totaling $298,200 from their fiscal year 2007 Medicare survey allocations (see table 5). Thirty-five states were unable to complete their tier 2 surveys and 46 states were unable to complete their tier 3 surveys. Some states narrowly missed completing surveys in one or more tiers, while others missed completion by a wide margin. 
For example, in fiscal year 2006 one state completed 99.9 percent of the surveys of intermediate care facilities for the mentally retarded, while another state completed only 33 percent of such surveys, but CMS rated both states as not meeting the survey workload. Counting the few states that narrowly missed the standards as passing had little impact on the results presented in table 5. These results were similar in fiscal year 2007—25 states were unable to complete their tier 1 surveys, 34 did not complete tier 2 surveys, and 41 did not complete tier 3 surveys. CMS officials believe that recent Medicare funding levels have been sufficient for states to complete surveys in tiers 1 through 3. States' required survey workload—the workload that states would have to complete to meet statutory and CMS survey frequency requirements—decreased nationally from fiscal year 2000 to fiscal year 2007, even though the number and type of facilities subject to surveys during that period increased. The decrease in the required survey workload was due primarily to the decline of more than 1,100 nursing homes and 300 intermediate care facilities for the mentally retarded (see app. IV). Declines in these two facility types offset overall increases in other facilities subject to surveys because nursing homes and intermediate care facilities for the mentally retarded are comparatively the most resource-intensive facilities to survey: (1) statute dictates that they must be surveyed approximately every 12 months, and (2) their surveys take longer than those of most other facilities to complete.
For example, even though the number of ambulatory surgical centers increased by 31 percent from fiscal year 2000 to fiscal year 2007, the increase had a small impact on the required survey workload because on average ambulatory surgical centers require 26 hours to survey and, as of fiscal year 2007, only had to be surveyed once every 10 years to meet tier 3 workload priorities; in contrast, nursing homes take 157 hours to survey and their surveys are tier 1 workload priorities that must occur an average of every 12 months. After factoring in both average survey hours and required frequencies, one fewer nursing home can offset the workload increase of 60 new ambulatory surgical centers (see fig. 4). Surveys of nursing homes and intermediate care facilities for the mentally retarded together accounted for about 93 percent of states' required survey workload in fiscal year 2000 and 91 percent of states' required survey workload in fiscal year 2007; all other surveyed facilities accounted for less than 10 percent of the workload in both years. When all facilities are considered, the required survey workload decreased by about 4 percent (see fig. 5). Almost all of the decrease was due to the decline in nursing homes and intermediate care facilities for the mentally retarded; the increase in the interval between surveys had a negligible impact. The disproportionate impact of decreases in nursing homes and intermediate care facilities for the mentally retarded on states' required survey workload is illustrated by Washington. From fiscal year 2000 to fiscal year 2007, the number of facilities subject to surveys in Washington increased by about 16 percent due largely to growth in ambulatory surgical centers, end-stage renal dialysis centers, and rural health centers. In fiscal year 2007, these facility types were subject to surveys on average every 10, 4, and 10 years, respectively.
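The per-facility arithmetic above can be checked with a short sketch: a facility's annual survey burden is its average survey hours divided by the average interval (in years) between its surveys. The hours and intervals below are taken from the figures in this section; the helper function itself is ours, for illustration only.

```python
# Sketch of the annualized-workload arithmetic described above.
# Survey hours and intervals come from the report; the helper is illustrative.

def annual_hours(survey_hours, interval_years):
    """Average survey hours a single facility adds to the yearly workload."""
    return survey_hours / interval_years

# Nursing homes: 157 hours per survey, surveyed about every 12 months (tier 1).
nursing_home = annual_hours(survey_hours=157, interval_years=1)
# Ambulatory surgical centers: 26 hours, every 10 years as of FY2007 (tier 3).
surgical_center = annual_hours(survey_hours=26, interval_years=10)

# One nursing home's yearly burden equals that of roughly 60 new
# ambulatory surgical centers, consistent with figure 4.
print(round(nursing_home / surgical_center))
```

The same arithmetic explains the Washington example: adding many infrequently surveyed facilities can still leave the state's annualized workload lower once losses of yearly-surveyed nursing homes are factored in.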
During the same period, however, Washington experienced decreases in the numbers of nursing homes and intermediate care facilities for the mentally retarded, which, on average, are surveyed every 12 months. As a result, the number of surveys that Washington was expected to conduct each year decreased about 9 percent; when average survey hours are taken into account, the state's required survey workload decreased by 11 percent. Eleven states—Alabama, Alaska, Delaware, Florida, Georgia, Mississippi, New Jersey, North Carolina, Texas, Utah, and Virginia—experienced increases in their required survey workload, ranging from less than 1 percent to about 8 percent (see app. V). In addition to changes in the number and type of facilities subject to surveys, two other survey activities as well as survey process improvements could have affected states' overall workload, but the results for complaints and revisits were unclear because the data were not available or reliable. It is difficult to discern from the data whether survey process improvements contributed to the small increases from fiscal years 2000 to 2007 in average survey hours for most facility types. Complaint investigations. Although complaint investigations represent a significant portion of state workload, CMS officials told us that the agency lacks complete and reliable data on complaints received and investigated. CMS implemented a new complaint database in 2004 but officials told us that the data are not reported consistently. First, a few states either do not report complaints in the CMS database or investigate complaints under state licensure, thus underreporting the number of complaints in the database. Second, states may not be consistently reporting complaints. According to CMS, the agency instructs states to differentiate between facility-reported incidents, which they can choose to investigate as complaints, and complaints received from residents, family members, or others.
According to CMS, however, some states report few if any facility-reported incidents. Third, CMS believes that some states may be overestimating the number of complaints by reporting complaints received and investigated during standard surveys in the CMS complaints database. According to CMS, about 15 percent of complaints are investigated during standard surveys. Although the changes in the complaint workload are difficult to quantify, both CMS and state officials told us that resource constraints have hampered complaint investigations. For example, according to both CMS and state officials, states may be bundling complaints—waiting until they receive two to three complaints about a particular facility and then investigating them all at the same time—resulting in less timely complaint investigations. One state now sends in one surveyor to investigate complaints rather than two or three, which had been a more typical team size. Officials from a different state expressed concern that complaint bundling may affect the adequacy of their investigations. State officials stressed that the unpredictable nature of complaint investigations can be disruptive to scheduling and completing standard and validation surveys. State officials told us that CMS does not adequately fund complaint investigations and that CMS expects states to use their own funds. According to CMS officials, the amount identified for such investigations in the fiscal year 2008 President’s budget request does not fully fund all anticipated complaint investigations. We believe, however, that it is appropriate for states to cover the additional costs of completing complaint investigations within state time frames that are more stringent than federal requirements. For example, both California and Pennsylvania require all investigations to be initiated within 10 days, while CMS requires such rapid investigations only for complaints alleging immediate jeopardy or actual harm. 
In contrast, Florida requires all complaints to be investigated within 90 days. We believe that it is difficult to determine the appropriate federal funding level for complaint investigations without a complete estimate of the complaint workload. Revisits. CMS does not have reliable and complete data on revisits from fiscal years 2000 to 2004 due to changes in how revisit survey activities were reported across states. As a result, it is not possible to fully account for the impact of revisits on states’ overall survey workload. However, CMS data for fiscal years 2005 to 2007 show that the revisit workload declined by 4 percent. Revisits for standard surveys accounted for approximately 8 percent of states’ survey workload in fiscal year 2007 and nursing homes and intermediate care facilities for the mentally retarded constituted about 85 percent of states’ revisit workload in 2007. We believe that the decline in revisits for fiscal years 2005 to 2007 is consistent with states’ overall decline in survey workload since fiscal year 2000. Survey process improvements. To improve the quality of state surveys, CMS has implemented new directives and more stringent standards for surveys, which CMS believes have increased states’ survey workload. For example, CMS added new survey requirements for hospices and end-stage renal disease facilities and required states to include in their surveys home health and outpatient physical therapy locations (branches and extensions, respectively) that are under the supervision of a licensed facility. According to one state we interviewed, new requirements (1) increase the time required to conduct surveys and (2) require additional surveyor training, which decreases productivity and is not reflected in recorded survey hours. 
Although new requirements that result in additional time to conduct surveys should be reflected in the survey hours that states report, CMS expressed doubt that survey hours were actually increasing as a result of these initiatives because it believes that states lack adequate resources to carry them out. Data for fiscal years 2000 to 2007 show small increases in average survey hours for most facility types; however, it is difficult to determine whether these changes are due to the new requirements or to factors such as surveyor experience levels. Several other factors—funding, staffing, and management of the survey process—may have an impact on states' ability to complete survey workloads, and these factors also influence the quality of surveys. These factors are often interrelated and can play out differently in each state. States disagree with CMS's position that there is sufficient funding to complete the workload in tiers 1 through 3, primarily because of workforce instability due to noncompetitive salaries. However, states, not CMS, establish these salaries and manage the workforce and the survey process. CMS established the tiered survey priorities to ensure that Medicare funding was sufficient for states to complete surveys in tiers 1 through 3. While most of the states we contacted believe that CMS's expectations are unreasonable, the data suggest the influence of factors other than the federal Medicare allocation. For example, 16 of the 28 states we contacted spent more than their fiscal year 2006 initial Medicare allocations, but none were able to complete all required surveys in these three tiers. Seven of these states were unable to complete even their tier 1 requirements—those that are statutorily mandated. For example, Missouri spent more than its fiscal year 2006 initial Medicare allocation and was able to complete all of its surveys in tiers 2 and 3 but failed to complete its entire tier 1 workload.
On the other hand, 16 of the states we contacted spent less than their Medicare funding for fiscal years 2000 through 2006; 11 of these states spent less than their fiscal year 2006 allocations. For some of these states, the ability to spend Medicare allocations—not the Medicare funding level itself—affected their ability to complete the required surveys. Officials from 23 of the states we contacted told us that an additional $35 million in cumulative Medicare funding was needed, primarily to increase surveyor salary levels so that states could fill staff vacancies and offer incentives to retain current staff, issues that they believe have significantly inhibited their ability to complete required surveys. Conversely, officials from 4 states told us they did not need any additional Medicare funding. Officials from AHFSA and many of the 28 states we contacted told us that an unstable workforce had affected their ability to meet CMS survey priorities over the past several years. The workforce instability arises mostly from noncompetitive salaries, which result in the hiring of less qualified candidates, and from hiring freezes. Salary levels, minimum qualifications, and decisions about when to hire or not hire surveyors are the result of state personnel policies that affect surveyor positions as well as positions for other state employees. According to AHFSA and state officials, staff retention issues among states can be attributed primarily to noncompetitive salaries for RNs—the profession that comprises the largest proportion of surveyors nationally. In fiscal year 2006, the surveyor attrition rate among the 28 states we contacted ranged from 0 percent to about 46 percent, and 17 of these states reported attrition rates of 10 percent or higher.
Officials from one state told us that the starting salary for their RN surveyors ranged from $30,000 to $35,000 and that trained RNs typically leave surveyor positions after a few years to seek higher-paying jobs in the private sector. The average salary for RN surveyors in the 28 states we contacted was about $59,000 in fiscal year 2006 and ranged from about $37,000 to about $88,000. More recently, some states have been able to increase surveyor salaries from previous levels to compete with the private sector. For instance, in one state, the salaries of experienced surveyors increased by about 28 percent in fiscal year 2007. However, officials from 13 states are concerned that any increase in surveyor salaries may not be sustainable in the long term without increases in state Medicare allocations. Without an increase, these states indicate that they may have to lay off staff, which would adversely affect their ability to complete the survey workload. According to AHFSA officials, states have hired applicants who are less qualified for surveyor positions due to noncompetitive surveyor salaries. They told us that some states formerly hired RN surveyors with bachelor's degrees, but given current salary levels, these positions may only be attractive to licensed practical nurses with 2-year rather than 4-year degrees. States are also hiring nurses with less nursing experience to fill the positions. Of the 28 states we contacted, 6 states offered surveyor positions to applicants with no prior experience. AHFSA officials believe that inexperienced surveyors tend to be less productive and require increased supervision and oversight. Hiring freezes have also affected states' abilities to manage their survey workloads. During the past few years, some states temporarily suspended the hiring of state employees due to state budget deficits.
Consequently, states had to suspend hiring of surveyors, even though they may have had sufficient federal funding to support the additional staff. Of the 28 states we contacted, 16 states spent less than their Medicare budget allocations from fiscal years 2000 through 2006 and 14 of them identified hiring freezes or vacancies as the primary reason. With consistently high turnover rates among these states’ surveyors, the hiring freezes prevented states from filling vacant positions. Given workforce instability, states told us that they have adjusted how they manage surveys to meet CMS priorities. Some states adjust the size of a survey team depending on the availability of staff. Of the 28 states we contacted, officials from 20 states indicated that they reduced the survey team size or restricted the time a surveyor is allowed to spend in a facility in fiscal year 2006. Officials from one of these states explained that, in the past, a survey team may have consisted of four surveyors plus a specialist, but now a survey team only consists of three surveyors. As noted earlier, they also told us that a state may send one surveyor to investigate several complaints whereas previously, multiple surveyors were sent to investigate complaints. Additionally, a state may limit or restrict the time a surveyor is allowed to spend in a facility to ensure that other facilities are surveyed and the state meets CMS performance measures. As a result, officials from 11 states told us that surveyors do not have enough time to conduct thorough surveys. Although states may complete surveys in a given tier, this does not ensure that the surveys are thorough. CMS’s 2006 state performance review indicated that Missouri, Oklahoma, New Mexico, South Carolina, South Dakota, Tennessee, and Wyoming completed all of their nursing home surveys within the statutorily required time frames. 
But, as we previously reported, more than 25 percent of federal comparative surveys conducted in these states from fiscal years 2002 through 2007 found that state surveyors had missed serious deficiencies. For example, South Carolina missed at least one serious deficiency on 6 of the 18 comparative surveys during those 6 fiscal years, with an overall total of 19 missed deficiencies that caused harm or placed residents in immediate jeopardy. In one of these states, CMS told us that performance issues raised concerns about the management of survey activities. For example, 26 percent of federal comparative surveys conducted in Tennessee from fiscal years 2002 through 2007 found that state surveyors had missed serious deficiencies. Moreover, the results of federal observational surveys from this same time period indicated that the proportion of Tennessee surveyors with below satisfactory ratings in investigative skills and deficiency determination was more than double the national average. A new director took over the state survey agency in October 2007 and, due to the surveyor performance issues and staff turnover, brought in CMS regional office staff to assist in retraining all of the state's surveyors. In contrast to these seven states, Alabama did not complete about 93 percent of its nursing home surveys in fiscal year 2007 within the maximum 15.9-month interval. In a June 2007 letter to the state, CMS described these results as alarming and asked Alabama to develop an action plan in 2007 to address persistent weaknesses in state performance. The agency used both comparative and observational data from federal monitoring surveys to highlight persistent weaknesses in the survey process to Alabama state officials.
Although CMS recognized that the state's inability to complete surveys could be due to staffing levels, salaries, and other issues, CMS ultimately concluded that Alabama needed to improve organization, management, and oversight of all regulatory systems and functions. CMS oversight of states' use of funds for survey activities is limited. To oversee how states spend federal funds, the CMS regional offices we spoke with now rely primarily on off-site reviews of state reports documenting their expenditures and workload, but there are limitations to relying on such reports, including their accuracy. In eliminating the budget and financial standard from annual state performance reviews in 2006, CMS redefined these financial responsibilities as core state functions, but not all regional offices we reviewed are attempting to hold states accountable for ensuring the appropriate application of costs to Medicare, Medicaid, and state licensure programs. We also found that the regional offices we spoke with had taken a variety of approaches to determining non-Medicaid contribution rates for the states in their regions. Most told us that these rates have not been reviewed in recent years, even though federal survey and state licensure requirements may have changed over time. Regional officials told us that they do not verify that states actually contributed funds in a manner consistent with their shares, noting limits on their authority to require state data and states' refusal to provide the data voluntarily. However, CMS assumes that the cost for a state to operate a survey program is higher than the amount CMS provides, and the agency is convinced that states are likely contributing more than their fair share to survey activities. Finally, most regional offices we spoke with do not require states to justify requests for supplemental Medicare funds.
As a result, it is difficult for CMS to determine whether expenditures in excess of a state’s initial Medicare allocation represent the state’s non-Medicaid share of survey costs. To oversee states’ use of federal funds for survey activities, the five CMS regional offices we spoke with now rely primarily on the off-site review of reports on expenditures, workload, and survey hours that states submit during the fiscal year, but reliance on such reports for financial oversight has limits. CMS’s central office believes that the majority of the analyses regional offices are expected to perform as part of their oversight can now be accomplished using the reports states submit. In contrast, officials from four of the five CMS regional offices we spoke with generally told us that in the 1990s, they either conducted more formal, on-site reviews or more detailed reviews of systems, such as those used for time and effort reporting, which served as the basis for states’ allocations of survey costs to Medicaid, Medicare, and state licensure programs. This allowed regional offices to verify the accuracy of states’ expenditures and ensure that states complied with financial procedures established by CMS. Effective fiscal year 2006, CMS eliminated the state performance review standard that focused on states’ budget practices and financial reporting and redefined these financial responsibilities as “core” functions that states were required to perform. As a part of the state performance review, the state’s budget practices were evaluated against 14 elements to determine if the state used acceptable methods for (1) charging the federal programs, and (2) monitoring the current rate of expenditures and planned workload. Two of these 14 elements dealt with the appropriate application of program contribution rates across Medicare, Medicaid, and state licensure programs. 
Specifically, states must provide reasonable assurances that survey and certification costs were appropriately applied to the Medicare, Medicaid, and state licensure programs for all items and costs in their budgets and across providers and suppliers and the various types of facilities. According to CMS, however, regional offices are still expected to ensure that states are fulfilling their responsibilities under the standards, but only one of the five regional offices we spoke with (San Francisco) determines whether it has reasonable assurance that survey and certification program costs are appropriately applied to Medicare, Medicaid, and state licensure programs. Two other regional offices (Chicago and Dallas) inspect state records regarding the application of program costs across the three programs, but do not formally determine whether they have such assurance. Two regional offices (Atlanta and New York) have not incorporated these two elements into their reviews of this standard or their oversight of state survey activities. Relying primarily on state-reported expenditure data for federal financial oversight has limits. Since fiscal year 2002, states have been required to submit their financial information electronically through a Web-based, automated reporting system provided by CMS. Regional office officials have expressed concern about whether state expenditures are accurately reported through this system, as there have been instances in which errors were discovered in states' expenditure reports well after their submission. For example, Washington state officials told us that they identified a significant error on the state's expenditure report for fiscal year 2006. In reporting the amount of staff time it took to complete its workload, the state provided the data in months, though it was required to report staff time in years. As a result, the information in the expenditure report was contradictory.
In addition, officials from the Dallas regional office told us that Texas underreported its expenditures in fiscal years 2003 and 2004 due to errors that resulted as the state transitioned to a new accounting system. The error was not discovered until the Medicare funds the state appeared not to have spent had been reallocated to other states. Officials from the Chicago regional office told us that it is difficult to verify the figures presented on state expenditure reports because of delays by many states in entering information into OSCAR, which regional offices may use to verify states’ expenditures. Also, a lack of timeliness in reporting such information can limit regional office oversight efforts. For example, in its review of Delaware’s survey expenditures for fiscal years 1998 and 1999, HHS’s OIG found that, in addition to not having sufficient internal controls for preparing accurate reports of its Medicare and Medicaid expenditures, the state did not file its fiscal year 1999 expenditure reports on time. The contribution rates for states in the five regions we spoke with were determined using different methodologies and in most cases have not been reviewed in recent years. Moreover, states are not required to report their non-Medicaid state expenditures to CMS and, as a result, the agency has no way of verifying that states are contributing their own funds appropriately. Nonetheless, CMS central office and regional office officials we spoke with generally assume that the cost of conducting survey activities is greater than the federal funds provided. Consequently, they believe states are contributing more than their fair share to the cost of survey activities and that the exact amount of the non-Medicaid state contribution is less important. 
CMS guidance reflects the complexity of establishing equitable state shares and acknowledges that regional office staff must be knowledgeable about state licensure requirements to negotiate states' non-Medicaid contribution rates. For 21 states, the non-Medicaid state share for nursing homes ranged from 12 to 48 percent (see table 6). Regional offices we spoke with have taken a variety of approaches to setting these rates. In some regions, regional office staff determined the rates, while in other regions the states determined the rates themselves. Officials from the Chicago regional office told us that the methodology used by their staff to determine state contribution rates was complex and involved determining a separate state share for each facility type surveyed. Regional office staff took into consideration the number of surveys that each state needed to conduct in a given year, the average amount of time each survey should take, and how much benefit the state derived from conducting the survey. Officials from the San Francisco regional office told us that contribution rates for the states in their region are based mostly on historical figures, as reported in states' time and effort record keeping systems. Officials from the New York regional office told us that their staff and state officials jointly determined that the Medicare, Medicaid, and state licensure programs derived equal benefit from federal nursing home surveys conducted by states in the region. As a result, they concluded that each program should be responsible for one-third of the cost of these surveys. In contrast to what other regions told us, however, states in this region do not have a non-Medicaid state share for other facility types. Officials from the Atlanta regional office told us that they played no role in establishing these rates and were unaware of the process states in their region used to determine them.
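The apportionment arithmetic such contribution rates imply can be sketched briefly. The function below and all dollar figures are our own illustrative assumptions, not CMS data, although the equal one-third split and the 12 to 48 percent non-Medicaid range are drawn from the discussion above.

```python
# Illustrative sketch of how program contribution rates apportion a single
# survey's cost across Medicare, Medicaid, and state licensure.
# All rates and dollar figures are hypothetical, not CMS data.

def apportion_survey_cost(total_cost, rates):
    """Split a survey's total cost according to per-program contribution rates.

    `rates` maps program name to its share; the shares must sum to 1.
    """
    if abs(sum(rates.values()) - 1.0) > 1e-9:
        raise ValueError("contribution rates must sum to 1")
    return {program: round(total_cost * share, 2) for program, share in rates.items()}

# The equal one-third split described for nursing home surveys in one region:
equal_split = apportion_survey_cost(
    30_000, {"medicare": 1 / 3, "medicaid": 1 / 3, "state_licensure": 1 / 3}
)

# A hypothetical state at the upper end of the reported 12-48 percent
# non-Medicaid (state licensure) range:
high_state_share = apportion_survey_cost(
    30_000, {"medicare": 0.30, "medicaid": 0.22, "state_licensure": 0.48}
)
```

Under a scheme like this, the sensitivity of each program's bill to the negotiated rate is direct, which is why unreviewed historical rates can quietly shift costs among the three programs as requirements change.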
Officials from the Dallas regional office told us that, due to the complexity involved in determining an appropriate state share, states in their region do not have pre-established non-Medicaid state contribution rates. Instead, a staff person reviews state surveyor salaries and ensures that states have apportioned them appropriately between federal and state licensure activity, based on the surveyors' workload from the previous year. Officials from four CMS regions indicated that the rates for states in their regions are not regularly reviewed and in one case have not been reviewed since they were established. CMS guidance does not prescribe how often the rates should be updated, even though requirements for federal and state licensure surveys change over time. A 2002 HHS OIG review also found that states in four of the five regions it reviewed allocated survey costs based on predetermined, historical contribution rates. Because these rates were established in prior years, documentation of the basis for the rates was not available. CMS officials told us that the agency does not collect information from states on their non-Medicaid survey expenditures. As a result, CMS does not know if states are contributing their own funds appropriately (see fig. 6). CMS officials noted limits on the agency's authority to collect state data, particularly regarding licensure activities. In addition, states are not willing to voluntarily disclose information on state funding. For example, officials from the Dallas regional office told us that when they requested this information from Texas, state officials told them they were not entitled to it. However, information on state expenditures could be relevant to federal oversight of state survey activities in certain situations.
For example, if a state requests supplemental funding for shared survey activities—that is, those not exclusively conducted for purposes of state licensure—having information on the state's expenditures for the non-Medicaid share could be relevant in evaluating whether survey costs are equitably shared and whether Medicare is paying more than its fair share for survey activities. Though officials from CMS's central office told us that regional offices should require states to justify any requests for supplemental Medicare funds they submit, three of the five regional offices we interviewed told us that they do not require states to do so. Without examining state justifications, it is difficult for CMS to know whether expenditures in excess of states' initial Medicare allocations represent their non-Medicaid share of state costs. According to CMS guidance, states can request supplemental Medicare funds in two ways. They can submit a memo to their regional office that includes the amount of funds requested and a detailed rationale for why the funds are needed. Alternatively, states can include actual Medicare expenditures in excess of their allocation on the expenditure report they submit at the end of the fiscal year. The state is eligible to receive supplemental Medicare funding as reimbursement for the portion of these expenditures that exceeds the Medicare funds CMS allocated to it during the fiscal year. According to CMS central office officials, both the memo and the amounts reported on states' expenditure forms are subject to review and approval by the regional offices prior to the funding of states' supplemental requests. Central office officials told us that the amount of Medicare supplemental funds requested each year has been substantially more than the amount of funds available to redistribute.
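The supplemental funding arithmetic described above, in which a state is reimbursed for Medicare expenditures above its initial allocation, can be sketched as follows. The proportional proration rule for handling oversubscription, and all dollar figures, are our own illustrative assumptions; the report does not describe how CMS actually allocates the available funds among competing requests.

```python
# Illustrative sketch of supplemental Medicare funding requests, assuming
# (a) a state's eligible request is its expenditures above its initial
# allocation, and (b) proportional proration when total requests exceed the
# pool of unspent funds available to redistribute. The proration rule and
# all figures are hypothetical; CMS's actual method is not specified here.

def eligible_request(actual_expenditures, initial_allocation):
    """Portion of expenditures exceeding the state's initial Medicare allocation."""
    return max(0.0, actual_expenditures - initial_allocation)

def prorate_supplemental(requests, available_pool):
    """Scale each state's eligible request down proportionally when the
    total requested exceeds the funds available for redistribution."""
    total = sum(requests.values())
    if total <= available_pool:
        return dict(requests)
    factor = available_pool / total
    return {state: round(amount * factor, 2) for state, amount in requests.items()}

requests = {
    "state_a": eligible_request(2_400_000, 2_000_000),  # spent 400k over allocation
    "state_b": eligible_request(1_900_000, 2_000_000),  # underspent: no request
    "state_c": eligible_request(3_600_000, 3_000_000),  # spent 600k over allocation
}
# Requests total 1,000,000, but only 250,000 was left unspent to redistribute.
payments = prorate_supplemental(requests, 250_000)
```

Because requests routinely exceed the redistributable pool in this way, only a fraction of each excess can be paid, which is the context for the judgment calls described next.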
Consequently, CMS expects that regional offices will use their judgment regarding the intensity of reviews so that they do not spend time reviewing requests that will not be funded anyway. The level of review conducted by three of the five regional offices we interviewed was limited. Officials from the New York regional office told us that their staff checks to see whether a state completed its workload and whether the state's expenditures reflect what was included in its budget plan. Officials from the Atlanta and Chicago regional offices said that they do not require documentation or conduct audits to verify these requests. However, officials from the Dallas and San Francisco offices told us that their staff follows up with states to verify their need for supplemental funding, such as by asking states for documentation to justify expenditures in excess of their initial Medicare allocations. On at least one occasion, the Dallas region told CMS that some of a state's requested money should be disallowed because the state conducted tier 4 work before completing work in a higher-priority tier. The current approach for funding state surveys of facilities participating in Medicare and Medicaid is ineffective—yet these surveys are meant to ensure that these facilities provide safe, high-quality care. We found serious weaknesses in CMS's ability to (1) equitably allocate more than $250 million in federal Medicare funding to states according to their workload, (2) determine the extent to which funding or other factors affected states' ability to accomplish their workload, and (3) ensure appropriate state contributions. These weaknesses make assessing the adequacy of funding difficult. CMS has made limited progress in ensuring that federal Medicare allocations reflect state workloads. Since 2000, CMS has taken several steps in response to relatively flat, inflation-adjusted federal funding for state surveys, but these efforts have had little impact.
Reducing funding for support contracts—such as one to develop and implement the new Quality Indicator Survey (QIS), a nursing home survey methodology intended to improve the consistency and efficiency of state surveys—provided only about 1 percent more funding to states in fiscal years 2006 and 2007. In our view, the delay in implementing the QIS is problematic, and CMS and beneficiaries would benefit from its implementation well before 2014. Increasing the time between surveys for many facility types had almost no impact on state workloads, and state officials believed many facilities were already surveyed too infrequently. Asking states to develop funding contingency plans could not resolve the underlying problem that states do not know their final Medicare allocations until late in the fiscal year, which can hamper efforts to effectively manage state resources. In addition, while Congress did not provide CMS authority to charge facilities for revisit surveys in fiscal year 2008, revisit fees could offer (1) savings to the Medicare Trust Funds if they reduce the amounts that would otherwise be transferred, and (2) somewhat more predictable funding to the extent the fees do not require annual appropriations. Oversight of clinical labs, which pay user fees, provides a precedent for facility contributions to defray the cost of survey activities. CMS took these steps because it believed that Medicare funding had not kept pace with state workloads. But we found that the required survey workload actually decreased from fiscal years 2000 to 2007, suggesting that resources available in fiscal year 2007 were similar to or slightly greater than those in fiscal year 2000, given the modest 4 percent increase in inflation-adjusted federal funding.
The budget analysis tool that CMS developed to align survey funding with state workloads has been used only incrementally to address state funding inequities, rather than to correct the mismatch between federal allocations and states' current survey workloads. We believe that CMS's concerns about the instability that would be created by changing baseline funding for state survey activities could be mitigated through other means. For example, CMS could limit the annual adjustments for states with shrinking baselines to a fixed percentage of each state's historical funding baseline. In addition, CMS lacks adequate data on states' complaint workloads, a significant weakness in its ability to ensure that it is requesting adequate Medicare funding. Moreover, agency officials believed that the amounts identified for complaint investigations in connection with the President's budget request had not fully funded state complaint surveys. It is difficult to determine the extent to which funding and other factors affected states' ability to accomplish survey workloads. Twenty-three of the 28 states we contacted told us that more funding was needed, and many of these states said that RN salaries were not competitive, which created workforce instability. Although states set surveyor salaries, Medicare allocations that do not support salary increases could result in states laying off staff, further limiting their ability to accomplish survey workloads. For some states, the inability to spend their full initial allocations, rather than the level of funding itself, may have interfered with workload completion. Most of these states told us that underspending was the result of insufficient staff due to retention problems or state hiring freezes. Other states spent more than their initial Medicare allocations and still failed to complete their survey workload.
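The capped-adjustment approach suggested above can be sketched in a few lines. The 5 percent cap and all dollar figures are hypothetical assumptions for illustration, not CMS policy.

```python
# Illustrative sketch of capping annual baseline adjustments: each state's
# allocation moves toward a workload-based target, but the yearly change is
# limited to a fixed fraction of its historical baseline. The 5 percent cap
# and all figures are hypothetical, not CMS policy.

def adjust_baseline(historical_baseline, workload_target, max_annual_change=0.05):
    """Move the allocation toward the target, capping the change at
    max_annual_change * historical_baseline per year."""
    cap = historical_baseline * max_annual_change
    change = workload_target - historical_baseline
    if abs(change) > cap:
        change = cap if change > 0 else -cap
    return historical_baseline + change

# A state whose historical funding exceeds its workload-based target
# shrinks by at most 5 percent per year:
overfunded = adjust_baseline(10_000_000, 8_500_000)   # -> 9,500,000.0

# A state already near its target moves directly to it:
near_target = adjust_baseline(10_000_000, 9_800_000)  # -> 9,800,000.0
```

Applied repeatedly, a cap like this converges each state's allocation to its workload-based target over several years while bounding any single year's disruption, which is the stability concern CMS raised.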
Even if states complete their workload, facility compliance with federal quality and safety standards is not guaranteed. For example, seven states completed their nursing home surveys in fiscal year 2006, but CMS found that they missed serious deficiencies on more than a quarter of federal comparative surveys. CMS lacks information on state contributions, which impedes an overall assessment of the resources available for state surveys. While CMS knows states' Medicare and Medicaid spending, including requests for supplemental federal funding, it has no way to ensure that states contribute their fair share of non-Medicaid state funds. Non-Medicaid state shares for nursing homes vary widely across states, state contribution rates are not determined consistently, and CMS officials do not collect information on such state expenditures. But CMS officials said the agency assumes that the cost of conducting all required surveys is greater than the federal funds provided, so the exact amount each state contributes is less important. Further, states in most regions we interviewed were not required to justify supplemental funding requests. Without examining state justifications, CMS cannot be sure that spending above states' initial Medicare allocations represents their non-Medicaid state share of survey costs. The evidence is mixed on whether federal funding has kept pace with the changes in states' required survey workload—the workload that states would have to complete to meet statutory and CMS survey frequency requirements. On the one hand, the required survey workload decreased nationwide. On the other hand, most states told us that survey frequencies of 6 to 10 years for many facilities could adversely affect beneficiaries. Moreover, it is often difficult to distinguish the impact of funding, staffing, and management on state workloads overall.
We believe that these and other limitations of the current funding approach will continue to frustrate CMS's efforts to support and oversee state survey activities. To address significant shortcomings in the current system for financing and conducting surveys of Medicare and Medicaid facilities, we recommend that the CMS Administrator take the following nine shorter-term actions.

To help ensure that those facilities that have not been surveyed in at least 6 years are in compliance with federal quality standards, we recommend that the CMS Administrator take the following two actions:

- Increase the survey priority assigned to such facilities in the annual instructions given to state survey agencies, with the goal of surveying them as quickly as possible.
- Monitor the progress made by state survey agencies that have a significant number of such facilities.

To ensure that Congress has adequate information on the impact of funding on facility oversight, we recommend that the CMS Administrator take the following two actions:

- Inform Congress of the projected cost of surveying, at least once every 3 years, all facilities that lack statutorily mandated survey frequencies.
- Include information in the President's budget request on projected state complaints and the cost of completing the associated workload.

To help address state survey funding inequities, we recommend that the CMS Administrator:

- Use available tools to adjust the annual baseline Medicare allocations provided to each state.

To improve CMS's ability to differentiate between funding and management issues and help ensure the quality of surveys, we recommend that the CMS Administrator take the following two actions:

- Identify appropriate methodologies to help evaluate the efficiency and effectiveness of state survey activities. One such methodology may be the new Quality Indicator Survey, developed to help ensure the consistency, efficiency, and effectiveness of state nursing home surveys.
  Explore the feasibility of using a similar methodology to survey other Medicare and Medicaid facilities.
- Provide Congress with an estimate of the cost of implementing, over 3 years, the Quality Indicator Survey methodology for nursing homes.

To improve the oversight of state expenditures, we recommend that the CMS Administrator take the following two actions:

- Collect information about current state shares, including the methodologies used to determine them and the dates they were last reviewed.
- Regularly review state shares to ensure that they are accurate, explore ways to obtain information from states on non-Medicaid expenditures where such information is relevant for ensuring that costs are actually shared on an equitable basis, and consider ways to simplify the process of determining state shares.

Over the longer term, we also recommend that the CMS Administrator undertake a broad-based reexamination of the current approach for funding and conducting surveys of Medicare and Medicaid participating facilities. This reexamination should consider issues such as (1) the source and availability of funding, including the possible imposition of user fees, and (2) ways of ensuring an adequate survey workforce with sufficient compensation to attract and retain qualified staff. We provided a draft of this report to HHS for comment. In response, the Acting Administrator of CMS provided written comments. We also received written comments from AHFSA. CMS's and AHFSA's comments are reproduced in appendices VI and VII, respectively. Although CMS disagreed with elements of our survey workload analysis, the agency concurred with 9 of our 10 recommendations. For 2 of these, we recommended that CMS provide Congress with certain information, and the agency indicated that it would do so upon Congress' request. CMS partially concurred with 1 of our 10 recommendations.
While the agency agreed to produce special follow-up reports and have its regional offices contact states with a significant number of facilities that have not been surveyed for lengthy periods, it did not agree to increase the survey priority assigned to facilities that have not been surveyed in at least 6 years with the goal of surveying them as quickly as possible. Instead, CMS noted that it had expanded its risk-based approach in fiscal year 2008 so that the maximum tier 3 survey frequency is a 7-year interval (down from an 8-year average). Additionally, CMS will consider facilities that have not been surveyed in 7 years and that are identified with certain risk factors as part of its tier 2 targeted surveys. As noted in our draft report, many state officials told us that the survey frequency for all facilities whose frequencies are not set by statute should be every 2 to 3 years. CMS concurred with our recommendation to inform Congress of the projected cost of surveying these facilities at least once every 3 years. We continue to believe that all 2,700 facilities that had not been surveyed in more than 6 years as of September 30, 2007 (900 of which had not been surveyed in 10 years or more), should be inspected as soon as possible, regardless of their risk factors. Finally, AHFSA also disagreed with elements of our survey workload analysis, specifically our treatment of complaints and enforcement actions. CMS's and AHFSA's comments and our evaluation are summarized below. Funding trends. CMS noted that surveys are the principal quality assurance system for Medicare and that the portion of the Medicare budget devoted to quality assurance decreased from 0.1 percent in fiscal year 2000 to 0.06 percent in fiscal year 2008. CMS commented that by combining Medicare and Medicaid federal funding in our draft report, we obscured the differences between the two funding sources and the different decisions that face the Congress and executive branch.
We reported aggregate federal funding in our draft report because it is the total federal funding available to support state survey activities. However, we reported Medicare and Medicaid funding levels separately for fiscal years 2000 through 2007 in appendix III and described in the background section how both Medicare and Medicaid fund state survey activities. Examining the change in states' required survey workload. CMS commented that our basic approach to examining the change in states' required survey workload from fiscal years 2000 through 2007 was sound, but disagreed with some elements of our analysis. AHFSA also disagreed with a few elements. Use of tier 3 priorities. CMS commented that we understated the number of facilities subject to state surveys in fiscal year 2007 and omitted other survey activities, such as initial surveys of new providers, which are a tier 4 priority. As we noted in our draft report, however, the nationwide survey workload would still have declined from fiscal year 2000 to fiscal year 2007 if we had included tier 4 surveys. Because CMS's four-tier structure for prioritizing states' survey workload did not exist in fiscal year 2000, we used the fiscal year 2000 survey frequencies required by CMS policy. For fiscal year 1999, CMS's budget for survey activities was increased significantly and CMS expected states to complete all surveys. Our draft report pointed out that CMS subsequently established a system for distinguishing between (1) its policy on survey frequencies (essentially those for fiscal year 2000) and (2) the survey priorities, as reflected in its tier structure, which it holds states accountable for meeting each year in its state performance reviews. For the latter, CMS officials told us that they based their reviews on the requirements in tiers 1 through 3 because they did not believe funding was adequate to survey facilities that were a tier 4 priority.
CMS adopted priorities because of the concern that resources were insufficient to accomplish all of the survey workload but maintained its policy on survey frequencies. CMS's comments indicate that states that conduct initial surveys of new providers (a tier 4 priority) before completing all surveys in tiers 1 through 3 may be required to submit a plan of correction and, in addition, could face other consequences. As such, our workload analysis for fiscal year 2007 used the survey priorities for which CMS held states accountable in its state performance reviews during that fiscal year—tier 1 through 3 priorities. Differentiating between Medicare and Medicaid. CMS attempted to replicate our survey workload analysis but separated it by the source of funding—Medicare and Medicaid. CMS concluded that the Medicare-funded workload increased by up to 20 percent from fiscal year 2000 to fiscal year 2007. First, because we were attempting to measure states' overall required workload, we did not differentiate between funding streams. While the results of CMS's analysis are not inconsistent with ours, the net effect remains a decrease in states' required survey workload when the Medicaid workload is considered. Thus, we reported that the decline from fiscal year 2000 to fiscal year 2007 in the number of nursing homes and intermediate care facilities for the mentally retarded, whose surveys receive significant Medicaid funding, offset overall increases in other facilities, whose surveys are largely Medicare-funded, because these two facility types are the most resource-intensive to survey. Second, in replicating our methodology to incorporate the effect of survey hours on workload, CMS used average survey hours by facility type for fiscal year 2000.
As noted in our draft report, because the yearly CMS survey hour data were not consistent or reliable, we calculated national average survey hours for each facility type across all fiscal years from 2000 through 2007. We used these national averages in our analysis for both fiscal years 2000 and 2007. This could account for some of the difference between CMS's results and ours. Inclusion of 2008 data. CMS commented that our analysis did not include data for fiscal year 2008 and, as such, may not accurately reflect states' current workload. Our analysis was limited to the change in states' required survey workload from fiscal year 2000 to fiscal year 2007 because fiscal year 2008 data were not available when we conducted our analysis. Wherever possible, we noted recent CMS initiatives or regulations that could potentially affect workload, including recent regulations requiring organ transplant center programs to be surveyed and new survey requirements for hospices and end-stage renal disease facilities. AHFSA commented on the costs associated with implementing additional CMS requirements, such as new survey protocols and data-entry time frames. However, CMS's comments acknowledged that not all of the workload associated with its recent initiatives can be quantified. States' complaint workload. Both CMS and AHFSA commented that our analysis of the change in states' required survey workload did not adequately account for the work associated with investigating complaints. AHFSA noted that the states responding to a survey of its members indicated overall growth in complaints over the last 5 years; AHFSA did not quantify the increase. CMS commented that the number of complaints investigated on-site increased by about 13.1 percent from fiscal years 2005 to 2007. In our draft report, we acknowledged that complaint investigations represented a significant portion of states' workload.
Although CMS implemented a new complaint tracking system in fiscal year 2004, officials told us that the agency lacks complete and reliable data on complaints received and investigated. For example, in our draft report we noted that CMS believes some states may be overestimating the number of complaints investigated by 15 percent because they report complaints received and investigated during standard surveys in the complaints database. We included in our draft report a recommendation that the CMS Administrator include information in the President's budget request on projected state complaints and the cost of completing the associated workload, and the agency concurred with our recommendation. Enforcement workload. AHFSA commented that our analysis did not account for the workload associated with enforcement activities. The association noted that decoupling states' responsibilities to conduct surveys, complaint investigations, and enforcement follow-up is impossible. As noted in our draft report, CMS (1) did not have reliable and complete data on revisit surveys from fiscal years 2000 through 2004 and (2) data for fiscal years 2005 through 2007 showed that the revisit workload declined by 4 percent. Because revisits are an indication of enforcement actions, we believe that states' enforcement workload also decreased. Length of an efficient survey. Finally, CMS commented that we did not address how long a survey should take to achieve a quality result. In its written comments, CMS noted that the only relevant hard data are the survey hours that CMS regional office staff devote to federal comparative surveys and that, for nursing homes, these surveys typically take 15 percent to 25 percent longer than the average state survey. As noted in our draft report, CMS officials told us that they did not know how long an efficient survey should take and could not assess whether the considerable interstate variation in the length of surveys was appropriate.
Comparative surveys may not be the best measure of how long a survey should take. Indeed, many officials from the states we contacted during the course of our work told us that comparative surveys were not a good measure. Moreover, our May 2008 report found that when the number of surveyors and time on-site are taken together, federal comparative surveys averaged 12.9 surveyor-days and the corresponding state surveys averaged 12.6 surveyor-days in fiscal year 2007. CMS oversight and state performance standards. CMS commented that in fiscal year 2000, the base year of our analysis, there were few consequences for poor performance and few, if any, effective national measures of survey performance. CMS highlighted the improvements it had since made to its performance system, which we noted in our draft report. CMS commented further that its overall approach to accountability is to communicate workload priorities by organizing them into tiers, initiate consequences for unacceptable performance, and match the strength of consequences with the priority and importance of the work. We acknowledged CMS’s efforts to link states’ performance to workload priorities and, as a result, we focused on changes in the workload that CMS holds states accountable to complete. Future trends. CMS believed that the Medicare-funded survey workload is likely to continue to increase and that, given the overall federal budget situation, it is imperative that the agency design survey methodologies that leverage resources to ensure maximum productivity and effectiveness. CMS highlighted examples of such productivity enhancements, including implementing the Quality Indicator Survey methodology for conducting nursing home surveys nationwide, targeting resources to surveys of the most at-risk facilities, and investing in methodologies that help states address their staffing barriers. 
AHFSA also noted staffing challenges, such as (1) vacant or frozen surveyor positions and (2) a lack of cross-trained surveyors who can survey more than one type of facility. We noted some of these initiatives and challenges in our draft report and, to the extent that we were able, we indicated how these issues might affect states' survey workload. We also made specific recommendations to the CMS Administrator for improving the agency's ability to differentiate between funding and management issues and to help ensure the quality of surveys. CMS and AHFSA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. This appendix provides a more detailed description of our scope and methodology. Centers for Medicare & Medicaid Services (CMS) budget and expenditure data. To identify the trends in federal funding for survey activities, we reviewed the President's budget requests and analyzed the funding CMS expended for survey activities from fiscal years 2000 through 2008. We selected fiscal year 2000 as the starting point because of the significant increase in funding for survey activities in fiscal year 1999 to support an increased workload associated with the Nursing Home Oversight Improvement Program.
We also analyzed data provided by CMS on state survey expenditures from fiscal years 2000 through 2007, including the provision of supplemental funds to states that spent more than their initial Medicare allocations by redistributing unspent state allocations. To understand the Medicare funding allocation process, we reviewed CMS’s State Survey and Certification Budget Call Letters or Mission and Priority Documents for fiscal years 2000 through 2008; CMS uses these documents to (1) provide instructions to states on preparing budget requests for federal funds, (2) communicate anticipated federal Medicare funding levels to states, and (3) communicate state survey priorities based on the requested funding. We also discussed the survey budget process with CMS officials, including their use of the Budget Analysis Tool, which the agency began using in 2005 to better calibrate federal funding with states’ survey workloads. Because of its limited use, we did not evaluate the tool’s effectiveness. To gain a state and regional office perspective on the budget process and how it had changed over time, we interviewed regional office officials as well as state officials in two states that spent more (Florida and New York) and two states that spent less (Ohio and Washington) than their initial Medicare allocations for fiscal years 2000 through 2006 and reviewed periodic state expenditure reports. CMS databases on state survey activities. To determine the extent to which states completed their survey workloads, we analyzed CMS data on the results of the fiscal year 2006 state performance review, the most recent data available at the time we conducted our analysis. We subsequently compared state completion rates to those from the fiscal year 2007 review when that data became available. 
In addition, we used CMS's On-Line Survey, Certification, and Reporting (OSCAR) system data to determine the number of facilities with survey frequencies established by CMS that states had not surveyed within 6 and 10 years. We also used OSCAR and CMS documents for fiscal years 2000 and 2007 to examine changes in states' required survey workload—the workload that states would have to complete to meet statutory and CMS survey frequency requirements. We then analyzed the effect of the following three factors on states' required survey workload: (1) changes in the number of facilities subject to state surveys, including state validation surveys of accredited facilities, (2) changes in intervals between surveys from fiscal year 2000 to fiscal year 2007 for facility types that lack statutory survey time frames, and (3) differences in the time devoted to surveys across facility types. First, we calculated the proportion of each facility type subject to standard and validation surveys in every state based on survey frequency requirements for fiscal years 2000 and 2007. Second, we multiplied this result by national average survey hours for each facility type to estimate survey workload in hours and computed the percentage change in the required survey workload between fiscal years 2000 and 2007 (see app. V). We used national average hours instead of state average survey hours for each facility type because surveys for many facility types were too infrequent at the state level to produce reliable data. We asked CMS to provide the survey hour data because OSCAR data on state-specific survey hours were incomplete for fiscal years 2000 through 2004. Because we used national average survey hours, our analysis does not reflect differences in average facility size across states; it also does not reflect any differences in survey hours over time.
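The workload estimate described above can be sketched in a few lines of code. All numbers below (facility counts, survey intervals, and national average survey hours) are invented for illustration; they are not GAO's or CMS's data.

```python
def annual_workload_hours(facility_types):
    """Estimated required survey hours per year:
    (facilities / years between surveys) * national average survey hours."""
    return sum(count / interval * avg_hours
               for count, interval, avg_hours in facility_types)

# (number of facilities, years between surveys, national average survey hours)
fy2000 = [(300, 1, 120.0),   # hypothetical statutory-frequency facility type
          (80, 6, 40.0)]     # hypothetical nonstatutory type, 6-year interval
fy2007 = [(280, 1, 120.0),   # fewer of the time-consuming facilities
          (110, 10, 40.0)]   # more facilities, but a 10-year interval

change_pct = (annual_workload_hours(fy2007) /
              annual_workload_hours(fy2000) - 1) * 100
print(f"Change in required survey workload: {change_pct:+.1f}%")
```

As in the analysis above, using national average hours means this sketch cannot reflect state-to-state differences in average facility size or changes in survey hours over time.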
Although CMS survey hour data from fiscal years 2000 to 2007 showed an increase of about 4 percent overall, this increase was not gradual from year to year, and the year-to-year increases and decreases could not be explained. Therefore, we determined that the yearly survey hour data were not consistent or reliable. In assessing how states' survey workload changed over this time period, we also considered state complaint investigations, survey process improvements that increased survey hours, and facility revisits required to ensure that serious deficiencies had been corrected. We did not attempt to incorporate these state survey activities into our workload analysis because the data lacked reliability and consistency and the increases in survey hours were modest for most facility types. CMS oversight of states' use of federal funds. To assess the effectiveness of CMS oversight of states' use of survey funds, we reviewed CMS's State Operations Manual, which sets expectations for both CMS regional offices and states on budgeting and expenditure reporting. We also examined CMS's state performance review protocols, which included a standard on state budget practices and financial reporting, and several audits of states' survey expenditures conducted by the Department of Health and Human Services (HHS) Office of Inspector General in fiscal years 2001 and 2002. We discussed expectations of how CMS regional offices should carry out oversight with central office officials and staff from five regional offices and also obtained the perspective of state officials. State perspectives. During early data collection for this study, we interviewed state officials from Florida, New York, Ohio, and Washington on issues such as the survey budget process, reasons for over- or underspending federal Medicare allocations, completion of CMS workload priorities, state licensure requirements, and staff recruitment and retention.
Subsequently, we sent e-mail questionnaires to 27 other states covering similar issues as well as questions on federal oversight, and we followed up with the 4 states already interviewed. We used the following five factors to select these additional states; at the time of our state selection, data for fiscal year 2006 were the most recent available. Expenditure of federal Medicare allocations. We selected states that spent at least 5 percent more or less than their total federal Medicare budget from fiscal years 2000 through 2006 and whose total over- or underspending was at least $500,000. Accomplishment of CMS workload priorities. We selected states that accomplished 50 percent or fewer of CMS's workload priorities in tiers 1 through 3 for fiscal year 2006. Quality of nursing home surveys. To gauge the quality of states' nursing home surveys, which most states were able to complete, we analyzed the results of federal comparative surveys conducted from fiscal years 2002 through 2007 using CMS's federal monitoring survey database. We reported the results of this analysis in May 2008. We selected states in which at least 25 percent of federal surveys found that state surveys had missed serious deficiencies. Number of facilities. Using CMS's OSCAR database, we selected states that had experienced an increase or decrease of at least 20 percent in the number of facilities from fiscal years 2000 through 2006. Geographic distribution. We selected at least two states from each of the 10 CMS regions. Twenty-four of the 28 additional states responded to our e-mail questionnaire; counting the 4 states contacted initially, we collected information from a total of 28 states.
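The first four selection factors are per-state screens and can be expressed as simple predicates. The sketch below is illustrative only: the field names and sample values are invented, and two simplifications are our assumptions: the factors are treated as alternative (OR'd) screens, and the geographic-distribution factor is omitted because it applies across states rather than to any one state.

```python
def meets_spending_screen(state):
    # Factor 1: spent at least 5 percent more or less than the total federal
    # Medicare budget for FY2000-2006, with total over- or underspending
    # of at least $500,000.
    deviation = state["spent"] - state["budget"]
    return abs(deviation) >= 0.05 * state["budget"] and abs(deviation) >= 500_000

def meets_any_screen(state):
    return (meets_spending_screen(state)
            # Factor 2: completed 50 percent or fewer of tier 1-3 priorities.
            or state["priorities_completed_pct"] <= 50
            # Factor 3: at least 25 percent of federal comparative surveys
            # found that state surveys had missed serious deficiencies.
            or state["missed_serious_pct"] >= 25
            # Factor 4: facility count changed by at least 20 percent.
            or abs(state["facility_change_pct"]) >= 20)

# Invented sample records (all field values are hypothetical).
states = [
    {"name": "A", "spent": 11_000_000, "budget": 10_000_000,
     "priorities_completed_pct": 80, "missed_serious_pct": 10,
     "facility_change_pct": 5},
    {"name": "B", "spent": 10_100_000, "budget": 10_000_000,
     "priorities_completed_pct": 45, "missed_serious_pct": 10,
     "facility_change_pct": 5},
    {"name": "C", "spent": 10_000_000, "budget": 10_000_000,
     "priorities_completed_pct": 90, "missed_serious_pct": 5,
     "facility_change_pct": 3},
]
selected = [s["name"] for s in states if meets_any_screen(s)]
```

Here state A passes the spending screen, state B passes the workload-priority screen, and state C passes none.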
In addition, we interviewed officials from the Association of Health Facility Survey Agencies, an organization that represents state survey agencies. Data reliability. We verified the consistency and reliability of the documentation and data that CMS provided through various means. On the basis of CMS's documentation, we determined that CMS's data on state survey expenditures from fiscal years 2000 through 2007 were reliable for examining state expenditures and the allocation of supplemental funds to states. We determined that CMS's state performance reviews were reliable to understand states' completion of CMS survey priorities because CMS uses this information for the same purpose. In addition, CMS generally recognizes OSCAR data to be reliable, and throughout the course of our work we discussed our analysis of OSCAR data with CMS officials to ensure that the data accurately reflected state survey activities. We tested the data provided by CMS on survey hours for consistency and compared the data to survey data from OSCAR. We also interviewed CMS officials to learn about how they use the data and to clarify any data discrepancies. We reviewed state-reported data for consistency and plausibility and followed up with state officials to retrieve missing data and resolve data inconsistencies. In general, we determined that the data provided by the states were accurate for our purposes. Table 7 shows the overall survey frequencies by facility type against which CMS measures each state's completion of its survey workload. In fiscal year 2006, in response to available Medicare funding, the agency began adjusting survey frequencies for facility types that lack statutory survey time frames. According to CMS officials, these adjustments did not alter its policy on survey frequency, which remains at about every 6 years for most facilities with nonstatutory survey frequencies.
Appendix III: Federal Funding for Survey Activities in Actual and Inflation-Adjusted Dollars, Fiscal Years 2000 through 2007 [Table omitted; it showed the change from 2000 to 2007 in actual and inflation-adjusted dollars, including a 9.0 percent change.] Appendix IV: Number of and Percentage Change in Facilities Subject to State Standard and Validation Surveys, 2000 to 2007 [Table omitted; it listed, by facility type, the number of and percentage change in facilities subject to state standard and validation surveys. Overall, facilities subject to state standard surveys increased by 3,302 (7 percent) and facilities subject to state validation surveys increased by 2,089 (39 percent), a combined increase of 5,391 facilities (10 percent).] The total number of organ transplant centers is as of January 2008; collectively these centers operated 844 organ transplant programs. Each transplant center may have more than one organ-specific program, each of which will be surveyed separately. In order to determine how states' required survey workload—the workload that states would have to complete to meet statutory and CMS survey frequency requirements—has changed from fiscal year 2000 to fiscal year 2007, we analyzed OSCAR and CMS data for fiscal years 2000 and 2007. First, we determined percentage changes in the number of facilities subject to state surveys, including state validation surveys of accredited facilities. Second, we combined the effects of the number of facilities subject to standard and validation surveys with the survey frequency requirements for fiscal years 2000 and 2007. Third, we incorporated the effect of survey hours for each facility type using average national survey hours to determine the change in states' required survey workload between fiscal years 2000 and 2007. States are listed from highest to lowest based on the percentage change in the number of facilities subject to surveys.
In addition to the contact named above, Walter Ochinko, Assistant Director; Kaycee M. Glavich; Leslie V. Gordon; Thomas Han; Keyla Lee; Jessica C. Smith; and Timothy J. Walker made key contributions to this report. Nursing Homes: Federal Monitoring Surveys Demonstrate Continued Understatement of Serious Care Problems and CMS Oversight Weaknesses. GAO-08-517. Washington, D.C.: May 9, 2008. Nursing Home Reform: Continued Attention Is Needed to Improve Quality of Care in Small but Significant Share of Homes. GAO-07-794T. Washington, D.C.: May 2, 2007. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Clinical Labs: CMS and Survey Organization Oversight Is Not Sufficient to Ensure Lab Quality. GAO-06-879T. Washington, D.C.: June 27, 2006. Clinical Lab Quality: CMS and Survey Organization Oversight Should Be Strengthened. GAO-06-416. Washington, D.C.: June 16, 2006. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Deaths: Arkansas Coroner Referrals Confirm Weaknesses in State and Federal Oversight of Quality of Care. GAO-05-78. Washington, D.C.: November 12, 2004. Medicare: CMS Needs Additional Authority to Adequately Oversee Patient Safety in Hospitals. GAO-04-850. Washington, D.C.: July 20, 2004. Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington, D.C.: July 16, 2004. Dialysis Facilities: Problems Remain in Ensuring Compliance with Medicare Quality Standards. GAO-04-63. Washington, D.C.: October 8, 2003. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003.
Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Medicare Home Health Agencies: Weaknesses in Federal and State Oversight Mask Potential Quality Issues. GAO-02-382. Washington, D.C.: July 19, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Medicare Quality of Care: Oversight of Kidney Dialysis Facilities Needs Improvement. GAO/HEHS-00-114. Washington, D.C.: June 23, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.

Americans receive care from tens of thousands of health care facilities participating in Medicare and Medicaid.
To ensure the quality of care, CMS contracts with states to conduct periodic surveys and complaint investigations. Federal spending on such activities totaled about $444 million in fiscal year 2007; states are expected to contribute their own funds both through the Medicaid program and apart from that program. GAO evaluated survey funding, state workloads, and federal oversight of states' use of funds since fiscal year 2000 to determine if federal funding had kept pace with the changing workload. GAO analyzed (1) federal funding trends from fiscal years 2000 through 2007 and CMS's methodology for determining states' allocations and spending, (2) CMS data on the number of participating facilities and completed state surveys, and (3) CMS oversight of state spending. GAO interviewed state officials and collected data from 28 states. Federal funding for state surveys increased from fiscal years 2000 through 2002 but was nearly flat from fiscal years 2002 through 2007. In inflation-adjusted terms, funding fell 9 percent from fiscal years 2002 through 2007. CMS has made incremental adjustments to improve its management of state allocations. It shifted federal funding from support contracts to surveys, increasing state allocations about 1 percent in fiscal years 2006 and 2007. For some facilities without statutory survey frequencies, CMS increased the time between surveys from 6 years to 10 years--a schedule that may further increase the chance of undetected quality problems. CMS also developed a budget analysis tool to help address the mismatch between federal allocations and states' current survey workloads, but use of the tool has been limited. 
Most states, including those that spent more than their initial federal allocations, did not complete CMS's survey workload priorities in fiscal years 2006 and 2007, though the required survey workload--the workload that states would have to complete to meet statutory and CMS survey frequency requirements--decreased about 4 percent nationwide from fiscal years 2000 to 2007. A decrease in the number of the most time-consuming and frequently surveyed facilities, such as nursing homes, offset the increase in other facilities. CMS lacked consistent and reliable data to measure workload changes in other areas, such as complaint investigations. States reported that workforce instability due to noncompetitive surveyor salaries and hiring freezes hindered their workload completion, but CMS has little influence over state hiring. Among seven states that completed their nursing home surveys, at least 25 percent of the federal comparative surveys conducted in each state found that state surveys had missed serious deficiencies. According to CMS, the performance of one of these states raised concerns about the state's management of survey activities. There is little oversight of state non-Medicaid contributions intended in part to reflect the benefit states derive from participating in federally sponsored oversight of facilities. State contribution rates have not been reviewed in recent years. CMS officials told GAO that the agency does not collect information on state expenditures to help ensure that states are contributing funds consistent with those rates, noting limits on their authority to require submission of such data. CMS believes, however, that federal funding may not be sufficient and that state spending above the initial Medicare allocation represents state funds in addition to the non-Medicaid share. The evidence is mixed on whether federal funding has kept pace with the changing workload.
The required survey workload decreased nationwide but most states told GAO that survey frequencies of 6 to 10 years for many facilities could adversely affect beneficiaries. Moreover, distinguishing the impact of funding, staffing, and management on state workloads is difficult. GAO believes that these and other weaknesses in CMS's current funding approach will continue to frustrate the agency's efforts to support and oversee state survey activities. |
The Board is responsible for overseeing a complex broadcast environment that spans 5 broadcast entities with varying missions, 84 discrete language services, changing consumer habits and preferences, and a technology environment that presents constant new challenges and opportunities. The Board currently oversees a staff of almost 3,200 and a worldwide network of leased communication satellite services and 38 owned or leased transmission stations. The Board oversees the broadcast of almost 2,000 hours of original programming (not rebroadcasts) each week. The Board estimates that the Voice of America's broadcasts alone reach a worldwide listening audience of 91 million people each week. Radio Free Europe/Radio Liberty broadcasts reach an estimated 16 million listeners each week. Radio Free Asia and Radio/TV Marti have difficulty obtaining reliable audience estimates due to the closed nature of target broadcast countries. These audiences are reached through a variety of means, including direct radio and television broadcasts from U.S.-owned or -leased transmitters, local rebroadcasters (known as affiliates) who carry U.S. international broadcasting content on their stations, and the Internet. The U.S. international broadcasting budget for fiscal year 2000 is about $420 million. The Board, the Voice of America, Radio/TV Marti, and Worldnet are federal entities and receive funding directly from Congress. Radio Free Europe/Radio Liberty and Radio Free Asia operate as independent, nonprofit corporations and are funded by grants from the Board. The Board's current organizational structure is illustrated in figure 1. While this figure shows a reporting relationship from the Voice of America, Worldnet, and Radio/TV Marti to the Director of the International Broadcasting Bureau, these broadcast entities have a direct reporting relationship with the Board regarding all programming issues.
The Acting Director of the International Broadcasting Bureau told us that his organization provides consolidated technical and support services to client broadcasters; however, programming decisions are handled by the respective broadcast entities and the Board. As noted earlier, the central focus of U.S. international broadcasting is on reaching audiences that are underserved by their local media. According to Freedom House's year 2000 survey of press freedom, most countries rated as "not free" are located in Africa, the Middle East, and Asia (see app. I for a reproduction of Freedom House's current world map of press freedom). While all five broadcast entities share the core mandate of reaching underserved populations, a key distinction among the entities is that the Voice of America and Worldnet broadcast to a global audience, while Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio/TV Marti serve as "surrogate" broadcasters in their respective regions and substitute for local media in countries where a free and open press is deemed not to exist or has not been fully established. In addition to adhering to a global mission for U.S. broadcasting, each broadcast entity has its own broadcast mission. As described in public documents and by Board officials, the Voice of America provides accurate and credible international, regional, and country-specific news to a global audience, with a particular emphasis on supplying information relating to the United States. However, in Africa, where the Voice of America serves a surrogate role, greater emphasis is given to news of local interest. The Voice of America meets its mandate to broadcast the U.S. position on various foreign policy matters by including the views of U.S. officials in its regular programs and through daily editorials that are identified as representing the views of the U.S. government. It also broadcasts a number of public affairs programs which focus on discussions of U.S.
policy by policymakers and experts. Radio Free Europe/Radio Liberty focuses on providing regional and local news to emerging democracies in Central Europe and the former Soviet Union, and to Iran and Iraq. Radio Free Asia and Radio/TV Marti concentrate on providing news of local interest to audiences in Asia and Cuba, respectively, who generally do not have access to a free and open press. Figure 2 shows the regional coverage of the Voice of America, Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio/TV Marti. Shortwave broadcasting has dominated the history of U.S. international broadcasting for over 50 years. Over the past decade, however, the range of media options available to many listeners around the world has expanded to include local AM/FM programming, television, and the Internet. This diversified media environment has greatly increased the complexity of the strategic decisions the Board faces. These transmission modes and certain issues surrounding their use are described in appendix II. The Board responded to the $75-million funding cap placed on Radio Free Europe/Radio Liberty and related cost-cutting expectations by relocating to virtually rent-free quarters in Prague, Czech Republic; reducing staff; and forming local broadcast partnerships in two cases. The Board achieved further savings by consolidating Radio Free Europe/Radio Liberty and Voice of America broadcast schedules, consolidating Radio Free Europe/Radio Liberty and Voice of America transmission operations under the International Broadcasting Bureau, and implementing digital sound recording and editing technology in Prague. One key cost-cutting action that has not been implemented was the original expectation in the 1994 act that Radio Free Europe/Radio Liberty would receive private rather than public funding after the end of calendar year 1999. 
Based on its analysis, the Board concluded that privatization was not a feasible option due to the lack of tangible business assets (such as transmission facilities or broadcast frequencies) of interest to commercial buyers. The Foreign Relations Authorization Act for Fiscal Years 2000 and 2001 (sec. 503 of App. G of P. L. 106-113) amended the original expectation regarding privatization to require that broadcast operations to a given country should be phased out when there is clear evidence that democratic rule has been established and that balanced, accurate, and comprehensive news and information is widely available. In line with congressional expectations, Radio Free Europe/Radio Liberty reduced its budget from $208 million in fiscal year 1994 to approximately $71 million in fiscal year 1996 by taking the following actions. In 1995, Radio Free Europe/Radio Liberty relocated its headquarters from Munich, Germany, to quarters in Prague, Czech Republic, provided by the Czech Republic as a public service. In conjunction with the move to Prague, Radio Free Europe/Radio Liberty reduced its total staffing by almost 1,200 individuals, or almost 75 percent of its workforce. Radio Free Europe/Radio Liberty and Voice of America officials coordinated their respective broadcast schedules and eliminated over 300 weekly broadcast hours in overlapping and duplicative programming. The Polish and Czech language services were reconstituted as separate, nonprofit corporations. Radio Free Europe/Radio Liberty transmission facilities were turned over to the International Broadcasting Bureau in 1995 in connection with the consolidation of engineering and technical operations under the Bureau. Prior to this consolidation, Radio Free Europe/Radio Liberty controlled a network of six transmission stations located in Germany, Portugal, and Spain. The two stations in Portugal were closed as a result of the consolidation.
International Broadcasting Bureau officials estimate that the consolidation of engineering and technical operations initially resulted in more than $32 million in annual recurring savings and that current annual savings have grown to more than $50 million. A digital sound recording and editing platform was installed in connection with the move to Prague. This technology, under appropriate circumstances, allows one individual to produce a radio broadcast that previously would have required the services of an announcer, a producer, and a sound technician using the analog recording and editing technology that had been used in Munich. One Radio Free Europe/Radio Liberty official noted that approximately 75 percent of the station’s output lent itself to the streamlined mode of production enabled by digital technology. The Board completed its first annual language service review in January 2000 and plans to use the results of this review to strategically reallocate approximately $4.5 million in program funds across broadcast regions on the basis of priority and impact ratings assigned to each language service. The priority ratings reflected a number of factors, including the language service’s contribution to furthering U.S. strategic interests, audience size, and other variables. The language service’s impact was based on the mass audience size and the number of “elite” (that is, government and other influential decisionmakers) listeners reached. The Board plans to use next year’s language service review to examine the issue of duplication in program content among the Voice of America and surrogate language services. We also found overlap in overseas news-gathering resources among broadcast entities. This is a potentially important duplication issue that the Board has not reviewed. We raised a similar issue in our 1996 report reviewing potential budget reduction options. 
Board officials explained that a comprehensive language service review was not completed until January 2000 because the Board lacked adequate audience research on the number and type of listeners for such a review. Starting in 1997, the Board increased the budget devoted to audience research and in 1999 tasked the International Broadcasting Bureau's Office of Strategic Planning with developing a comprehensive set of program and performance data to be used as the basis for the comprehensive review of language services. Board members assigned priority and impact (audience) ratings to each language service as a basis for reallocating resources. The evaluation criteria used for the priority ratings included potential audience size, U.S. strategic interests, press freedom, economic freedom, and political freedom. For example, a service's contribution to furthering U.S. strategic interests was scored on the basis of inputs received from a variety of sources, including the White House, the National Security Council, the State Department, and applicable congressional committees. For the impact ratings, the Board focused on audience size and composition as key performance measures. The Board also evaluated other data, such as the language service's program quality, operating budget, broadcast hours, signal strength, and affiliate stations, to identify approaches for increasing listening rates in selected countries. Audience data were based on research conducted by the International Broadcasting Bureau's Office of Audience Research and the InterMedia Survey Institute, which provided data on both audience size and elite listening rates. Appendix III contains further details on the criteria and related processes used to support the Board's language service review process. The Board used the language service evaluation criteria to develop priority/impact ratings for 69 of the Board's 84 language services.
As shown in table 1, the Board used these ratings to develop a matrix that identified higher priority/higher impact services, higher priority/lower impact services, lower priority/higher impact services, and lower priority/lower impact services. The Board intends to use this information to strategically reallocate approximately $4.5 million in language service funds from emerging democracies in Central and Eastern Europe to several African countries and selected countries in other regions. The review resulted in 21 language service reduction recommendations, 15 recommended service enhancements, and a call for the further review of seven low-performing and five duplicate language services. Language services rated as higher priority were concentrated in countries with a large potential listening audience; low press, political, and economic freedom; and high strategic interest to the United States. Higher and lower impact scores were determined on the basis of percentage weekly listening rates for both mass and elite audiences. Services with listening rates below 5 percent for mass listeners and 15 percent for elite listeners were rated as having lower impact. Services that ranked above these thresholds were rated as having higher impact. According to the Board, next year’s language review will include an assessment of overlapping language services among the five U.S. broadcast entities. Board officials told us that the strategy of duplicating language services has been designed to allow U.S. international broadcast entities to achieve their respective missions by offering different program content in the same language. Nonetheless, the Board said in a written evaluation of this year’s language service review that it is essential that the Board revisit the respective roles of the broadcasting services in light of evolving foreign policy and geopolitical and budget realities in the new century. 
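The threshold rule described above amounts to a simple classification. The sketch below is purely illustrative: the 5 percent mass and 15 percent elite thresholds come from the review, but the report does not spell out how mixed cases (above one threshold, below the other) were treated, so this sketch assumes a service is lower impact only when it misses both thresholds; all service names and listening rates are invented.

```python
# Illustrative sketch of the Board's priority/impact matrix logic.
# The 5% (mass) and 15% (elite) weekly listening-rate thresholds come from
# the review; treatment of mixed cases is an assumption, and all example
# figures are invented.

def impact_rating(mass_rate: float, elite_rate: float) -> str:
    """Lower impact when weekly listening rates miss both thresholds."""
    if mass_rate < 5.0 and elite_rate < 15.0:
        return "lower impact"
    return "higher impact"

def matrix_cell(priority: str, mass_rate: float, elite_rate: float) -> str:
    """Place a service in one of the four matrix cells shown in table 1."""
    return f"{priority} priority/{impact_rating(mass_rate, elite_rate)}"

# Invented examples:
print(matrix_cell("higher", 2.0, 10.0))  # higher priority/lower impact
print(matrix_cell("lower", 8.0, 20.0))   # lower priority/higher impact
```

Under this reading, a service in the higher priority/lower impact cell would be a candidate for the kind of further review the Board recommended for low-performing services.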
The Board intends to use the language service review next year to look at program duplication between the Voice of America and surrogate language services, such as broadcasts to countries of the former Soviet Union, and to determine whether this overlap effectively serves U.S. interests on a country-by-country basis. Figure 3 shows the languages in which both the Voice of America and a surrogate service broadcast. While the Board intends to review the issue of program content duplication next year, it does not expect to explicitly review the duplicate news resources maintained by broadcast entities overseas. The Voice of America, Radio Free Europe/Radio Liberty, and Radio Free Asia each maintain field offices and freelance journalists in their respective regions. Voice of America resources overlap with those deployed by Radio Free Europe/Radio Liberty and Radio Free Asia in their respective regions. For example, Radio Free Europe/Radio Liberty has a combined total of about 700 bureau staff and freelance journalists covering its broadcast area. The Voice of America has a combined staff of about 150 in the same region. In addition to the issue of overlap, broadcasting officials noted that news-gathering resources are not shared across broadcast entities. For example, one Voice of America language Division Director noted that news feeds from Voice of America overseas bureaus are not shared with Radio Free Asia and that Radio Free Asia news feeds are not shared with the Voice of America. The Division Director said, “They do their work, and we do ours.” A Radio/TV Marti employee noted that neither the Voice of America nor Radio Free Europe/Radio Liberty shares relevant news items of interest to Radio/TV Marti listeners. As an example, news from Russia is not directly available to the station, because Radio/TV Marti does not have overseas bureaus or freelance journalists. 
We reported on a similar issue in our 1996 report on budget reduction options for the U.S. Information Agency. In that report, we noted areas where elimination of existing overlap could yield management improvements and cost reductions. One area we highlighted was the potential for further consolidation of overseas news bureaus and other broadcasting assets. The report cited the overlap in news-gathering resources deployed by the Voice of America and Radio Free Europe/Radio Liberty in Moscow as an example of a potential area for consolidation. Table 2 provides details on the number of bureaus, bureau staff, and freelance journalists deployed by each broadcast entity, along with related fiscal year 2000 funding data. The need to manage overseas resources effectively is heightened by the fact that several broadcasting officials commented that they do not have adequate news-gathering resources and that product quality has suffered as a result. For example, a Radio/TV Marti official told us that a lack of resources has prevented the station from sending journalists to domestic locations outside the Miami area and overseas to report on news stories of interest to the Cuban people. A Radio Free Asia language Director noted that her service has only $500 a month to pay for reports from freelance journalists that cost $50 to $100 per report. She noted that this level of funding is not sufficient to produce original and up-to-date programming. Radio Free Asia officials have since told us that freelance budgets have been adjusted to fully fund all language services’ projected requirements for the remainder of fiscal year 2000. The Board has not yet developed a strategic planning and performance management system that provides a high level of assurance that resources are being used in the most effective manner possible. The key components of this system are Results Act planning, the annual language service review, and the program reviews of individual language services. 
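The Radio Free Asia freelance budget constraint cited above is simple arithmetic: $500 a month against reports costing $50 to $100 each buys only 5 to 10 reports per month. A quick check, using only the figures from the text:

```python
# Arithmetic behind the Radio Free Asia freelance-budget constraint cited
# in the text: $500 per month, $50 to $100 per freelance report.
monthly_budget = 500          # dollars available per month
cheapest, priciest = 50, 100  # reported cost range per report, in dollars

max_reports = monthly_budget // cheapest   # best case
min_reports = monthly_budget // priciest   # worst case
print(f"{min_reports} to {max_reports} reports per month")  # 5 to 10 reports per month
```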
The Board’s fiscal year 2001 Results Act performance plan is deficient because of missing or imprecise performance goals and indicators and a lack of key implementation strategies and related resource requirements. In addition, the lack of a standard program review approach and audience goals for individual language services limits the usefulness of the program reviews that the broadcast entities conduct to assess the content and presentation of their individual language service programs. As a newly independent federal entity, the Board has full responsibility for implementing its strategic planning and performance management system. A key component of such a system is Results Act planning. Under the Results Act, executive agencies are required to prepare 5-year strategic plans that set the general direction for their efforts. Agencies then develop annual performance plans that establish the connections between long-term strategic goals outlined in the strategic plan and the day-to-day activities of program managers and staff. Finally, the act requires that each agency produce an annual performance report on the extent to which it is meeting its annual performance goals and the actions needed to achieve or modify those goals that have not been met. Board officials pointed out that they have made considerable progress in implementing a strategic planning and performance management system and that they submitted a performance report in March 2000 as required. The Board’s fiscal year 2001 performance plan includes two strategic objectives that are not supported by accompanying performance goals and indicators. First, the performance plan lists encouraging the development of a free and independent media as a strategic objective. This reflects one of the objectives embodied in the 1994 Broadcasting Act, which calls for training and technical support for independent indigenous media, provided through government agencies or private U.S. entities. 
The second strategic objective lacking supporting performance goals and indicators relates to the Board’s need for comprehensive and up-to-date audience research data. Again, the 1994 Broadcasting Act stipulates that U.S. international broadcasting efforts should be based on reliable audience research data. The Board recognizes that its performance plan has some limitations and has formed a Results Act indicators review team to address them. Table 3 shows the strategic objectives and the performance goals and indicators contained in the Board’s fiscal year 2001 performance plan, which was included with the agency’s fiscal year 2001 budget submission to Congress. This performance plan supports the Board’s stated mission of using U.S. international broadcasting to encourage the development and growth of democratic values in support of the diplomatic, humanitarian, and economic goals of the United States. The array of programs and accurate information that U.S. international broadcasting strives to provide foreign audiences worldwide is intended to help people understand democratic ideals, civil governance, free market economics and trade, and respect for the rule of law. Of the performance goals and indicators shown in table 3, Board officials identified audience size as the most important for assessing the extent to which U.S. international broadcasting is achieving its mission. Audience size indicates how many people around the world are tuning in to information intended to help them understand democratic ideals, civil governance, and the rule of law. 
However, the Board uses only global audience size estimates by broadcast entity to set performance goals and track performance. For example, the fiscal year 2001 performance plan lists the Voice of America’s current listening audience at 91 million and sets a performance target of 92 million for fiscal year 2001. A January 1999 memo provided instructions on preparing submissions to the fiscal year 2001 performance plan; it invited units to suggest potential program enhancements and provide a memo describing the impact these enhancements would have on such performance measures as audience size. The instructions also called for a description of how the actual impact of such program enhancements would be measured. However, this guidance did not discuss the systematic establishment of specific audience targets by language service or the method for monitoring such targets to provide meaningful performance data (such as the number of language services achieving target performance levels each year) for inclusion in the Board’s annual performance plan. The Board acknowledges in its performance plan that changes in estimated global listening audiences from year to year do not necessarily indicate a “genuine” increase in listeners because better survey techniques may simply have identified additional listeners not included in earlier estimates. In addition, the International Broadcasting Bureau’s Office of Research reported that the Voice of America’s global estimate should be taken only as a rough indication of the number of listeners, with a potentially wide margin of error. 
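A per-service summary measure of the kind the guidance lacked — for example, the number of language services achieving their audience targets each year — would be straightforward to compute once targets exist. The sketch below is hypothetical; all service names, targets, and listening rates are invented for illustration.

```python
# Hypothetical sketch: summarizing per-service audience targets rather than
# relying on a single global audience estimate. All service names, targets,
# and actual listening rates below are invented.
services = {
    "Service A": {"target": 4.0, "actual": 5.2},  # weekly listening rate, %
    "Service B": {"target": 6.0, "actual": 3.1},
    "Service C": {"target": 2.5, "actual": 2.5},
}

# Services meeting or exceeding their targets.
met = [name for name, s in services.items() if s["actual"] >= s["target"]]
print(f"{len(met)} of {len(services)} services met their audience targets")
```

A figure of this kind, broken out by language service, would flag the regional and country-level changes that the Office of Research warned a single global estimate can obscure.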
The report further noted that “most of Voice of America’s audience is heavily concentrated in a small number of countries; as a result, exclusive reliance on the global estimate as a measure of effectiveness may obscure important changes that occur from year to year at the regional or country level.” Radio Free Asia officials have pointed out that Radio Free Asia is relatively new and has no effective means to advertise its services in the closed target countries. Further, these officials said that it is very difficult to obtain reliable audience size estimates. Thus, the officials believed that audience size would not be an adequate measure of Radio Free Asia’s performance at this time. A second problem with this key performance indicator is that the performance plan makes no distinction between mass and elite (that is, government and other influential decisionmakers) audiences and references only mass listening audiences in its strategic objectives and performance goals. The distinction between these two basic audiences has major implications for the Board with regard to setting strategic objectives and performance goals, establishing and refining broadcast strategies, and allocating resources in the most effective manner possible. A senior Voice of America official told us that the agency’s biggest challenge is analyzing its programming language by language and determining what matches the needs of the various audiences the Voice of America is trying to reach. The target audience can also change over time. For example, the Voice of America’s audience in Africa has typically been made up of an elite group of 40- to 50-year-old males in political or civil service leadership positions. Now, one official told us, the African language services need to attract more of a mass audience in order to reach future leaders. 
According to the Results Act, agency performance plans should describe the operational processes, skills, technology, and other resources an agency will need to achieve its performance goals. The plans should describe both the agency’s existing strategies and resources and any significant changes to them. We found that the Board’s fiscal year 2001 plan does not discuss such strategies or resource requirements for its ongoing initiatives. For example, the plan does not include a discussion of the Board’s Internet deployment plan. This is a concern, given the complex issues the Board faces as it attempts to integrate the Internet with the more traditional radio and television distribution efforts of five discrete broadcast entities in an era of rapid political and technological change and shifting consumer demands and preferences. The lack of a discussion of the role and significance of the International Broadcasting Bureau’s deployment of digital production technology for the Voice of America is another concern. This effort, known as the Digital Broadcasting Program, is overseen by the Board. The $57-million effort to upgrade the Voice of America’s operations from an analog mode to a digital one will allow, in certain cases, a single staff member to perform the work previously assigned to an announcer, a producer, and a sound technician. Radio Free Europe/Radio Liberty and Radio Free Asia have already implemented digital production systems, and Radio/TV Marti expects to have its digital project completed by December 2001. However, according to a senior International Broadcasting Bureau official, the Digital Broadcasting Program, which was initiated in 1995, was supposed to be finished within 3 to 4 years, predicated on the project’s receiving funding at the planned levels. Actual funding has been extended over a longer period of time, and a definitive end point for the project remains to be established. 
The Board’s performance plan does not highlight the importance of this project to the Voice of America’s effectiveness, the specific strategies being followed to ensure successful implementation, the impact budget shortfalls will have on its completion, and the projected cost savings (in terms of long-term staffing needs, for example) to be derived from full implementation of the project. The usefulness of annual program reviews of individual language services is hampered by (1) a lack of consistency in how program quality scores—a key component of the program review process—are developed across broadcast entities and (2) the lack of audience size and composition targets, which would help focus language service planning efforts. The International Broadcasting Bureau conducts program reviews for the Voice of America and Radio/TV Marti, while Radio Free Europe/Radio Liberty and Radio Free Asia conduct their own reviews. Program reviews evaluate a number of factors, including audience size, signal strength, affiliates management, and program content and presentation. The latter factor is referred to as “program quality.” Program reviews culminate in a written report containing recommendations for improving operations in one or more of these areas. Board officials acknowledge that there is variability in how program reviews are conducted across broadcast entities. Specifically, they noted that a consistent approach to evaluating program quality remains to be established. Program quality refers to content and presentation issues such as program balance and objectivity, program pacing, use of musical bridges between program segments, and the quality of the announcer’s voice. One key methodological difference is that some broadcast entities use external experts and in-country listening panels in assessing program quality, and others do not. 
For example, the International Broadcasting Bureau relies on internal personnel to develop program quality assessments. Voice of America language program directors generally noted that these assessments were not that rigorous and would benefit from input from outside experts, such as journalists and academic specialists. In contrast, Radio Free Europe/Radio Liberty does use external experts and in-country listening panels in its program quality review process. Board officials noted that, funding permitting, they eventually intend to move all program reviews toward a uniform process and methodology that incorporates the views of external experts and in-country listening panels in assessing program quality. Finally, we noted that program reviews center on discussions of program operations and a general desire to improve language service performance without the benefit of focusing on specific performance targets such as audience size and composition. Board officials noted that performance targets for individual language services could be established at the Results Act and annual language service review levels, and these targets could form the focal point for program reviews. Focused program reviews could, in turn, influence and modify the next iteration of performance targets established at the Results Act and annual language service review levels. The Board has taken actions to fulfill the mandates and expectations contained in the U.S. International Broadcasting Act of 1994. It has implemented the steps necessary to reduce Radio Free Europe/Radio Liberty’s budget to below the $75 million ceiling established by Congress. The Board established a language service review process that is designed to realign budget resources strategically on an annual basis. 
Finally, the Board has developed a strategic planning and performance management system that consists of Results Act planning, the annual language service review, and the program reviews of individual language services. This system is intended to help ensure that U.S. international broadcasting resources are used in the most effective manner possible. Despite the Board’s overall progress and its continuing efforts to further refine its strategic planning and performance management system, the broadcast entities could benefit from the closer integration of international broadcast missions and strategic objectives and from more clearly defined performance goals and indicators, as called for by the Results Act. The Board’s global audience goal, in particular, is less useful as a key indicator of broadcast effectiveness than summary data on the success of language services in achieving individual audience size and composition targets. Further, the performance plan lacks an implementation strategy and related resource requirements for the Board’s key initiatives. Addressing these strategic planning issues could help ensure that resources are managed more effectively and with more clearly defined results. The Board’s current plans for its next language service review do not include a plan to analyze the deployment of field news-gathering resources among the broadcast entities. Such an analysis could identify areas of unnecessary overlap and allow the Board to redirect resources to areas needing more news coverage. A lack of adequate news coverage ultimately diminishes the quality of U.S. broadcast efforts and potentially affects the size and nature of the listening audience, a key performance indicator. Finally, annual program reviews conducted for individual language services do not employ a consistent approach to assessing program quality and do not focus on specific audience size and composition targets. 
A standard review approach, which incorporates both outside experts and in-country listening panels, would increase the overall value of program quality assessments and allow meaningful comparisons among individual language services and among broadcast entities. Improved program quality measures would also benefit the annual language service review process and the Board’s Results Act planning, each of which incorporates program quality as a performance measure. Establishing specific audience targets for each language service would enable program review teams to develop action plans listing the specific steps and resources needed to achieve any audience share and composition goals established at the Results Act level. These action plans and related resource discussions could be incorporated in both Results Act planning and the annual language service review process, which is the Board’s primary vehicle for assessing the distribution of broadcasting resources. To strengthen the Board’s management oversight and provide greater assurance that international broadcasting funds are being effectively expended, we recommend that the Chairman of the Broadcasting Board of Governors (1) include in the Board’s performance plan a clearer indication of how its broadcast missions, strategic objectives, performance goals, and performance indicators relate to each other, and establish audience and other goals, as appropriate, at the individual language service level; (2) include implementation strategies and related resource requirements in the performance plan; (3) analyze overseas news-gathering networks across the broadcast entities to determine if resources could be more effectively deployed; and (4) institute a standardized approach to conducting program quality assessments and require that program reviews produce a detailed action plan that responds to specific audience size and composition targets established at the Results Act and annual language service review levels. 
The Broadcasting Board of Governors provided written comments on a draft of this report. The Board stated that the report is fair and accurate, and the Board concurred with our recommendations. The Board said that some actions currently underway will serve to partially implement the recommendations and that it will implement additional actions in the future. For example, the Board has launched a review of its existing performance plan that will include drawing clearer linkages between broadcast missions, strategic objectives, and performance goals. The Board also intends to establish audience and other goals, as appropriate, at the individual language service level. The Board agreed with our recommendation that it analyze its overseas news-gathering network next year. However, the Board said that an analysis of its overseas news-gathering resources would be more useful as a stand-alone analysis rather than as part of the annual language service review as we recommended. We recognize the need for such flexibility and modified our recommendation accordingly. The Board expressed concern that the information we provided on U.S. international broadcasting and the British Broadcasting Corporation was unfair and presented a misleading picture of two very different organizations (see app. IV). The Board noted that U.S. international broadcasting has been charged with a far more complex mission, which includes conveying the views of the U.S. government and functioning as a surrogate broadcaster in areas where gaining access to target audiences is difficult. The Board added that caution was needed when comparing total operating costs, listening audience size, the number of language services, and the implied cost per listener, due to the significant differences between the two organizations. To address the Board’s concerns, we modified the introduction to appendix IV. We also adjusted U.S. 
budget data to remove television production and transmission costs which are not included in the British Broadcasting Corporation budget figure. However, we believe that providing information on the world’s top two international broadcasters is useful and serves to illustrate both the similarities and differences in how these two organizations conduct their business. Further, discussions with U.S. broadcast staff and our review of internal documents indicate that the Board considers the British Broadcasting Corporation to be a key competitor and closely tracks its activities in selected broadcast markets around the world. The comments provided by the Board are reprinted in appendix VI. The Board also provided technical comments in attachment B, which we have incorporated in the report as appropriate. We are sending copies of this report to the Honorable Marc B. Nathanson, Chairman, Broadcasting Board of Governors; and to interested congressional committees. Copies will also be made available to others upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-4268. Other GAO contacts and staff acknowledgments are listed in appendix VII. The core mandate of U.S. international broadcasting is to reach audiences in countries where a fair and open press does not exist or has not been fully established. The Board’s primary basis for assessing the status of press freedom around the world is the annual survey of press freedom conducted by an organization called Freedom House, which is partly supported by U.S. grant funds. As shown in figure 4, Freedom House’s most recent survey shows that the most severely underserved audiences are concentrated in Africa, the Middle East, and Asia. U.S. international broadcasting operates within the context of a complex and evolving transmission environment. 
Each of the key broadcast methods the United States uses is described in the following section and in more detail in an August 1999 International Broadcasting Bureau study. Shortwave broadcasts – This transmission mode uses the reflective properties of the ionosphere to carry an analog radio signal to listeners typically up to 4,200 miles away, or even farther under some circumstances. In many situations the quality of shortwave transmissions can be comparable to that of AM/FM broadcasts. However, over long distances, where shortwave is so valuable, transmission quality can vary considerably. Despite its drawbacks, shortwave remains the primary transmission medium (and sometimes the only option) for international broadcasters seeking to reach target populations where press freedom is completely or largely restricted. One problem with shortwave broadcasts is that some countries, such as China, Vietnam, and Cuba, attempt to block U.S. broadcast signals. To counteract these jamming activities, international broadcasters use very powerful transmitters, operating from multiple locations, on multiple frequencies. This increases the costs of shortwave broadcasting relative to most other transmission mediums, but shortwave still remains an economical medium for reaching large areas. Shortwave broadcasting is currently carried on a network of 22 U.S.-owned and 16 leased transmission facilities. However, U.S.-owned transmitters in the Philippines and Thailand currently cannot be used for Radio Free Asia broadcasts because of host government prohibitions. The future of shortwave radio could be significantly affected by the development of digital shortwave, which offers several advantages over the current analog form of shortwave transmission. Digital shortwave is capable of producing AM-quality audio, which does not degrade over long distances. Digital shortwave receivers (which are not yet commercially available) can be programmed to lock on to a station name as opposed to a specific broadcasting frequency. 
This development could have major implications for countries such as China and Cuba, which actively jam current shortwave transmissions. Under a digital system, it may be possible to scramble frequencies to frustrate jammers while not affecting listeners, whose preset stations would be available at the touch of a button. However, the International Broadcasting Bureau noted that it is unclear whether these potential anti-jamming features will be available in mass-market products. Transnational AM (medium-wave) broadcasts – Transmitted from U.S.-owned or -leased facilities, AM broadcasts can reach target audiences up to 900 miles away, or even farther under some circumstances. One advantage of AM broadcasting is the enormous number of listeners with AM/FM receivers. As is the case with shortwave transmissions, one drawback of medium-wave transmissions is that they can be jammed by hostile governments. AM/FM Radio Affiliates – Radio affiliates are local AM/FM or television stations that rebroadcast U.S.-produced program content. Some affiliates are paid to carry this content, and others are not. FM signals provide the highest sound quality, but they are limited to a line-of-sight broadcast range, typically about 25 to 75 miles depending on the height of the transmitting antenna and other local conditions. The Board currently has more than 1,300 radio affiliates, with the largest concentration of affiliates in Central Europe, the former Soviet Union, and Latin America. For example, the Board has 516 radio affiliates in Latin America. In contrast, it has only 54 radio affiliates in Africa. Paid leases and licenses are another form of local rebroadcasting. A lease is an agreement with a local station or network for a specific allocation of airtime at a specific cost. The Board currently has 24 AM/FM leases worldwide. Licenses are granted by a national authority to broadcasters for the use of a dedicated AM or FM frequency to broadcast locally using their own equipment. 
However, in most cases, national regulations require that the license be issued in the name of a local entity. According to a 1999 International Broadcasting Bureau document on transmission strategies, the Voice of America has traditionally placed its emphasis on building its network of AM/FM affiliates, while other international broadcasters, such as Radio Free Europe/Radio Liberty, the British Broadcasting Corporation, and Radio France International, have invested substantially in local leases and licenses. Television via Local Affiliates – Television content is broadcast through local cable and land-based broadcast affiliates. According to Board officials, television has become the predominant media choice for viewers in several key areas, including Russia and China. The Board reports that it has almost 500 television cable/terrestrial affiliates concentrated in the former Soviet Union and Latin America. Television content for U.S. international broadcasting has traditionally been provided by the Worldnet Television and Film Service, which is the official television broadcast arm of the U.S. government. According to the Board, it has transferred the public diplomacy portion of Worldnet to the State Department under the Foreign Affairs Reform and Restructuring Act of 1998 (P.L. 105-277). The Board has submitted a reprogramming request to Congress to transfer Worldnet’s remaining resources (totaling $20.5 million in fiscal year 2000 funding) to Voice of America TV. Satellite Radio and Television – This medium relies on direct satellite transmission to relatively expensive analog or digital receivers or private satellite dishes. While not appropriate for reaching mass audiences, this option does offer the opportunity to reach “elite” listeners who are the key decisionmakers U.S. international broadcasters would like to reach in target countries. 
Internet Webcasting and E-mail Delivery -- The Internet offers the first truly interactive medium for delivering text, audio, and video streams to users’ personal computers. The use of e-mail also provides broadcasters with the ability to send text messages to subscriber lists with the contents of U.S. audio broadcasts. U.S. broadcast entities have also established a presence on the Internet, and the Voice of America, Radio Free Europe/Radio Liberty, and Radio Free Asia have initiated e-mail subscriber programs. Again, the Internet is currently not poised to deliver information to mass audiences around the globe; however, it represents another key delivery option for reaching elite listeners. While Internet webcasting is not susceptible to jamming, it is susceptible to blocking at entry portals by hostile governments. Table 4 provides a brief overview of the criteria and related processes used to support the Board’s language service review process. Audience listening rate is the key variable used to assess the impact a language service is having. However, the Board used additional impact criteria, such as program quality and transmission effectiveness, to help identify potential solutions to low audience listening rates. The British Broadcasting Corporation’s (BBC) World Service has adopted a model for international broadcasting that differs in several key respects from the approach U.S. broadcasters use. Three of the most significant differences between the Board and the BBC are mission, organizational structure, and future operations. The central mission of U.S. international broadcasting is geared toward reaching audiences that are underserved by available media voices. As a result, the United States does not broadcast to fully democratic nations such as Canada, the United Kingdom, or Germany. In contrast, the BBC’s mission is much broader and includes reaching listeners in markets around the world, including media-rich countries such as the United States.
The organization of U.S. international broadcasting has evolved along the lines of “official” and “surrogate” broadcast entities. This division has led to the creation of five separate broadcast entities with varying missions, budget resources, and operating styles. The BBC has only one World Service, which, according to BBC officials, varies broadcast content on a country-by-country basis in response to market research and audience demands. Finally, U.S. international broadcasting and certain component operations are either subject to sunset provisions or are required to phase out over a period of time. In contrast, the World Service is not subject to sunset. In the case of U.S. international broadcasting, an original sunset provision in the 1994 International Broadcasting Act generally required the Board to cease funding Radio Free Asia after September 30, 1998. The act was amended in 1999 to provide for explicit sunset of funding for Radio Free Asia after September 30, 2009. Congress has also specified conditions under which Radio Free Europe/Radio Liberty broadcasting should be phased out in a particular country. Radio/TV Marti is required to be terminated upon transmittal by the President to the appropriate congressional committees of a determination that a democratically elected government is in power in Cuba. Even the Voice of America’s goal to serve audiences deprived of full access to an open and free press suggests a diminishing role over time as the long-sought goal of global press freedom is eventually achieved. Information on U.S. international broadcasting and BBC World Service operations is provided in table 5. The table is designed to provide summary data on U.S. and BBC broadcast operations, and the table notes should be read carefully to understand the data on total budget costs, listening audience, and number of language services.
This numerical data is not sufficient to draw conclusions about the relative efficiency and effectiveness of the two organizations. Additional factors, such as the relative costs of reaching different target audiences, the different mixes of broadcast technology, and the nature of operating overheads, would need to be considered to arrive at valid conclusions. The Chairman of the House Committee on the Budget requested that we examine whether the U.S. Broadcasting Board of Governors (1) responded to the specific mandates regarding Radio Free Europe/Radio Liberty’s operations, (2) implemented an annual language service review process, and (3) instituted a strategic planning and performance management system. He also asked us to provide information on U.S. international broadcasting and British Broadcasting Corporation operations. To assess whether the Board has responded to the specific cost-cutting mandates and expectations established in the 1994 International Broadcasting Act, we examined the Board’s transmission consolidation efforts, the history of consolidation activities in connection with Radio Free Europe/Radio Liberty’s move from Munich to Prague, the Board’s efforts to privatize Radio Free Europe/Radio Liberty’s operations by fiscal year 1999, and the Board’s efforts to adopt digital production technology for each broadcast entity. We met with Board, International Broadcasting Bureau, Voice of America, Worldnet Television and Film Service, Radio Free Europe/Radio Liberty, and Radio Free Asia senior officials in Washington, D.C., to discuss these issues and review applicable documentation.
This documentation included the Board’s report on Congress’s earlier mandate to privatize Radio Free Europe/Radio Liberty’s operations and additional documentation on the Board’s transmission consolidation efforts, the relocation from Munich to Prague, and the Digital Broadcasting Program being implemented by the International Broadcasting Bureau on behalf of the Voice of America. We also met with Radio/TV Marti officials in Miami, Florida, and Radio Free Europe/Radio Liberty officials in Prague to review their respective streamlining and cost-cutting activities. To assess whether the Board implemented a language service review process, we met with International Broadcasting Bureau planning staff in Washington, D.C., to determine the process, evaluation criteria, and outcome of this year’s language service review. We reviewed the Board’s February 2000 reports on this process and the linkage between these documents and the Board’s reallocation decisions. To assess whether the Board has instituted a strategic planning and performance management system, we obtained and reviewed copies of all relevant Results Act planning documents, including the Board’s 5-year strategic plan dated December 1997; annual performance plans for fiscal years 1999, 2000, and 2001; and the Board’s March 2000 annual performance report. We compared the Board’s fiscal year 2001 performance plan against GAO’s guide for evaluating agency annual performance plans. We also met with Board staff to discuss the Board’s latest efforts to update its Results Act planning documents. In order to prepare a comparison of Board and BBC World Service operations, we interviewed BBC officials in London and collected and analyzed relevant documents, including World Service strategic plans, marketing and audience research information, and data relating to the BBC’s performance management system.
We conducted our review from December 1999 to August 2000 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Broadcasting Board of Governors’ letter dated September 13, 2000.

1. We agree that U.S. international broadcasters and the BBC World Service have different roles. However, the fact that U.S. international broadcasters have multiple and more complex missions does not obviate the value of examining the BBC’s operations relative to U.S. international broadcasting. The Board acknowledges the value of tracking and evaluating the activities of competitors by maintaining an on-line database to capture this information. The database includes country-by-country audience data that shows how U.S. international broadcasters are doing relative to other major international broadcasters, with a particular focus on the BBC. The Board’s database also summarizes this information into seven regional groups to help identify broader performance trends. For example, with regard to the 35 countries in Africa targeted by the Voice of America and the BBC, the Board’s database shows that the BBC has a higher audience share than the Voice of America in 25 countries, the Voice of America has a higher audience share in 8 countries, and the two organizations are tied for listeners in 2 countries.

2. The number of language services shown in table 5 in appendix IV is footnoted to indicate that 24 of the U.S. language services are duplicate language services run by the Voice of America and surrogate broadcasters. We revised the applicable table note to point out that many of the Board’s language services have been mandated by Congress.

3. We revised the table to show a total funding figure of $367 million for U.S. international broadcasting. This figure was calculated by deducting $53 million in television production and transmission costs from a total U.S. funding figure of $420 million for fiscal year 2000.
We made this change to reflect that the BBC funding figure does not include television costs.

4. We agree that simply dividing the number of total listeners by total broadcast costs does not provide meaningful comparative information in the absence of a more detailed understanding of why costs differ between the two organizations. Explanatory factors might include the relative costs of reaching different target audiences, different mixes of broadcast technology, and the relative efficiency and effectiveness of each organization. We revised the introduction to table 5 to emphasize that our table is designed to provide summary data on U.S. and BBC broadcast operations. We also incorporated the Board’s concern that readers should avoid making a cost-per-listener comparison between U.S. and BBC international broadcasting.

In addition to those named above, Michael ten Kate, Wyley Neal, Ernie Jackson, and Rona Mendelsohn made key contributions to this report.

The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013

Orders by visiting: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC

Orders by phone: (202) 512-6000; fax: (202) 512-6061; TDD (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Web site: http://www.gao.gov/fraudnet/fraudnet.htm; e-mail: [email protected]; 1-800-424-5454 (automated answering system)

Pursuant to a congressional request, GAO examined whether the Broadcasting Board of Governors (1) responded to the specific limitations and cost-cutting expectations regarding Radio Free Europe/Radio Liberty's operations; (2) implemented an annual language service review process; and (3) instituted a strategic planning and performance management system. GAO noted that: (1) the Board met its mandates under the 1994 U.S. International Broadcasting Act to reduce Radio Free Europe/Radio Liberty's annual budget by lowering its budget from $208 million in fiscal year (FY) 1994 to approximately $71 million in FY 1996; (2) it did this by taking several actions, including relocating its operation from Munich, Germany, to Prague, Czech Republic, and significantly reducing staff; (3) additional savings were made by: (a) eliminating several hundred hours of broadcast overlap; (b) eliminating and modifying a limited number of language services; (c) consolidating transmission operations under the International Broadcasting Bureau; and (d) deploying digital sound recording and editing technology, which has increased Radio Free Europe/Radio Liberty's staff efficiency and effectiveness; (4) the Board completed a comprehensive language service review in January 2000 that sought to systematically evaluate U.S.
international broadcast priorities and program impact; (5) the Board intends to use this information to strategically reallocate approximately $4.5 million in language service funds from emerging democracies in Central and Eastern Europe to several African countries and selected countries in other regions; (6) according to the Board, it intends to continue to use the annual language service review process to strategically analyze broadcast priorities, program funding, and resource allocations; (7) the Board has not yet established an effective strategic planning and performance management system that incorporates Government Performance and Results Act planning, the annual language service review process, and the program reviews of individual language services conducted by the International Broadcasting Bureau and the surrogate broadcasters; (8) the Board's FY 2001 performance plan is deficient because of missing or imprecise performance goals or indicators and a lack of key implementation strategies and related resource requirements that detail the key issues facing the Board; (9) the Board has not established a standard program review approach, which would help ensure that consistent and meaningful measures of program quality are developed across broadcast entities; and (10) it has also not incorporated specific audience size and composition targets into the program review process, which would help ensure that program reviews culminate in a written report that identifies the specific actions needed to achieve agreed-upon performance goals.
VETS administers national programs to (1) ensure that veterans receive priority in employment and training opportunities from the employment service; (2) assist veterans, reservists, and National Guard members in securing employment; and (3) protect veterans’ employment rights and benefits. VETS carries out its responsibilities through a nationwide network that includes representation in each of Labor’s 10 regions and staff in each state. The Office of the Assistant Secretary for VETS administers the agency’s activities through regional administrators and a VETS director in each state. The state VETS directors are the link between VETS and the states’ employment service system, which is overseen by Labor’s Employment and Training Administration (ETA) and to which the Disabled Veterans’ Outreach Program (DVOP) and Local Veterans’ Employment Representative (LVER) staff--as state employees--directly report. In fiscal year 2005, VETS requested $220.6 million for all its programs, including $162.4 million for the DVOP and LVER programs. States plan to use this funding to support more than 2,100 DVOP and LVER positions. In September 2001, we identified some key areas in which VETS could better serve its clients by providing more flexibility and accountability in its programs. With its passage in November 2002, the Jobs for Veterans Act (JVA) amended the legislation that governs the DVOP and LVER programs by addressing many of the concerns we raised in our prior work. For example, JVA clarified the roles of DVOP and LVER staff and gave states greater flexibility in determining how the staff are used. Under VETS guidance, the DVOP staff’s duties now focus on providing intensive services--with priority given to disabled veterans--including assessing the veterans’ special needs and skills, developing a plan of action, and coordinating any needed supportive services, such as training and job referrals. The DVOP staff also provide outreach activities to locate candidates who could benefit from intensive services, such as homeless veterans.
As stated in VETS guidance, the LVER staff’s duties now include developing regular contact with employers to promote employment and training for veterans, developing relationships with community leaders to further promote veterans’ employment, and promoting and monitoring the participation of veterans in federally funded programs. The JVA legislation required states to develop plans that include details of the specific duties required of the DVOP and LVER positions and the strategy for their integration into the one-stop system. The legislation also required the establishment of a comprehensive performance accountability system to measure performance of the DVOP and LVER staff, using performance measures consistent with those of the Workforce Investment Act (WIA). In addition, JVA established an incentive program to recognize eligible employees for excellence in providing veterans services and to encourage the improvement of services, with 1 percent of each state’s annual grant allocation to be designated for incentive funding. JVA also required VETS to establish a minimum standard for the rate at which veterans enter employment, a standard that all states are required to meet. The JVA legislation further required annual performance reviews of veterans’ services, which VETS uses to monitor the DVOP and LVER programs to ensure proper accountability. VETS has taken action to implement the changes to the DVOP and LVER programs. VETS has issued policy guidance and conducted training on the DVOP and LVER staff’s new roles and responsibilities. In addition, nearly half the states are taking advantage of JVA’s flexibility to employ part-time DVOP staff. Although VETS has issued guidance on the performance incentive program to recognize exemplary staff as required by JVA, states have implemented this program differently, and 11 states do not plan to implement the incentive program because, in some cases, giving awards to individuals conflicts with state policy.
In addition, integrating DVOP and LVER staff into one-stop centers continues to be challenging. Through its policy guidance letters, VETS has clarified the DVOP and LVER staff’s new functions, along with new staffing and reporting requirements, including the use of part-time positions for DVOPs. In addition, shortly after JVA was enacted, the National Veterans’ Training Institute (NVTI) held a series of implementation seminars covering DVOP and LVER staff’s new roles and responsibilities that were attended by representatives from all states. NVTI also conducts case management training aimed at DVOP staff. At the end of its first training year in October 2004 following passage of JVA, NVTI reported training 282 DVOPs and estimated that an additional 144 would be trained each year in the future. Similarly, NVTI conducts employer outreach training focused on LVERs. Because this class is new, NVTI estimates that it will train 264 LVERs by October 2005 and projects that an additional 240 LVERs would be trained each year. One of the key changes in the new law gives states the flexibility to establish part-time DVOP and LVER positions, though this was already permitted to some extent for LVERs. According to their fiscal year 2005 state plans, 23 states planned to use the new flexibility under JVA to employ both full- and part-time DVOPs, while 34 states planned to use the long-standing authority to employ both full- and part-time LVERs. As shown in table 1, part-time positions would comprise about 18 percent of the total DVOP staff and about 44 percent of the total LVER staff. Some states plan to use part-time DVOPs and LVERs extensively. For example, two states, Maine and Washington, planned to use part-time LVERs exclusively. In addition, South Dakota plans to have 87 percent of its DVOPs be part-time, and Vermont plans to have 91 percent of its LVERs be part-time. By contrast, in New Jersey, only 5 percent of DVOPs are to be part-time and, in Indiana, 6 percent of LVERs are to be part-time.
VETS has implemented JVA’s requirement to establish a performance incentive awards program by issuing policy guidance that lays out criteria and monetary as well as nonmonetary awards for states to consider in developing an awards program. According to fiscal year 2005 state plans, 11 states did not plan to use the incentive program due to reasons such as conflicts with state law or other policies if the awards are given to individuals. The remaining 40 states planned to implement the incentive program in various ways. For example, in one state, two DVOPs were awarded a one-time maximum award of $1,000. In another state, however, top-performing DVOP and LVER staff were given a one-time cash award for as little as $16. Regardless of their current approach to implementing incentives, some VETS officials said they would like to see award eligibility criteria expanded beyond individuals to include entire units. Labor officials acknowledge that integration of DVOP and LVER staff into the one-stop centers has been a persistent challenge. It will likely take years to determine the extent to which the changes implemented under JVA will help break down the barriers and entrenched cultures that have precluded integration in the one-stop centers. According to the DVOP and LVER staff we interviewed, integration still varied widely among local areas, depending on the level of support provided by the one-stop manager for the DVOP and LVER programs. For example, one DVOP staff member told us that the veterans program is highly integrated with the WIA program in her local one-stop, with both sharing case management responsibilities. In addition, she participates in regular meetings with the one-stop partners and attributed this cohesion to the commitment by her one-stop manager to work cooperatively with all the partners. In contrast, a DVOP from another state told us that he was assigned to tasks that prevented him from serving as many veterans as he would have liked.
In cases where there was poor integration, several reasons were cited by DVOP and LVER staff we interviewed from various states. One reason was that other one-stop staff were not educated or trained on serving veterans. An NVTI official told us that the institute has provided training to states that have requested it, but was concerned that the states that were struggling with providing veterans’ services were the very ones that did not request training. Other reasons included the perception among DVOP and LVER staff we interviewed that there is little coordination between VETS and ETA to ensure integration among all partner programs, adopt uniform definitions of eligible veterans, and consistently give veterans priority of service regardless of program. VETS has implemented some JVA changes to the accountability system related to the measures used for assessing DVOP and LVER performance, but it estimates that it will be at least 2007 before it can implement a minimum standard for veterans entering employment that all states will be expected to meet. Until the standard becomes available, VETS has used historically based outcomes in negotiating performance goals with states. In addition, Labor has established an entered-employment goal of 58 percent for veterans served through the DVOP and LVER programs. While VETS reported that the DVOP and LVER programs met Labor’s program year 2003 goals for some measures, concerns about data reliability remain, preventing an accurate assessment of how well DVOP and LVER staff are performing. The performance measurement system for the DVOP and LVER programs has been in transition over the last several years. Prior to JVA, the system emphasized process-oriented measures--measures that simply tracked the services provided to veterans rather than the employment outcomes veterans achieved.
In addition, states used different data sources to report employment-related outcomes, resulting in performance data that were not comparable across states. According to VETS officials, VETS adopted performance measures, beginning July 1, 2003, that are consistent with those of WIA, but has not yet specified when it will implement a system for weighting the measures to provide special consideration for such groups as disabled veterans, in accordance with JVA’s requirements. Another fundamental change was the use of Unemployment Insurance (UI) wage records to identify veterans who get jobs rather than the use of time-consuming follow-up procedures. The current performance standards for the DVOP and LVER programs apply to various veterans populations, including disabled veterans. Three measures are based on WIA: (1) veterans who entered employment; (2) retention in employment at 6 months; and (3) job seeker satisfaction. In addition, VETS tracks entered employment following receipt of staff-assisted services and entered employment following receipt of case management. VETS officials told us, however, that the measures will change again on July 1, 2005, when VETS will adopt the Office of Management and Budget’s new common measures. VETS will retain several existing measures that track employment following services provided by DVOP and LVER staff. While the new common measures afford some advantages over existing measures, the frequent shifts in focus have made it difficult to collect comparable data that can be used to establish a pattern of performance for the DVOP and LVER programs and compare outcomes across different time periods. As such, VETS anticipates that it will take at least until 2007 to collect the necessary trend data to establish the minimum standard for the entered-employment rate that all states will be expected to meet. In the interim, states are required to meet performance goals that they negotiate annually with VETS based on historic outcome levels.
For example, according to VETS, states’ program year 2004 negotiated goals for entered employment ranged from 46 percent to 67 percent for veterans, and from 41 percent to 65 percent for disabled veterans. Nationwide, VETS reported that the DVOP and LVER programs met Labor’s goal for the entered employment rate (58 percent) for all eligible veterans in program year 2003, while they fell short of their 60-percent target entered employment rate for disabled veterans (see table 2). Similarly, VETS reported that the programs exceeded goals for the rate at which veterans retained employment 6 months later. Even after the new measures are adopted, VETS officials remain concerned about the reliability of data used to assess performance. VETS officials attribute their concerns about service-related data reliability to DVOP and LVER staff not understanding the new definitions of the performance measures, lacking training on entering data into an automated system, inconsistent registration policies, or simply inputting erroneous data. In addition, VETS officials told us that some states have known data reliability issues with their management information systems. While Labor has established data validation procedures, the reliability of performance data is an issue that is not fully addressed by these procedures. For example, all states must certify that their data are correct using validation software that cross-checks the totals they report to VETS. However, validation does not extend to the case file level to ensure that DVOP and LVER staff accurately collect and report data at the point of service delivery. In comparing the reliability of data on services to those on employment outcomes, VETS officials believe that outcome data are more reliable because they are based on UI wage records.
However, as we have noted in past work, while UI wage records are reliable, they suffer from significant time lags, resulting in a wait of at least approximately 1½ years to obtain information on outcomes. In response to JVA’s requirement to monitor the DVOP and LVER programs, VETS has shifted greater responsibility for monitoring program performance to the state level, and VETS’ monitoring role continues to evolve from enforcer to partner in achieving state goals. VETS staff completed their first review of annual state self-assessments in program year 2004 and have completed their first round of site visits to a random sample of local offices. However, determining the extent to which this new approach to monitoring DVOP and LVER performance strengthens program accountability may require several years of state and VETS experience in collecting, reporting, and using information to improve services to veterans. Beginning in program year 2004, VETS began reviewing all the state plans for compliance with program requirements and, for any deficiencies noted during the review, required states to correct the relevant section of the plan. In addition, VETS requires states to submit annual self-assessments to identify best practices, ensure the approved state plan is being effectively implemented, determine the state’s progress toward meeting its performance goals, and identify areas for technical assistance and training. Besides conducting reviews of the state plans and self-assessments, VETS also conducts annual on-site monitoring reviews of 20 percent of local offices within each state, and all local offices must be visited at least once in 5 years. While we do not know how many offices have DVOP or LVER staff, there are an estimated 1,900 comprehensive one-stop centers and about 1,600 affiliate one-stop centers around the nation.
The on-site reviews include interviewing personnel who are involved in providing services to veterans, observing the flow of customers in the lobby, and reviewing local guidance and plans. Now that VETS has completed its first year under the new performance accountability system, it is unclear how it will use its monitoring results to improve DVOP and LVER program performance. At the national level, VETS has developed a system to track corrective actions needed in states’ plans, but has not yet developed a strategy to best meld performance information from its other monitoring efforts to improve program performance at the local, state, and regional levels. For example, VETS officials in two states we visited told us that they use the site visit results to identify local offices needing targeted technical assistance. However, one state VETS official told us that because local offices varied considerably in their performance, he was uncertain whether the 20-percent sample used for site visits would accurately capture areas most in need of technical assistance. While information on DVOP and LVER performance is also available through local office reporting, VETS officials have not provided a consistent methodology to incorporate and analyze relative performance among the local offices, states, and regional offices. VETS and ETA continue to work on issues related to sharing the results of monitoring efforts, coordinating corrective actions, and taking a joint approach to enforcement. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or other members of the subcommittee may have. Our remaining work will examine these and other issues in greater depth to meet our mandated reporting date at the end of the year. For further information regarding this testimony, please contact me at (202) 512-7215. Key contributors to this testimony were Lacinda Ayers, Jeremy Cox, Meeta Engle, Emily Pickrell, and Stanley Stenersen.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.

Veterans’ Employment and Training Service: Flexibility and Accountability Needed to Improve Service to Veterans. GAO-01-928. Washington, D.C.: September 12, 2001.

Veterans’ Employment and Training Service: Proposed Performance Measurement System Improved, But Further Changes Needed. GAO-01-580. Washington, D.C.: May 15, 2001.

Veterans’ Employment and Training Service: Strategic and Performance Plans Lack Vision and Clarity. GAO/T-HEHS-99-177. Washington, D.C.: July 29, 1999.

Veterans’ Employment and Training Service: Assessment of the Fiscal Year 1999 Performance Plan. GAO/HEHS-98-240R. Washington, D.C.: September 30, 1998.

Veterans’ Employment and Training: Services Provided by Labor Department Programs. GAO/HEHS-98-7. Washington, D.C.: October 17, 1997.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Labor's Veterans' Employment and Training Service (VETS) administers two programs designed to assist the roughly 700,000 veterans who are unemployed in any given month. These two programs, the Disabled Veterans' Outreach Program (DVOP) and the Local Veterans' Employment Representative (LVER) program, fund employment, training, and job placement services to veterans. In 2002, Congress passed the Jobs for Veterans Act (JVA), which redefined the roles of DVOP and LVER staff and required that VETS establish a new performance accountability system.
This testimony is based on GAO's ongoing work in this area and focuses on three aspects: (1) the separation of DVOP and LVER roles and responsibilities; (2) VETS' performance accountability system for DVOP and LVER staff; and (3) VETS' system for monitoring DVOP and LVER performance. VETS has established separate roles for DVOP and LVER staff and has provided policy guidance and training to states explaining these changes. Under JVA, states now determine how many DVOP and LVER staff to hire and where to place them within local workforce areas; 23 states are planning to use some part-time DVOP staff. There are indications that integrating DVOP and LVER staff into the local workforce offices remains challenging. While VETS has issued guidance on an incentive program to encourage improved performance, state implementation of the program has varied, and 11 states do not plan to participate. VETS has implemented employment measures for DVOP and LVER staff, but a minimum standard that all states must meet for veterans entering employment will not be available before 2007. VETS reported meeting Labor's goal of achieving a 58-percent employment rate for all veteran job seekers during program year 2003, but fell somewhat short of reaching a 60-percent employment goal for disabled veterans. Assessing how well DVOP and LVER programs are serving veterans may continue to be difficult due to ongoing concerns about data reliability. VETS implemented a monitoring system in program year 2004 that relies primarily on state self-assessments of performance in conjunction with on-site reviews. It is unclear, however, how VETS staff at the state, regional, and national levels will use this information consistently to guide or improve the DVOP and LVER programs. VETS is working with other Labor agencies to coordinate monitoring and enforcement efforts.
The Higher Education Act (HEA) specifies a formula, known as the federal need analysis methodology, that is used to determine students' eligibility for federal student aid. A variety of federal grants and loans are available to help students pay postsecondary expenses. While some federal aid is allocated based on a student's need for financial aid as determined by the formula, other federal aid is allocated regardless of need. Many states and institutions have their own student aid programs, providing students an additional source of aid to help pay for postsecondary expenses. The federal need analysis methodology determines a student's need for financial aid by comparing a student's and/or family's expected family contribution (EFC) to the student's cost of attendance (COA). The EFC is defined as the household financial resources that are considered available to help pay for the student's postsecondary education expenses; it is calculated by reducing the household financial resources reported by aid applicants on the Free Application for Federal Student Aid (FAFSA) by certain expenses and allowances, including a state and other tax allowance. A student is classified as either financially dependent on his or her parents or independent in the financial aid process. This classification is important because it affects the factors used to determine a student's EFC. For dependent students, the EFC is based on both the student's and parents' income and assets, as well as whether the family has other children enrolled in college. For independent students, the EFC is based on the student's and, if married, spouse's income and assets, whether the student has any dependents other than a spouse, and the number of family members enrolled in college. To capture and reflect changes in students' and families' state and other tax liabilities, Education is responsible for annually updating the state and other tax allowance tables established in the HEA.
Education determines the state and other tax allowance based on state and local tax information from federal tax returns filed with the Internal Revenue Service and compiled by its Statistics of Income (SOI) Division. For dependent students and independent students without children, the allowance is composed of state and local income taxes. For parents of dependent students and independent students with children, personal property taxes and real estate taxes are added to the allowance. The costs a student faces in attending a postsecondary institution include tuition, fees, books, and living expenses. The student may be able to receive financial aid to help cover the cost of attendance, depending on where the student wants to enroll as well as the student's and family's financial resources. If the COA is greater than the expected family contribution, the difference between the two represents the student's financial need. If the EFC is greater than the COA, the student is not eligible for federal need-based aid but may still qualify for aid that is not based on need. (See fig. 1.) Postsecondary institutions are responsible for determining individual students' eligibility for specific sources of financial aid and compiling these sources to meet each student's need—a process known as packaging. Part of this process involves deciding which types or sources of aid should be awarded first—for example, grants or loans, federal or nonfederal aid, need-based or non-need-based aid. In awarding aid, institutions typically first package any grants for which the student is eligible and then offer loans. Another factor considered in packaging aid is whether to reduce aid from any source in a student's package to offset an aid award from another source.
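The need computation just described can be sketched in a few lines. This is a simplified illustration with hypothetical dollar figures; the actual EFC calculation applies detailed tables and allowances not modeled here.

```python
# Minimal sketch of federally defined financial need: COA minus EFC,
# floored at zero. All figures below are hypothetical.

def financial_need(coa: float, efc: float) -> float:
    """If the EFC meets or exceeds the COA, the student has no
    federally defined need (though non-need-based aid may remain)."""
    return max(coa - efc, 0.0)

# Hypothetical student: an $18,000 COA and a $5,000 EFC leave $13,000 of need.
print(financial_need(18_000, 5_000))  # 13000.0
```

A student whose EFC exceeds the COA simply has zero need under this rule, which mirrors the report's point that such a student may still qualify for non-need-based aid.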
Title IV of the HEA, as amended, authorizes the following federal aid programs: Pell Grants—Pell Grants are grants to low- and middle-income undergraduate students who have federally defined financial need and who are enrolled in a degree or certificate program. In general, a student's Pell Grant award is determined by subtracting the student and family's EFC from either the maximum allowable Pell Grant award, currently $4,050, or the COA, whichever is less. Stafford and PLUS Loans—These loans may be made by private lenders and guaranteed by the federal government (guaranteed loans) or made directly by the federal government through a student's school (direct loans). Subsidized Stafford Loans—Subsidized loans are made to students who have federally defined financial need and are enrolled at least half-time in an eligible program of study. The federal government pays the interest costs on the loan while the student is in school. Subsidized loans are subject to certain maximum loan limits and are awarded based on a student's COA less the EFC and other student aid awards, including Pell Grants and state or institutional grants. A change in a student's EFC may—or may not—affect the amount of a subsidized loan award depending on its effect on other components of the student's financial aid package. Unsubsidized Stafford Loans—Unsubsidized Stafford loans are non-need-based loans made to students enrolled at least half-time in an eligible program of study. Although the terms and conditions of the loan (e.g., interest rates) are the same as those for subsidized loans, students are responsible for paying all interest costs on the loan. While unsubsidized Stafford loans are not need-based aid, a change in a student's or family's EFC may nonetheless affect the amount a student may borrow.
Unsubsidized loans are awarded based on a student's COA less other student aid awards—including Pell Grants, state and institutional grants, and subsidized loans. These loans are subject to the combined maximum loan limits for subsidized and unsubsidized loan awards. A change in a student's EFC that affects Pell Grant, subsidized loan, or state or institutional grant awards may therefore affect the amount of an unsubsidized loan award. PLUS Loans—PLUS loans are non-need-based loans made to creditworthy parents of dependent undergraduate students enrolled at least half-time in an eligible program of study. Borrowers are responsible for paying all interest on the loan. Like unsubsidized loans, PLUS loans are generally awarded based on a student's COA less other student aid awards, including unsubsidized loan awards. As is the case with unsubsidized loans, a change in a student's or family's EFC can affect the amount of a PLUS loan that a parent may borrow. Dependent students may borrow combined subsidized and unsubsidized Stafford loans of up to $2,625 in their first year of college, $3,500 in their second year, and $5,500 in their third year and beyond. Independent students and dependent students without access to PLUS loans can borrow combined subsidized and unsubsidized Stafford loans of up to $6,625 in their first year, $7,500 in their second year, and $10,500 in their third year and beyond. There are aggregate limits for an entire undergraduate education of $23,000 for dependent students and $46,000 for independent students and dependent students without access to PLUS loans. Campus-Based Aid—Participating institutions receive separate allocations for three programs from Education. Funds are distributed to institutions in part on the basis of the institution's previous allocation levels and in part on the basis of the aggregate financial need of eligible students in attendance.
The institutions then award the following aid to students: Supplemental Educational Opportunity Grants (SEOG)—SEOGs are grants for undergraduate students with federally defined financial need. Priority for this aid is given to Pell Grant recipients. In general, an annual SEOG award may not be less than $100 and may not exceed $4,000. Perkins Loans—Perkins loans are low-interest (5 percent) loans to undergraduate and graduate students. Interest does not accrue while the students are enrolled at least half-time in an eligible program. Priority is given to students who have exceptional federally defined financial need. Students can borrow up to $4,000 for any year of undergraduate education, with an aggregate limit of $20,000. Work-Study—Work-Study is employment in on- or off-campus jobs for which students who have federally defined need earn at least the current federal minimum wage. The institution or off-campus employer pays a portion of their wages, while the federal government pays the remainder. Work-Study is awarded based on a student's need less other aid awarded. Students received an estimated $98 billion in financial aid in award year 2003–2004 from the Title IV federal aid programs as well as state and institutional grants, of which the federal government provided more than two-thirds. Federal assistance is composed of both loans and grants, and most federal grant aid is need-based. States distributed about $6 billion in student aid. Institutions provided about $23 billion in the form of need-based grants and merit-based aid. (See fig. 2.) The current state and other tax allowance is based on 1988 tax data, due in part to Education's limited efforts in updating the allowance. While Education has been required to revise the allowance tables annually since 1993, prior to 2004 it had attempted to update the allowance only twice—once in 1993 and once in 2003—and the latter update was suspended.
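The award formulas in the Title IV program descriptions above can be sketched numerically. This is a simplified illustration, not the statutory formula: the inputs are hypothetical, minimum-award and rounding rules are omitted, and the combined limit uses the first-year dependent-student figure of $2,625 cited above.

```python
# Simplified sketch of the Pell Grant and Stafford loan arithmetic
# described above. Hypothetical inputs; many real rules are omitted.

MAX_PELL = 4_050        # maximum allowable Pell Grant cited in the report
COMBINED_LIMIT = 2_625  # first-year dependent combined Stafford limit

def pell_award(coa: int, efc: int) -> int:
    """Pell: EFC subtracted from the lesser of the maximum award or COA."""
    return max(min(MAX_PELL, coa) - efc, 0)

def stafford_package(coa: int, efc: int, other_aid: int) -> tuple[int, int]:
    """Subsidized loan covers remaining need, capped by the combined
    limit; unsubsidized fills the gap between COA and all other aid
    within whatever combined borrowing room remains."""
    subsidized = max(min(coa - efc - other_aid, COMBINED_LIMIT), 0)
    room = COMBINED_LIMIT - subsidized
    unsubsidized = max(min(coa - other_aid - subsidized, room), 0)
    return subsidized, unsubsidized

# Hypothetical first-year dependent student: COA $12,000, EFC $3,000.
pell = pell_award(12_000, 3_000)                 # 1050
sub, unsub = stafford_package(12_000, 3_000, pell)
print(pell, sub, unsub)  # 1050 2625 0
```

The sketch also shows the report's point about interdependence: raising the EFC lowers the Pell award, which changes the "other aid" input to the loan computation, so a single EFC change can ripple through the whole package.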
As a result, the 1988 tax data used for the 1993 update are still in effect. The lack of updates is primarily because Education did not annually seek the data needed to update the allowance and did not establish effective internal control to guide the updating process. In addition, Education did not consider alternatives when data were not readily available. While Education has published the allowance tables used to award Title IV aid in the Federal Register annually since 1993, prior to 2003 these tables had been based on 1988 tax return information compiled by SOI. Congress incorporated the state and other tax allowance into the HEA in 1986 on the basis of 1983 SOI data but did not establish a mechanism to update the basis of the allowance until 1992. Amending the Higher Education Act in that year, Congress directed Education to "publish in the Federal Register…revised table of State and other tax allowances" annually, and to "develop such revised table after review of the Department of the Treasury's Statistics of Income file and determination of the percentage of income that each State's taxes represent" for those residents. Education published the first updated tables of the allowance in 1993 after reviewing SOI's 1988 tax data, the most recent data available at that time. Tables 1 and 2 present these revised tax allowances, by dependency status and state. Although Education has published allowance tables annually since 1993, the published allowances continued to be based on 1988 SOI data until 2003, when new tables based on 2000 SOI tax data were published. Education intended to use the new tables to award student aid in 2004–2005 but was prohibited from doing so by legislation. As a result, the state and other tax allowance used to award financial aid continued to be based on 1988 tax data.
Prior to 2003, Education's efforts to update the allowance were limited: It neither annually sought data to update the allowance nor pursued alternatives when the SOI data it had used previously were not readily available. According to Education's records, the department sought data to update the tax allowance in only 6 of the 11 years since it was first directed to update the allowance annually. SOI records also document that Education did not routinely request data. Even when Education did request data, it is difficult to determine exactly what data were requested because such requests were not made in writing. Rather, Education's documentation consists of informal file notes of telephone contacts with SOI officials that are minimal and do not describe the substance of what was discussed. Furthermore, as of the end of our audit work, Education could not provide us with written procedures guiding staff on the routine steps necessary to update the tax allowance or to identify what data would be needed to update the allowance. After Education published the 1993 update to the allowance on the basis of 1988 SOI tax data, it sporadically sought data from SOI to develop subsequent updates. According to both Education and SOI officials, however, SOI would not have provided these data on a cost-free basis. According to SOI officials, the 1988 tax data were produced to illustrate the type of information SOI could develop that clients, such as states, might find useful and be willing to purchase in the future. SOI never intended to produce the data as a regular series, and the fact that they were useful for Education's purposes was coincidental. Education's records do not indicate what actions the agency undertook when it first learned that SOI would not provide data cost-free, including the extent to which it considered paying for such data. Education officials told us, however, that they never sought a cost estimate from SOI because they did not wish to pay for the data.
Moreover, Education officials told us that they did not consider using data other than SOI data because they believed Education did not have the discretion to do so under the law. Beginning in 2000, about 1 year after Education last contacted SOI, SOI began to annually publish on its Web site data that Education could have used to update the allowance. However, Education was unaware of these data because it did not contact SOI again until 2003 for the purpose of making its proposed update that year. As we have pointed out in numerous reports, weak internal control can be a contributing factor to, or cause of, insufficient execution of agency responsibilities. Collectively, internal controls are an integral component of an organization’s management intended to provide reasonable assurance that, among other things, operations are effective and efficient. Education’s failure to fully document its attempts to update the allowance over the past several years and its lack of written procedures to guide staff efforts to ensure that they take the steps necessary to update the allowance, such as a checklist, are indicative of an ineffective system of internal control. Our Standards for Internal Control in the Federal Government provides guidance to agencies to help them assess, evaluate, and implement effective internal controls that can be helpful in improving their operational processes. Under the proposed update, the state and other tax allowance would have decreased for most states; the change in the allowance, in turn, would have increased the amount that families are expected to contribute by about $500 on average for a majority of student aid applicants. Of those aid applicants with an increase in their EFC, some would have received lower Pell Grant awards or would have become ineligible for Pell Grants. 
Increases in EFCs would not only have affected Pell Grants but possibly other forms of aid, and these effects in turn would have affected Stafford and PLUS loan awards. The extent to which the proposed update would have affected federal Campus-Based, state, and institutional aid would likely have varied according to factors such as aid awarding policies and changes in a state’s allowance. Education’s tax allowance update would generally have increased the dollar amount that families would be expected to contribute to a student’s education, but the percentage of student aid applicants affected would have varied by state and household income, and the size of the increase would have varied by state. EFCs would increase by about $500 on average for those with an increase (from an average of about $9,620 to about $10,115), but aid applicants from states with larger decreases in their tax allowance rates would have had a larger increase in their EFCs. Table 3 shows the proposed changes to the allowance and the estimated EFC impacts, by state. For example, Delaware would have had a 4 percentage point decrease in its tax allowance for families earning $15,000 or more—from 7 percent to 3 percent—and a 2 percentage point decrease for individuals, resulting in an EFC increase of $834 on average, among applicants in Delaware with an increase. In contrast, Nevada would have had a 1 percentage point decrease in the allowance for families (and a 1 percentage point increase for individuals), and its residents would have had an expected contribution increase of $186 on average. Similarly, the percentage of applicants affected would have varied from state to state. In Wisconsin, for example, the percentage of student aid applicants affected would have been slightly over 80 percent, in contrast to Connecticut, where just under 1 percent of applicants would have been affected. 
With regard to household income, over 90 percent of families earning more than $25,000 would have been expected to contribute more under the update, while only about 20 percent of families earning $25,000 or less would have been expected to contribute more. Across all states, we estimate that the update would have affected more than 60 percent of aid applicants and would have resulted in a collective EFC increase of $3.5 billion in award year 2004–2005. Had Education's proposed update been adopted, thus raising the expected family contribution for aid applicants, 38 percent of Pell Grant recipients would either have seen a decrease in their award or have become ineligible for the grant altogether; the average reduction among those with a decrease would have been $144. In particular, 36 percent of recipients would have seen a decrease of $133 on average in their Pell Grant award but would have remained eligible for the awards in award year 2004–2005. Another 92,000 recipients, or 2 percent of those receiving Pell Grants, would no longer have been eligible and typically would have lost the minimum Pell Grant award of $400. As a result, the proposed update would have decreased overall federal Pell Grant expenditures by $290 million. Students residing in states with larger decreases in their allowances would have faced larger decreases in Pell Grant amounts and would have been more likely to become ineligible for them. Table 4 shows the average decrease in Pell Grant awards for those who would have seen a decrease in their Pell Grant award or who would have become ineligible for them, by state. Students with relatively higher household incomes would have been more likely to face a decrease and would have faced substantially greater decreases in their Pell Grant awards than those with lower household incomes.
On the other hand, the impact of the proposed update would not have varied much by whether students are financially independent of their families. Figures 3 and 4 show the proportion of those facing a decrease in Pell awards and the median amount of such decreases, by income group. Stafford and PLUS loans could have been affected by EFC changes as well. Applicants for whom a change in EFC would have resulted in a change in other aid received—including Pell, state, and institutional grants—would likely have seen a change in their federal loans. This is because federal loan amounts depend, in part, on the amount of other aid received. However, even if a change in EFC would not have changed other aid received, some students may still have seen a change in their subsidized Stafford loan amount. Among those currently receiving federal loans, we estimate that over 20 percent of subsidized and unsubsidized Stafford loan holders and about 85 percent of PLUS loan holders could have seen a change in their loan amount due to their EFC increase. (See app. I for an explanation of our estimation methodology.) Figure 5 shows the proportion of undergraduate Stafford loan recipients who could have had a change in their subsidized loan amounts, by income and dependency status. Our case studies of students at selected schools for whom a change in EFC would have resulted in a change in their federal loans show that as the EFC increased, subsidized loans would have decreased, and unsubsidized loans would have increased in response to the decreases in subsidized loans and other forms of financial aid. In addition, PLUS loans could have made up for these decreases as well. While changes in the EFC could have affected Campus-Based awards, institutions have some discretion in allocating federal Campus-Based aid; as a result, the effects of the proposed update would likely have varied across institutions.
The effect of the update on state and institutional need-based aid would also have varied based on differences in state and institutional aid awarding policies and on how much the tax allowance would have changed for each state. Our case studies of students at the four selected schools show that even though the EFC for the majority of students would have changed, Campus-Based awards would tend not to have been affected by the proposed update. However, some students would have been affected, and the effect would have varied across schools, due in part to differences in award policies. Specifically, had the proposed allowance been implemented for 2004–2005, we estimate that less than 15 percent on average of case study students receiving Supplemental Educational Opportunity Grants and Work-Study aid would have faced a lower award. The effects would have varied significantly by school because of differences in schools' eligibility criteria for Campus-Based awards and in the maximum awards provided. For example, eligibility for SEOG at one case study school is capped at an EFC of $3,850, whereas at another it is capped at an EFC of $2,800, so a student whose EFC increased from $2,700 to $2,900 would have become ineligible for an SEOG at the second school but not the first. As another example, one school offers up to $3,000 in Work-Study, while another limits the amount to $1,000. With regard to Perkins loans, we estimate that about 20 percent of students at the case study schools on average would have seen a decrease in their loan amount due to the proposed update. For students with decreases in their Perkins loans, the decrease would have been about $1,200 on average but would have varied significantly by school due to differences in eligibility criteria. Table 5 shows the case study results of changes in these three school-administered federal programs.
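The eligibility-cap effect in the SEOG example above can be sketched as follows. The two caps are the case-study figures cited; treating the cap boundary as inclusive is an assumption for illustration, as the report does not state how boundary cases are handled.

```python
# Sketch of the SEOG eligibility caps from the case study above: the
# same $200 EFC increase flips eligibility at the school with the
# $2,800 cap but not at the school with the $3,850 cap.

def seog_eligible(efc: int, cap: int) -> bool:
    """Assumes a student stays eligible while the EFC is at or below
    the school's cap (inclusive boundary is an assumption)."""
    return efc <= cap

for cap in (3_850, 2_800):
    before = seog_eligible(2_700, cap)
    after = seog_eligible(2_900, cap)
    print(f"cap ${cap}: eligible before={before}, after={after}")
```

Run as written, only the $2,800-cap school changes from eligible to ineligible, matching the report's example.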
The majority of states use the federal need analysis methodology to allocate state need-based aid; as a result, the proposed update could have affected the amount of and the extent to which students receive state grants. The effect would have varied by state due to, among other factors, differences in changes to the tax allowance by state and differences in state award policies. In Wisconsin, for example, we estimate that over 50 percent of state aid award recipients in our case study would have seen a decrease in their state award. In contrast, in Tennessee, just over 10 percent of recipients in our case study would have seen a decrease in their state award. The average reduction for Wisconsin students would have been smaller than that for Tennessee students because Wisconsin's formula reduces aid by less for each dollar of EFC increase than Tennessee's does. (See table 6.) As with state aid, the effect of the proposed update on the need-based aid provided by schools themselves would have varied significantly across schools due to, among other factors, differences in institutional award policies and changes in the EFC of students attending the institutions. However, the impact would be limited to schools that use the federal methodology to award aid. Since institutional aid may change as a result of both changes in the EFC and changes in other aid awarded, the effect of the increased EFC on institutional aid cannot be easily determined. For example, a school that bases its award solely on EFC might decrease its award as a result of an EFC increase, while a school that bases institutional aid on other aid awarded might increase the institutional award for some students.
At the two private nonprofit schools included in our case studies, our results show that while more than 20 percent of students at each school would have faced a decrease in institutional need-based aid under the proposed allowance, more than 10 percent of students at these same schools would have received more institutional aid. Overall, case study students attending one private nonprofit school would have seen a decrease in institutional aid of almost $800 on average, whereas students at the other school would have seen a decrease of over $425 on average. As a result of certain limitations of the SOI dataset for the purpose of calculating the allowance and problems with how Education uses this dataset, the current state and other tax allowance may not reflect the amount of taxes paid by students and families. The dataset is limited for this purpose because the taxpayers included in it are generally not representative of financial aid applicants, the tax data it provides do not include all state and other taxes paid by students and families, and the tax data are several years older than the income information reported by students and families on the FAFSA. In addition to the limitations of the SOI dataset, Education does not make full use of the dataset to account for the varying tax rates paid by taxpayers in different income groups. The tax allowance calculated by Education may not reflect the taxes paid by most financial aid applicants because it is drawn only from those who itemize deductions on federal income tax returns—filers who may be taxed at a different rate than those who do not itemize. Because many FAFSA applicants have lower income—and taxpayers in lower income groups tend not to itemize—many applicants may not itemize. Specifically, we estimate that about 63 percent of FAFSA applicants do not itemize. Further, itemizers and nonitemizers within the same gross income group may have different state and other tax rates.
For example, itemizers may be more likely to own a home than nonitemizers and thus would have a higher state and local tax liability due to real estate taxes. Conversely, those who itemize on their federal tax return may be more likely to itemize on their state return—and therefore have larger deductions, a lower state taxable income, and thus a lower state income tax than those who do not itemize on their federal return. Although sales taxes were included in SOI data when Congress formally provided for a tax allowance in the 1986 HEA amendments, tax reform legislation subsequently disallowed the deduction of state and local sales taxes, effectively eliminating them from this dataset. Therefore, the data collected by SOI for tax year 1987 and beyond have not reflected all state and other taxes. Excluding sales taxes may cause the allowance to be lower than it otherwise would be, especially for students and families who reside in states where sales taxes compose a significant portion of state and local revenue. In October 2004, Congress passed and the President signed the American Jobs Creation Act of 2004, which provides taxpayers who itemize deductions the choice of claiming a state and local tax deduction for either sales or income taxes, but only for tax years 2004 and 2005. As a result, the data collected by SOI for tax years 2004 and 2005 will likely include a mix of sales and income tax deductions reflecting the choices made by tax filers. Were these data used to update the allowance, the deductibility of sales taxes could increase the allowance for students and families, especially for those who reside in states where sales taxes compose a significant portion of state and local revenue. Regardless, the SOI data will not reflect both state and local sales and income taxes paid by individual taxpayers, as was the case prior to tax year 1987.
SOI data available for any given award year are several years older than the income information reported by aid applicants on the FAFSA. For example, in its proposed update for award year 2004–2005, published in May 2003, Education used 2000 SOI data, the most recent available at the time of its data request. Because applicants would report 2003 income information for award year 2004–2005, had the allowance been implemented, there would have been a mismatch of 3 years between the tax data and the income data. Table 7 shows when SOI publishes the state and local tax data after the end of a tax year. Some time lag between the end of a tax year and when SOI publishes data for that year is expected because returns are collected after the end of the tax year and because of the time needed for processing those returns. This time lag can be extended when there are unexpected difficulties in processing the returns. For example, 2002 tax data were published after about 2 years, while 2000 and 2001 data were published in about 15 months. Further, SOI officials reported that the agency may be unable to publish the 2003 tax tables because it has been experiencing technical problems in processing returns from that year. Education's method of calculating the state and other tax allowance does not accurately capture the amount of taxes paid by students and families. While Education calculates an allowance for each of the two income categories established by Congress—those earning less than $15,000 and those earning $15,000 or more—its methodology does not take into account the varying level of taxes paid by these two groups. To determine the allowance for families with income less than $15,000, Education uses the total of state and local taxes paid by all tax itemizers regardless of income, despite the fact that the SOI data provide separate information for 12 different income groups.
Education's methodology likely overestimates the taxes paid by the lower-income group for two reasons. First, higher-income individuals generally face higher tax rates than lower-income individuals. Our analysis of 2001 SOI tax data shows that those with an income below $20,000 have a state and other tax liability of about 3 percent on average, while those with an income of $20,000 or more have an average 5 percent tax liability. (See app. III.) Second, higher-income individuals are also more highly represented in the SOI data than lower-income individuals. For example, our analysis of the 2001 income distribution of financial aid applicants and of itemizers shows that about 35 percent of aid applicants have an income of less than $20,000, while less than 10 percent of itemizers have incomes in that range. (See table 8.)

To calculate the allowance for the higher-income group, Education deducts a percentage point from the rate it calculates for the lower-income group, a process that fails to account for the fact that higher-income individuals face higher tax rates than lower-income individuals. Since the estimate for the lower-income group reflects more of the taxes paid by those with higher income, this methodology likely underestimates the taxes paid by this higher-income group.

We have identified four strategies for addressing the limitations of the tax allowance that range from modest to more substantial changes to the process: (1) continue to use SOI data but with a revised method for calculating the allowance, (2) substitute SOI data with one of several alternative data sources, (3) use the same allowance for all aid applicants without regard to state of residence, or (4) collect information directly from the aid applicants themselves. Except for the first option, use of these strategies would require legislative changes. Also, these four strategies differ in their impacts on federal costs and on aid applicants.
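The effect of computing one tax rate from aggregate itemizer totals rather than from separate income bands can be illustrated with a small sketch. All dollar amounts and both income bands below are invented for illustration (SOI actually publishes 12 income groups); only the pattern matters: the larger high-income band dominates the aggregate rate, overstating the rate for the lower-income group.

```python
# Hypothetical SOI-style totals by income band (not actual SOI figures):
# (band label, total income, total state and other taxes paid by itemizers)
soi_bands = [
    ("under $20,000",     2_000_000,    60_000),   # ~3% effective rate
    ("$20,000 and over", 50_000_000, 2_500_000),   # ~5% effective rate
]

# Aggregate approach (simplified): one rate from overall itemizer totals,
# which the far larger high-income band dominates.
total_income = sum(income for _, income, _ in soi_bands)
total_taxes = sum(taxes for _, _, taxes in soi_bands)
aggregate_rate = total_taxes / total_income

# Band-based approach: a separate rate for each income band.
band_rates = {band: taxes / income for band, income, taxes in soi_bands}

print(f"aggregate rate: {aggregate_rate:.1%}")  # lands close to the 5% band
print(f"low-income band rate: {band_rates['under $20,000']:.1%}")
```

In the sketch, the aggregate rate sits near 5 percent even though the lower band's own rate is 3 percent, mirroring the overestimation described above.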
The first strategy would be to make better use of SOI data to calculate the tax allowance, such as by modifying how the allowance is calculated and coordinating with SOI to ensure that the most recently available data are used. Education could use the SOI data on separate income bands to calculate the allowance for families rather than using the aggregate totals that SOI publishes. This would ensure that tax rates for different income bands are based on information more representative of those groups. With regard to coordinating with SOI, Education obtained SOI data for tax year 2000 for its update in 2003 about 3 months before SOI published data for tax year 2001. Thus, when Education published its proposed update in 2003, it was not based on the most recently available data. Coordinating with SOI could reduce the mismatch between the year of the income data collected from applicants and the tax data collected from SOI from 3 to 2 years. Education officials acknowledged that in the future, they may have the flexibility to wait for more recently available SOI data and still meet their schedule for publishing notice of a proposed update to the state and other tax allowance. Appendix IV shows what the tax allowance would be under this strategy for each state.

The second strategy would be to discontinue use of SOI tax data and to replace it with publicly available data, such as the following:

Bureau of Economic Analysis Personal Income and U.S. Census Bureau

Description—The Bureau of Economic Analysis (BEA) annually publishes "Personal Income" tables, which cover aggregate household income, by state and are based on data from federal and state government programs, such as state unemployment insurance programs. The U.S.
Census Bureau annually publishes "State Government Tax Collections" tables, which include overall state figures for individual income taxes, real estate taxes levied by states but not local governments, property taxes, and sales taxes for both individuals and businesses. This information is gathered by the U.S. Census Bureau through a mail canvass of appropriate state government offices that are directly involved with state-administered taxes; locally collected and retained tax amounts are not included in the survey. Both sets of tables and related documentation are available via the Internet.

Use—Education could calculate the allowance by combining the information from both sets of tables. This approach has three potential advantages over using SOI data: (1) the BEA data include income from the entire population, including both filers and nonfilers, and the census data cover all tax filers instead of only itemizers, whereas SOI data include only itemizers; (2) sales taxes are included in the tax collections tables, although they include taxes paid by businesses; and (3) information is available 4 months after the end of a year, which allows income data reported by aid applicants and tax information corresponding to the prior year to be used to develop the allowance. A disadvantage of census data as compared with SOI data is that property tax information is more limited. Like the SOI data, the tables published by the BEA and the U.S. Census Bureau reflect aggregate measures of taxes and income and would not necessarily reflect the experiences of the typical family. Also, because the BEA and the U.S. Census Bureau report information in the aggregate—whereas SOI data are separated into different income bands—Education would need to make adjustments to differentiate tax rates by income.

U.S.
Census Bureau—Current Population Survey (CPS)—Annual Social and Economic Supplement (ASEC)

Description—The Census Bureau annually publishes the Annual Social and Economic Supplement to the Current Population Survey, which includes income, estimated state income taxes, and estimated real estate taxes. The CPS household income information is gathered through a survey of 100,000 households. State income tax information is estimated by the U.S. Census Bureau based on reported income and filing status information and review of state income tax regulations. Real estate tax information is generated in a similar manner: household characteristics are matched to the Census Bureau's American Housing Survey to provide simulated real estate and property taxes. The CPS dataset and related documentation are available via the Internet.

Use—Education could use CPS household-level data to generate tax allowances by income. An advantage of using the CPS is that it allows Education to estimate the taxes paid by the typical family rather than the taxes paid in aggregate, and CPS data also reflect the entire population—itemizers, nonitemizers, and nonfilers. Two disadvantages are that (1) the CPS tax information is not based on actual taxes paid but rather on U.S. Census Bureau tax models and is therefore subject to error (although we have assessed the information collected by the U.S. Census Bureau in generating the CPS to be reliable), and (2) the CPS does not include sales taxes. In addition, CPS data are available only somewhat sooner than SOI data, and because of the size of its sample, a 3-year average must be taken to generate reliable state-level information.

Institute on Taxation and Economic Policy (ITEP)—Who Pays? A Distributional Analysis of the Tax Systems in All 50 States

Description—ITEP is a nonprofit research and education organization that has published two reports on state taxes, one in 1996 and one in 2003, both entitled Who Pays?
A Distributional Analysis of the Tax Systems in All 50 States. According to an ITEP official, ITEP plans to publish future updates every 3 years. These reports present estimated state information on income, real estate, property, and sales tax rates. The ITEP state tax tables are based on the 1988 public-use SOI sample of 365,000 federal tax returns, stratified so that they are representative at the state level and aged to reflect the most recent statistics on general population and tax filer characteristics published by the IRS and the U.S. Census Bureau. These returns and state tax regulations are analyzed to estimate state and local income, real estate, property, and sales taxes paid based on household characteristics. Adjustments are made to reflect potential nonfilers as well. These reports are available via the Internet.

Use—Were Education to determine that ITEP data are reliable, it could use the ITEP tax figures to generate tax allowances by income band. Two advantages of ITEP data are that they include sales taxes and that an adjustment is made to estimate what nonfilers pay in sales taxes, whereas SOI data do not reflect sales taxes and do not account for nonfilers. A disadvantage is that ITEP's income bands are not consistent across states and do not match those established by Congress. While these publications are publicly available, Education could also contract with any of these organizations to customize a dataset for the purpose of developing the tax allowance.

The third strategy would be to apply the same allowance to all aid applicants, regardless of their state of residence. This would involve creating a standard allowance based on the CPS that reflects the median taxes paid by all households. This strategy would have the advantage of simplifying the need analysis methodology, but a disadvantage is that it would not account for the variation in taxes paid across states or income bands.
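The trade-off inherent in a single standard allowance can be made concrete with a short sketch. The state labels, tax rates, flat rate, and income below are invented for illustration; the point is only that a flat rate understates taxes in high-tax states and overstates them in low-tax states.

```python
# Hypothetical effective state-and-other tax rates (illustrative only).
state_rates = {"high-tax state": 0.07, "low-tax state": 0.02}

STANDARD_RATE = 0.04  # one flat allowance rate applied to every applicant
income = 30_000

# Positive gap: the allowance overstates taxes actually paid;
# negative gap: it understates them.
gaps = {state: income * STANDARD_RATE - income * rate
        for state, rate in state_rates.items()}

for state, gap in gaps.items():
    direction = "overstates" if gap > 0 else "understates"
    print(f"{state}: standard allowance {direction} taxes paid by ${abs(gap):,.0f}")
```

For the invented rates, the flat allowance falls $900 short of taxes actually paid in the high-tax state and exceeds them by $600 in the low-tax state.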
For example, using a standard allowance may on average underestimate the taxes paid by those from high-tax states but may overestimate the taxes paid by those from low-tax states.

The fourth strategy would be to collect tax information directly from aid applicants by adding questions to the financial aid application form. Under this strategy, applicants would report state and other taxes along with their federal taxes paid, information that could be used to reduce available household financial resources directly, making an allowance unnecessary. While documentation would likely be available for aid applicants to use in reporting their state income and property taxes, documentation concerning sales taxes may not be as readily available. Independent of this report, the Advisory Committee on Student Financial Assistance is currently assessing this strategy in the context of simplifying the financial aid application process and is expected to release its report in early 2005. One of the options considered by the Advisory Committee on Student Financial Assistance is to have the FAFSA questions tailored to the applicant, where applicants from different states (and with different financial circumstances) would answer different questions, and questions not relevant to an applicant would not be asked. Education officials expressed concern with this strategy because it might add to the administrative burden of students, schools, and Education. For example, Education's current guidance directs applicants to specific line items from their federal tax returns for their federal taxes paid, and it would be difficult to do the same with state taxes, given the variations among state tax forms.
Because institutions are required, on a limited basis, to verify information reported by students and families on the FAFSA, Education officials noted that having students and families report additional information on the FAFSA could increase the burden on institutions of verifying such information.

These various strategies would have differed in their impacts on federal expenditures and on financial aid applicants had they been applied to the 2004–2005 award year. First, each strategy would have changed federal expenditures for grant and loan programs, for acquiring data, and for other administrative activities. For example, we estimate that Pell Grant program expenditures could have increased by as much as $400 million or decreased by as much as $200 million had the different options been adopted and used to allocate aid for 2004–2005. Second, each strategy would have affected the amount of federal, state, and institutional aid that financial aid applicants receive and the number of applicants receiving such aid. For Pell Grants, using a standard allowance of 4 percent would have caused about 83,000 recipients to become ineligible for the program, but the other options would have affected fewer recipients. Table 9 shows the potential merits of each option in terms of federal expenditures for the Pell Grant program and the impact on expected family contribution, and table 10 shows the extent to which the tax allowances calculated under each strategy would accurately reflect state and local taxes paid by students and families.

Millions of students rely on federal, state, and institutional aid every year to help pay for their postsecondary education. These awards are distributed to students and their families based in part on estimates of what they can afford to pay out of their own pockets. Yet if these estimates are substantially inaccurate, the awards may not be distributed as equitably as they could be.
Because state and local tax rates may have changed over the past decade, Education has updated the allowance only once, and Education uses SOI data in a limited way, it is likely that the federal government has been making an allowance for more taxes than were actually paid in some cases and undercompensating for taxes that were paid in others. Although Education has taken some recent steps to update the allowance, these efforts have not been successful. An inaccurate allowance could yield adverse effects for the federal government and for students and their families. On the one hand, students and families could erroneously gain eligibility, which would cause federal funds to be misdirected. On the other, students and families could inappropriately lose eligibility for aid. Because state and institutional aid programs also make use of the federal need analysis methodology, such losses may be compounded for students and families.

To ensure that relevant tax data from the Statistics of Income are requested systematically and that the most recent data are obtained, we recommend, in the short run, that the Secretary of Education develop formalized updating procedures and document such procedures in writing. Such procedures could include (1) making annual written requests to the Internal Revenue Service for state and local tax information and documenting those requests and (2) coordinating with the IRS to make sure Education knows when SOI data will be publicly released and to ensure that the most currently available data are used.

To better capture the amount of taxes paid by students and families, we also recommend, in the short run, that Education revise its methodology for calculating the state and other tax allowance. Revisions could include using tax figures reflective of the different income groups to calculate the allowance rather than figures based on all itemized tax returns.
To determine whether alternative methodologies and data would better enable Education to annually update the allowance, we recommend, in the longer run, that Education assess the cost and reliability of available data, including the alternative data sources identified in this report. If Education determines that statutory changes are needed to implement more effective alternatives, it should seek such changes from Congress.

In written comments on our draft report, Education generally agreed with our reported findings and recommendations. In its letter, Education offered a number of suggestions and observations. Education requested that we refer to the state tax rates as "'proposed state tax rates under the HEA' in the final report rather than using the label 'proposed state tax rates of the Department,'" because it believes it does not have the authority to "ignore the clear statutory requirement to perform the update." Because we explain in our report that the Congress incorporated the state and other tax allowance in the HEA and required Education to annually revise the allowance, we do not believe our characterization of the state tax rates leads to any confusion. Accordingly, we did not change how we refer to the state tax rates.

Education also commented on the strategies we identified that address some of the limitations associated with the tax allowance and noted that it would review each of the alternative data sources discussed in our report that could be used to substitute for the SOI file data, as we recommended. Education noted that it believed all four strategies we identified in our report would likely require congressional action. We agree that those strategies that involve using alternative data sources to substitute for the SOI file data would require legislative changes, as we noted in our report. We also agree with Education's comment that using income bands other than those specified by Congress would likely require legislative change.
We disagree that congressional action is required for Education to continue to use SOI data but with a revised method for calculating the allowance—one of the strategies identified in our report. While the HEA directs Education to use the SOI file to revise the allowance, and establishes the income categories for which the allowance should be calculated for parents of dependent students and independent students with dependents other than a spouse (which we define as “families” in this report), the HEA does not specify a particular method to calculate the allowance. Therefore, we believe that Education could revise its methodology, as we recommended, without congressional action. Education also echoed some of the disadvantages we discussed in our report associated with applying the same allowance to all applicants and collecting tax information directly from aid applicants by adding questions to the application form. Education also stated that it generally agreed with our assessment of the impact of the revised allowance tables on Pell Grant recipients but noted that we could have provided additional information concerning applicants who would no longer have been eligible for Pell Grants. As we stated in the report, these applicants typically would no longer have received the minimum Pell Grant award, reflecting that such applicants typically have higher incomes than those who would have continued to receive Pell Grants. Additionally, we show that Pell Grant recipients with household income over $25,000 would have been significantly more likely to have either received less in Pell Grants or become ineligible for them. 
Education also suggested in its letter that it would be helpful to clarify that a change in EFC would not necessarily cause an identical change to a student's award amount with respect to federal student loans, Campus-Based aid, and state and institutional financial aid programs: "in other words," the department noted, "include a brief explanation of potential interactive effects." As noted in our report, our case studies of students at selected schools showed that as the EFC would have increased, subsidized loans would have decreased, and unsubsidized loans would have increased in response to the decreases in other forms of financial aid. In response to Education's comment, however, we added information concerning how EFC changes would have affected need-based aid overall with respect to our case study schools. Education also noted that it understood why we chose to analyze the effects of the proposed 2003 update had it been implemented in 2004–2005. (Soon after we had submitted our draft report to Education for comment, the department published, on December 23, 2004, an updated allowance for the 2005–2006 award year.) In its letter, Education includes the results from its preliminary analysis of the effects of the 2004 update for 2005–2006. Education's results are generally consistent with the results from our analysis. We did not, however, verify the accuracy of Education's estimates. Education also expressed concern that we misinterpreted the department's intentions with respect to updating the allowance for the 2005–2006 award year. While we understood the department's intentions, we made technical clarifications to the report to address Education's concern.
With respect to our recommendation that the department establish formal procedures to ensure that it annually requests and obtains the most current tax data from the IRS, Education stated that it had such procedures in place as "evidenced by the update published in the spring of 2003." However, as noted in our report, Education could not provide us with written procedures guiding staff on the routine steps necessary to update the tax allowance or to identify what data would be needed to update the allowance. In response to the department's comment, we clarified that our recommendation included documenting formalized procedures in writing. Lastly, Education provided technical comments, which we incorporated as appropriate. Education's written comments appear in appendix V.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from its date. At that time we will send copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512–8403 or Jeff Appel, Assistant Director, at (202) 512–9915. You may also reach us by e-mail at [email protected] or [email protected]. Other contacts and staff acknowledgments are listed in appendix VI.
The objectives of this study were to determine (1) what tax data form the basis of the current tax allowance and what factors have affected regular updates, (2) the effect the Department of Education’s (Education) proposed update would have had in award year 2004–2005 on financial assistance for students and families, (3) the extent to which current methods for determining the allowance accurately measure how much students and families have paid in state and other taxes, and (4) the strategies available to address any problems in deriving the allowance. To carry out the objectives, we analyzed Education’s 2002–2003 aid applicant sample file and Education’s Cost Estimation and Analysis Division’s Statistical Abstract (CEAD STAB), the most current versions available at the time of our review. We worked closely with financial aid officials from two states—Tennessee and Wisconsin—and four colleges— one public and one private nonprofit school in each of the two states. We interviewed officials from the U.S. Department of Education, Advisory Committee on Student Financial Assistance (ACSFA), and U.S. Department of the Treasury’s Internal Revenue Service–Statistics of Income (SOI) Division—as well as officials from the states and schools we contacted. We also interviewed officials from associations representing institutions, including the American Association of State Colleges and Universities (AASCU), National Association of Independent Colleges and Universities (NAICU) and the College Board, as well as other experts. In addition, we reviewed and analyzed the statutory requirements and legislative history of the state and other tax allowance. Furthermore, we reviewed and analyzed state and other tax data from SOI, Bureau of the Census, Bureau of Economic Analysis, and the Institute on Taxation and Economic Policy. We performed our work in accordance with generally accepted government auditing standards between October 2003 and November 2004. 
In estimating how Education’s proposed update would affect students’ and their families’ eligibility for financial assistance, we analyzed two datasets. We used Education’s aid applicant sample file from the 2002–2003 award year to estimate changes in (a) expected family contribution and Pell awards nationally, (b) state need-based aid for Wisconsin and Tennessee, (c) Supplemental Educational Opportunity Grants (SEOG), Perkins loans and Work-Study, and (d) institutional need-based aid for the four institutions. This dataset is a randomly drawn, nationally representative sample of over 450,000 aid applicants. To estimate the percentage of Stafford subsidized and unsubsidized and Parent Loans for Undergraduate Students (PLUS) recipients that are likely to have their loan award changed, we used Education’s CEAD STAB. CEAD STAB is a randomly drawn, representative sample of 1.8 million borrowers (about 7 million loans) from the National Student Loan Data System (NSLDS), which is a comprehensive national database of Title IV loan and grant recipients. Our analysis of the CEAD STAB focused on Stafford subsidized and unsubsidized and PLUS borrowers who originated loans from July 2002 to June 2004. We assessed the reliability of both datasets by conducting electronic testing of key variables for obvious problems in accuracy and completeness, interviewing appropriate Education officials, and reviewing related documentation. Based on these tests and reviews, we determined that the samples were sufficiently reliable for our purposes. To estimate changes in expected family contribution (EFC) and Pell Grant awards nationally, our analysis followed Education’s approach to estimating EFC and Pell awards in the 2004–2005 award year. 
To do this, the 2002–2003 aid applicant sample file was converted to better reflect aid applicants in the 2004–2005 award year by adjusting all income and asset amounts for inflation and changing the weights assigned to each sample applicant so that the sample takes into account projected changes in the number and type of applicants. We reviewed Education's approach to converting the sample file to the 2004–2005 award year and calculating EFC and Pell awards for accuracy and interviewed Education officials about the approach's reliability. We determined that Education's approach was sufficiently reliable for our purposes. EFC and Pell awards in the 2004–2005 award year were estimated for each sample aid applicant using both the current 2004–2005 state and other tax allowance, which is based on 1988 SOI data, and the proposed 2004–2005 state and other tax allowance, which is based on 2000 SOI data. To assess the impact of the update on EFC and Pell Grant awards, these amounts were compared. We also examined how these impacts vary by family income, dependency status, and state of residence. We designated student state of residence as the state of residence of the parent(s) when they differed.

Our methodology for obtaining national-level estimates on the percentage of loan recipients who could have had their loan award affected involved the steps listed below.

1. Using the aid applicant sample file, we estimated the percentage of applicants in award year 2004–2005 who would have had their EFC changed because of the proposed update for each combination of dependency status, state of residence, and specified income group.

2. We used the resulting percentages to estimate the likelihood that each CEAD-STAB sample borrower's EFC would have been changed due to the update.

3.
We estimated the likelihood that each individual borrower would have had his or her Stafford subsidized and unsubsidized loan award affected as equal to this percentage if the recipient borrowed less than the maximum allowed by law. We estimated as zero the likelihood that each individual borrower would have had his or her Stafford subsidized and unsubsidized loan award affected if the recipient borrowed the maximum allowed by law.

4. For PLUS recipients, the likelihood that each recipient would have had his or her aid award affected was estimated as equal to the percentage who would have had their EFC changed because of the proposed update that we estimated from our analysis of the aid applicant sample file.

Estimating whether students would have seen a change in their loan amounts because of the proposed update is complex largely because these loan amounts depend on the extent to which all other financial aid awards—including Campus-Based, state, and institutional aid—would have been affected, and no complete information is available on the specific awarding policies of all states and institutions for these types of aid. To compensate for this lack of information, we made several assumptions regarding how Stafford and PLUS loans would have been affected, which may either over- or underestimate what the actual changes would have been. However, these assumptions may somewhat offset each other, and we believe our estimates are informative of the percentage of borrowers whose loan awards could have been affected.

Stafford and PLUS loan estimates may be biased upward for the following reasons. Stafford loan estimates may be biased upward because we assumed that all borrowers who currently receive less than the maximum allowed and whose EFC would have changed under the proposed update would have had their loan award amount changed as well, yet this is not always the case.
For example, because the subsidized loan award equals the cost of attendance less EFC and other financial aid awards, subject to the loan limits, subsidized Stafford loan amounts would not have been affected if the decrease in other financial aid awards exactly offset the EFC increase resulting from the proposed update. Because the unsubsidized Stafford loan award equals the cost of attendance less other financial aid awards (including subsidized loan awards), subject to the loan limits, unsubsidized loan amounts would not have been affected if other aid awards did not change because of the update. Furthermore, we assumed that all PLUS borrowers whose EFC would have changed because of the update would have had their loan award affected, but, similar to unsubsidized Stafford loans, this would not have occurred had other aid awards not changed because of the update. It is difficult to know the size of this upward bias because the dataset does not include information on the extent to which other financial aid awards would have been affected.

Our estimates for Stafford loans may be biased downward because we assumed that all borrowers who receive the maximum allowed would not have had their loan award affected, which also is not always the case. For unsubsidized Stafford loans, this bias appears to be very small because unsubsidized loans would decrease only if students had a cumulative net increase in their other financial aid awards, which case studies and expert interviews show to be unlikely. For subsidized Stafford loans, this bias may be larger, yet we believe that it is still relatively small. Subsidized Stafford loan awards that are currently at the maximum would decrease only when the EFC plus other aid increases enough to cause the student to lose eligibility for the maximum loan amount, and analysis of the National Postsecondary Student Aid Survey (NPSAS) shows that most students are not likely to face this circumstance.
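The award relationships just described, subsidized Stafford equal to cost of attendance less EFC and other aid, unsubsidized equal to cost of attendance less other aid including the subsidized loan, each capped at its loan limit, can be written out as a simplified sketch. The cost, EFC, aid, and loan-limit figures below are invented placeholders, and actual packaging rules are more involved.

```python
def stafford_awards(cost_of_attendance, efc, other_aid, sub_limit, unsub_limit):
    """Simplified packaging: the subsidized Stafford loan covers remaining
    need (cost less EFC less other aid); the unsubsidized loan covers
    remaining cost (cost less other aid, including the subsidized loan).
    Each is capped at its loan limit and cannot be negative."""
    subsidized = max(0, min(sub_limit, cost_of_attendance - efc - other_aid))
    unsubsidized = max(0, min(unsub_limit,
                              cost_of_attendance - other_aid - subsidized))
    return subsidized, unsubsidized

# If a $500 EFC increase is exactly offset by a $500 drop in other aid,
# the subsidized amount is unchanged (the offsetting case noted above).
before = stafford_awards(15_000, efc=2_000, other_aid=8_000,
                         sub_limit=3_500, unsub_limit=4_000)
after = stafford_awards(15_000, efc=2_500, other_aid=7_500,
                        sub_limit=3_500, unsub_limit=4_000)
print(before)
print(after)
```

In the sketch, the subsidized amount stays the same in both cases while the unsubsidized loan rises to backfill the reduced other aid, mirroring the pattern the case studies observed.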
We also assumed that the borrowers in award year 2004–2005, the year of the proposed update, are from the same states, have the same incomes, and have the same costs of attendance as the most recent CEAD-STAB borrowers; to the extent that they differ, our estimates would be less accurate. For example, if we underestimated the number of students in likely high-impact states, our estimates would likely understate the overall proportion of students who could face a change in their loan award.

To complement this national-level loan analysis, we determined the percentage of students at our case study schools who would have experienced a change in their subsidized and unsubsidized loans, along with the size and direction of the changes. We could not determine this information for PLUS loans for our case study schools since the schools did not package PLUS loans for the purpose of estimating potential impacts of the proposed update.

To provide illustrative examples of how the proposed update can affect state and institutional need-based aid and Campus-Based aid, we worked closely with two states—Wisconsin and Tennessee—and four colleges, including one public and one private nonprofit institution in each of the two states. We chose these states because they use the Higher Education Act (HEA) federal methodology to disburse state aid, are geographically dispersed, and represent high- and low-impact states, based on an index we calculated for the following components: (1) the average EFC change resulting from Education's update, (2) the percentage of full-time undergraduates receiving grant aid, and (3) the average state need-based grant per undergraduate. We averaged the three components to generate an index of the overall average impact.
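The impact index just described can be sketched as follows. The state names and component values are invented; the actual components and index values appear in appendix II. The report does not spell out any normalization of the components, so the sketch simply averages the raw values.

```python
# Per state: (average EFC change from the update, percentage of full-time
# undergraduates receiving grant aid, average state need-based grant per
# undergraduate). All values below are invented for illustration.
components = {
    "State A": (120, 55, 900),
    "State B": ( 40, 30, 250),
    "State C": ( 80, 45, 600),
}

# Average the three components to form the overall impact index.
index = {state: sum(vals) / len(vals) for state, vals in components.items()}

# Rank states by index, descending; the top third would be designated
# "high impact," the middle third "medium," and the rest "low."
ranked = sorted(index, key=index.get, reverse=True)
print(ranked)
```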
The 44 states and the District of Columbia that use the federal methodology were then sorted in descending index order and separated into three groups of 15, with the highest index group being designated “high-impact states,” the next group being designated “medium-impact states,” and the last group being designated “low-impact states.” For the components and index value we estimated for each state, see appendix II. The institutions we chose include the University of Wisconsin at Madison, Wisconsin’s Marian College, Middle Tennessee State University, and Tennessee’s Carson- Newman College. We chose schools within the two selected states based on the following criteria: (1) use of the HEA federal methodology to disburse aid, (2) participation in the federal Campus-Based aid program, (3) provision of institutional need-based aid, (4) number of students with household income between $25,000 and $75,000, and (5) willingness and capacity to calculate estimated impacts on need-based aid. For each of these two states and four institutions, we generated a subsample from Education’s aid applicant sample file that reflects the students who attended school in these two states and the students who attended school at the four institutions in the 2002–2003 award year. For each student, the dataset contained information on the student’s estimated EFC under the current and proposed allowance and the student’s dependency status. Using these subsamples, we calculated examples of the effect of the proposed update on state, institutional, and Campus-Based aid. While the aid applicant sample file is a nationally representative sample, it may not be representative of the aid applicants who attend school in these specific states or at these specific institutions.
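The averaging, ranking, and grouping procedure can be sketched as follows. The function and sample values are hypothetical, and the sketch assumes the three components have already been normalized to a comparable scale before averaging, since the report does not specify how the averaging handles their differing units.

```python
# Hypothetical sketch of the impact index: average three (assumed
# pre-normalized) components per state, rank in descending order, and
# split the ranked list into three equal groups (15 each for the 44
# states plus the District of Columbia).

def classify_states(components):
    """components maps state -> (efc_change, pct_grant_aid, avg_grant),
    each assumed already normalized to a comparable 0-1 scale."""
    index = {s: sum(vals) / len(vals) for s, vals in components.items()}
    ranked = sorted(index, key=index.get, reverse=True)
    n = len(ranked) // 3
    return {
        "high": ranked[:n],          # high-impact states
        "medium": ranked[n:2 * n],   # medium-impact states
        "low": ranked[2 * n:],       # low-impact states
    }

# Toy example with six states instead of 45:
demo = {
    "A": (0.9, 0.9, 0.9), "B": (0.8, 0.8, 0.8), "C": (0.6, 0.6, 0.6),
    "D": (0.4, 0.4, 0.4), "E": (0.2, 0.2, 0.2), "F": (0.1, 0.1, 0.1),
}
groups = classify_states(demo)  # A, B high; C, D medium; E, F low
```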
The subsamples had the following number of observations: Wisconsin: 7,469, Tennessee: 8,385, University of Wisconsin at Madison: 580, Wisconsin’s Marian College: 54, Middle Tennessee State University: 533, and Tennessee’s Carson-Newman College: 62. We estimated the impacts on Wisconsin’s state need-based aid using the Wisconsin subsample, along with the state aid award methodologies provided to us by the Wisconsin Higher Educational Aids Board. The state need-based aid programs that we analyzed were the Wisconsin Higher Education Grant and the Wisconsin Tuition Grant programs. The Tennessee Student Assistance Corporation, the agency responsible for determining state aid in Tennessee, performed its own analysis of the impact on state need-based aid using the Tennessee subsample and then shared the results with us. The aid program analyzed was the Tennessee Student Assistance Award. To assess the validity of the Tennessee aid determination, we verified that the aid awarded fell within the maximum award limit and that award amounts were consistent across EFC ranges; that is, that those with a lower EFC received a higher award amount. To estimate the impacts on institutional need-based and Campus-Based aid, financial aid directors or financial aid specialists at each of the four selected schools calculated the impacts using their relevant subsample. While the focus was on need-based institutional aid and Campus-Based aid, three institutions also calculated the effect on Stafford loans.
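The validation of the Tennessee aid determination amounts to two mechanical checks on (EFC, award) pairs. The sketch below is hypothetical; it treats "consistent across EFC ranges" as the condition that awards never increase as EFC rises, which is one plausible reading of the check described above.

```python
# Hypothetical validation of need-based awards: (1) no award exceeds the
# program maximum, and (2) awards are non-increasing as EFC rises, so a
# lower EFC never yields a smaller award than a higher EFC.

def validate_awards(records, max_award):
    """records is a list of (efc, award) pairs."""
    ordered = sorted(records)  # ascending by EFC
    within_limit = all(award <= max_award for _, award in ordered)
    monotone = all(a1 >= a2
                   for (_, a1), (_, a2) in zip(ordered, ordered[1:]))
    return within_limit and monotone

# Valid: awards shrink as EFC grows and stay under the cap.
ok = validate_awards([(0, 1500), (2000, 1000), (4000, 500)],
                     max_award=1500)  # True
```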
To check the validity of these simulations, we examined (1) the order in which different forms of aid were awarded to see if they were consistent with common aid packaging protocols, (2) the formulas used to calculate remaining need at each stage of the award packaging process to make sure they were accurate, (3) the range of aid levels awarded to make sure they fell within bounds defined by regulation, (4) the total aid awarded to make sure it did not exceed financial need, and (5) the relationship between aid awarded and EFC levels to make sure that those with lower EFCs were provided more aid than those with higher EFCs. For the purposes of this report, we define “families” to include parents of dependent children (students) and independent students with dependents other than a spouse, and we define “individuals” to include dependent students and independent students without dependents other than a spouse. The specific methodologies used for each source are described in the footnotes to table 9. In assessing the reliability of state personal income estimates from the U.S. Bureau of Economic Analysis (BEA), we reviewed information available online from the BEA Web site on its data quality assurance processes and interviewed relevant officials. On the basis of the results of our document review and discussions with relevant officials, we concluded that the BEA data we used were reliable for our purposes for this analysis. In assessing the reliability of state government tax collections estimates from the U.S. Census Bureau, we interviewed relevant officials, who stated that there was no published data reliability documentation. Thus, we were unable to determine if the Census data we used were reliable for our purposes for this analysis. In assessing the reliability of data from the U.S. Census Bureau’s Annual Social and Economic Supplement, we reviewed information available online from the U.S.
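The remaining-need logic behind checks (2) and (4) can be illustrated with a short sketch. The packaging order and source amounts below are hypothetical; the point is only that each aid source is awarded up to remaining need, so the packaged total can never exceed financial need.

```python
# Hypothetical sequential aid-packaging sketch: award each source, in
# packaging order, up to the remaining need; by construction the total
# awarded never exceeds financial need (check 4 above).

def package_aid(need, sources):
    """sources is an ordered list of (name, max_amount) pairs in the
    institution's packaging order."""
    awards, remaining = {}, need
    for name, max_amount in sources:
        awards[name] = min(max_amount, remaining)  # never over-award
        remaining -= awards[name]                  # check 2: remaining need
    return awards

# Need of $10,000 packaged against grant, work-study, then loan:
pkg = package_aid(10000, [("grant", 5000),
                          ("work_study", 3000),
                          ("loan", 4000)])
# The loan is cut to $2,000 so that total aid equals, not exceeds, need.
```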
Census Bureau Web site on its data quality assurance processes and interviewed relevant officials. On the basis of the results of our document review and discussions with relevant officials, we have determined that the information collected by the U.S. Census Bureau in generating the Current Population Survey (CPS) is reliable, but we were unable to determine whether the CPS tax data we used, which are not collected directly but rather generated from U.S. Census Bureau tax models, were reliable for our purposes for this analysis. In assessing the reliability of data from the Institute on Taxation and Economic Policy’s Who Pays publication, we interviewed a relevant official and reviewed available documentation. However, we were unable to reach a determination as to the reliability of the data, primarily because of a lack of sufficient documentation. Because our analysis relied on samples of aid applicants and borrowers, our estimates are subject to sampling errors. Sampling errors are often represented as a 95 percent confidence interval: an interval that 95 times out of 100 will contain the true population value. For the percentages and numbers presented in this report on the EFC, Pell award, Stafford loan, and PLUS loan impacts, we are 95 percent confident that the results we would have obtained had the entire population been studied are within plus or minus 5 percent of the results, unless otherwise noted. The results for state and institutional need-based aid and Campus-Based aid are not necessarily based on representative samples and therefore should be considered as case study findings, or illustrative examples. Thus, we did not calculate sampling errors for these three categories of aid.
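The 95 percent confidence interval described above can be computed, for a simple random sample, with the standard normal approximation. This sketch is illustrative only; it is not the survey-specific variance estimation the report would actually use for its complex sample design, and the inputs are made-up numbers.

```python
import math

# Normal-approximation 95% confidence interval for a sample proportion:
# p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n), with z = 1.96 for 95%.
# A simplified stand-in for design-based survey variance estimation.

def proportion_ci(successes, n, z=1.96):
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical example: 38 of 100 sampled applicants affected.
low, high = proportion_ci(38, 100)  # roughly (0.285, 0.475)
```

Note how the margin shrinks as n grows, which is why the report can state a plus-or-minus bound for its large applicant samples but not for the small case-study subsamples.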
Appendix III: Average Tax Rates on Adjusted Gross Income, by State and Income Level

[Table of average tax rates by state and income level omitted.]

Appendix IV: Simulation of Tax Allowance Percentages under Various Options, by State—Families with Adjusted Gross Income of $15,000 or More

[Table of simulated tax allowance percentages by state omitted.]

In addition to those named above, the following people made significant contributions to this report: Jeff Weinstein, Cynthia Decker, Bob Parker, Sue Bernstein, Amy Buck, James Wozny, and Melba Edwards.

In 2003, the Department of Education (Education) proposed an update to the state and other tax allowance, a part of the federal need analysis for student financial aid. Most federal aid as well as some state and institutional aid is awarded based on the student's cost of attendance less the student's and/or family's ability to pay these costs--known as the expected family contribution (EFC). The allowance, which accounts for the amount of state and other taxes paid by students and families, effectively reduces the EFC. Given the potential impact of the allowance on the awarding of aid, we determined what factors have affected the updating of the tax data on which it is based, the effects the proposed 2003 update would have had on financial assistance for aid applicants, any limitations in the method for deriving the allowance, and strategies available to address them. While Education has been required to revise the allowance annually since 1993, prior to 2004 it attempted to update the allowance only twice--in 1993 and again in 2003--but the latter update was suspended. As a result, the 1988 IRS tax data used for the 1993 update remained in effect. The lack of updates is primarily because Education did not annually seek data needed to update the allowance or establish effective internal control to guide the updating process. Also, Education did not consider alternatives when data were not readily available.
Had the update been implemented in 2004-2005, the allowance would have decreased for most states; as a result, the EFC would have increased by about $500, on average, for a majority of aid applicants. Of those with an EFC increase, 38 percent would either have received less in Pell Grants ($144 less on average) or would have become ineligible for them; the percentage of recipients affected would have varied by income. Overall Pell Grant expenditures would have decreased by $290 million. Increases in EFCs could also have affected other forms of aid, including state aid; these effects in turn could have affected Stafford loans and Parent Loans for Undergraduate Students. The impact of the proposed update on Campus-Based, state, and institutional need-based aid would likely have varied based on state and institutional aid awarding policies and changes in state allowances. Due to certain limitations of the IRS dataset with respect to calculating the allowance, and problems with how Education uses this dataset, the current allowance may not reflect the amount of taxes paid by students and families. The dataset is limited because the taxpayers included in it are generally not representative of aid applicants, it does not include all state and other taxes paid by students and families, and the tax data are several years older than the income information reported by applicants on aid applications. In addition to these limitations, Education does not make full use of the dataset to better reflect the varying tax rates paid by taxpayers in different income groups. Strategies we identified for addressing the limitations of the tax allowance include (1) using IRS data with revisions to the method for calculating the allowance, (2) substituting IRS data with one of several alternative data sources, (3) using a standard allowance for all aid applicants irrespective of state of residence, or (4) collecting tax information directly from aid applicants. 
These could require modest to substantial changes, would differ in their impact on applicants and federal costs, and could require legislative changes.
As shown in table 1, CMS and TMA contract with numerous firms to perform many of the functions necessary to administer the Medicare and TRICARE programs. In addition, state agencies administer the Medicaid program. Federal contractors and state Medicaid agencies perform a wide variety of functions that require the use of personal health information. Such information may include medical diagnosis and treatment records and patient identifiers, such as name, address, date of birth, Social Security number, and evidence of insurance coverage. For example, when making a claims payment determination, federal contractors and state Medicaid agencies verify patient eligibility and assess whether the services provided were medically necessary. In some cases, assessing medical necessity requires a review of the patient’s medical history and treatment records. In addition to claims processing, federal contractors and state Medicaid agencies use personal health information when enrolling beneficiaries, operating telephone call centers, conducting disease management programs, administering pharmaceutical benefit management services, and performing fraud investigations. A number of laws provide protection for personal health information. Under the HIPAA Privacy Rule, certain health care organizations and individuals—known as covered entities—are required to ensure that patients’ personal health information is not improperly disclosed. Covered entities—health care providers, health plans, and health care clearinghouses—must develop policies and procedures for protecting health information. These include restricting the amount of information disclosed to the minimum necessary to accomplish the intended purpose and to the workforce needing access. Other requirements under the HIPAA Privacy Rule include designating a privacy official and training employees on the covered entity’s privacy policies. 
Certain HIPAA Privacy Rule safeguards also apply to “downstream users”—whether or not they are covered entities—through contractual agreements. The HIPAA Privacy Rule requires covered entities to enter into “business associate agreements” with other firms or individuals to which they transfer personal health information for certain clinical, operational, or administrative functions. Business associate agreements must establish the conditions under which a downstream vendor may use and disclose personal health information and the privacy safeguards they must apply. Covered entities are not required, under the rule, to monitor their business associates’ use of privacy safeguards, but must take corrective action if they become aware of a pattern of activity or practice that amounts to a material breach of the agreement. The HIPAA Privacy Rule applies directly to state Medicaid agencies, Medicare Advantage contractors, and TRICARE contractors that act as health plans or providers, and indirectly to Medicare FFS contractors and other TRICARE contractors. Specifically, state Medicaid agencies, Medicare Advantage, and TRICARE contractors that act either as health plans or providers are covered entities under the HIPAA Privacy Rule, while Medicare FFS contractors and the remaining TRICARE contractors are considered business associates to CMS and TRICARE, respectively, in their capacity as program contractors. Requirements under the HIPAA Privacy Rule also apply to certain downstream vendors that receive personal health information from federal contractors and state Medicaid agencies through outsourcing arrangements. In addition to the HIPAA Privacy Rule, U.S. law includes a number of statutes that provide privacy protections, and some of them are applicable only to federal agencies and their contractors. The Privacy Act of 1974, for example, places limitations on agencies’ collection, disclosure, and use of privacy information. 
Furthermore, the Federal Information Security Management Act of 2002 generally concerns the protection of personal information in the context of securing federal agencies’ information, and requires agencies to develop information security programs that include contractors. Finally, the Social Security Act requires that state Medicaid agencies limit the use and disclosure of personally identifiable information to purposes directly related to administering the state’s Medicaid program. A majority of the federal contractors and state Medicaid agencies we surveyed engage domestic vendors to perform services involving personal health information, but rarely transfer personal health information directly offshore. However, offshore outsourcing is initiated by some domestic vendors, which transfer personal health information to offshore locations. The actual prevalence of offshore outsourcing by domestic vendors may be greater than reported, as many federal contractors and state Medicaid agencies did not know whether their domestic vendors further transferred personal health information. A majority of federal contractors and state Medicaid agencies use domestic vendors to perform services involving personal health information. (See table 2.) At the same time, only one Medicare Advantage contractor and one state Medicaid agency reported direct offshore outsourcing of services involving personal health information. No Medicare FFS contractors or TRICARE contractors reported direct offshore outsourcing. When outsourcing domestically, the federal contractors and state Medicaid agencies typically rely on more than one vendor, although the extent to which this occurs varies across the three insurance programs. In our survey, Medicare Advantage contractors reported outsourcing services involving personal health information to a median of 20 domestic vendors per contractor. 
In contrast, TRICARE contractors and Medicaid agencies reported a median of 7 domestic vendors, while Medicare FFS contractors reported a median of 3 domestic vendors per contractor. Although only one federal contractor and one state Medicaid agency reported transferring personal health information directly to an offshore vendor, contractors and Medicaid agencies also reported offshore outsourcing through the activities of their domestic vendors. Specifically, federal contractors and state Medicaid agencies reported that their domestic vendors further transfer personal health information either to the vendors’ offshore locations or to another vendor located outside the United States through downstream outsourcing. Nineteen percent—33 of 173—of the Medicare Advantage contractors who responded to our survey reported that one or more of their largest domestic vendors transfer personal health information to a location outside of the United States. Four percent (2 of 45) of Medicare FFS contractors and 2 percent (1 of 45) of Medicaid agencies reported offshore outsourcing initiated by domestic vendors. Although each respondent indicated that these offshore transfers involved personal health information, we did not ask for detailed information about the amount of data transferred. No TRICARE contractors reported offshore outsourcing by their domestic vendors. Our survey results may underestimate the full extent of offshore outsourcing of services involving personal health information. Some federal contractors and state Medicaid agencies did not always know whether their domestic vendors engaged in further transfers of personal health information—domestically or offshore—while others indicated that they did not have mechanisms in place to obtain such information.
Medicare Advantage contractors—which have more domestic vendors per contractor than other federal contractors or state agencies in our survey— were least likely to have information about whether further data transfers were occurring on behalf of their program. When asked about their three largest domestic vendors, 57 percent of Medicare Advantage contractors reported that they did not know whether these vendors further transferred personal health information. Similarly, 29 percent of Medicare FFS contractors and 26 percent of Medicaid agencies reported that they did not have this information for all three of their largest domestic vendors. (See table 3.) According to our survey, most instances of offshore outsourcing by vendors occur when the domestic vendor transfers personal health information to one of its own locations outside of the United States or to an affiliated entity, such as a subsidiary, located in another country. Of the 33 Medicare Advantage contractors that reported offshore outsourcing by vendors, 30 described instances that fit this pattern. For example, one Medicare Advantage contractor reported outsourcing to a Midwest vendor a contract to scan paper claims and create and store electronic records. The vendor, which has multiple domestic and several international locations, performs these services in Mexico. In another case, a Medicare Advantage contractor reported using its wholly owned subsidiary to provide claims data entry services. Rather than using employees at its U.S. location, the subsidiary transfers the personal health information to a location it has in India, where the data entry services are performed. A Medicare FFS contractor reported a similar instance in describing its vendor’s offshore outsourcing. Its domestic vendor transfers personal health information to the vendor’s own facility in Jamaica to process Medicare claims. 
Offshore outsourcing was also reported to occur when domestic vendors transfer data to independent, third-party vendors located in other countries. According to our survey, this type of offshore outsourcing is less common than the type in which the offshore vendor is related to the domestic vendor. Three of the 33 Medicare Advantage contractors who reported vendor-initiated offshore outsourcing indicated that their domestic vendors transfer personal health information to an independent foreign vendor. For example, a Medicare Advantage contractor reported using a domestic subsidiary to provide claims data entry services. This subsidiary, in turn, engages in downstream outsourcing with an independent vendor located in India, where the data entry services for the Medicare Advantage contractor are performed. Medicare Advantage contractors were not the only respondents to report such downstream outsourcing relationships. A state Medicaid agency reported that its domestic vendor for customer services, which include handling call center operations and member enrollment, relies on an independent vendor located in India to perform these services. Although our survey identified several countries as locations for offshore vendors, India was the predominant destination for outsourcing services that involve personal health information. Of the 33 Medicare Advantage contractors whose domestic vendors were responsible for most of the offshore outsourcing reported in our survey, 25 reported that personal health information had been transferred to workers located in India. Less common locations included Ghana and Mexico, with nine and six instances of offshore outsourcing, respectively. (See table 4.) Privacy experts have emphasized that the contracts between firms and their vendors are important to ensuring privacy when outsourcing services that involve personal information. They also suggest safeguard measures that should be considered to protect privacy when outsourcing. 
These include measures to be taken during the vendor selection process and after personal health information has been outsourced. Federal contractors and state Medicaid agencies responding to our survey varied substantially in their reported use of these safeguard measures. Privacy experts indicated that having specific provisions in contractual agreements is key to ensuring that personal information is properly protected when transferred to a vendor. They noted that contracts should specify the vendors’ responsibilities for maintaining safeguards to protect personal information, circumstances under which personal information may be disclosed, and rules for subcontracting. In fact, the HIPAA Privacy Rule requires such contractual agreements to protect against unauthorized disclosure of personal health information by vendors that receive such information from covered entities to perform certain clinical, operational, or administrative functions. The Privacy Rule further specifies certain contract elements, including the conditions and safeguards for uses and disclosures of personal health information. To ensure that these conditions and safeguards also apply to downstream vendors, the Privacy Rule requires a firm’s or individual’s business associates to agree in writing that any subcontractor to which they subsequently transfer personal health information will also contractually agree to the same set of safeguards. At the same time, however, privacy experts point out that differences in national data privacy laws may influence the significance of a firm’s contracts with its vendors. Countries differ in the scope of their data privacy laws, with some offering broader data privacy protections than those available in the United States and others with essentially no legal protections for data privacy. 
For example, personal data transferred to a member country of the European Union (EU) would have to be handled in a manner consistent with the European Commission’s Data Protection Directive, which is generally considered to require more comprehensive data protection than does the United States. By contrast, India has no law that establishes protections for personal data. When a U.S. firm does business with a vendor in a country with relatively weak or narrow data privacy protections, experts noted that the contract between the outsourcing firm and the vendor can be used to help ensure data privacy. In the United States, vendors could be held liable according to the terms of their contract with the covered entity, which they are required to have by the HIPAA Privacy Rule. To make certain that data are similarly protected when outsourcing to a country with weaker privacy protections, experts indicate that the contract should be used to specify, in detail, the vendor’s privacy practices and the right to terminate the contract in the event of a privacy breach. The contract also may specify which country’s laws will be applied to resolve disputes that arise under the contract, which has implications for both interpretation and enforcement of the contract. When considering the implications of foreign privacy laws on data transferred offshore, another factor to consider is the legal status of the vendor. The experts we consulted generally agreed that transferring personal data to an entity with an offshore location may afford—at least in theory—the same level of privacy protections available in the United States, if the offshore entity is subject to U.S. law, as may be the case when the entity is incorporated in the United States. For firms seeking data protections beyond those afforded by contracts, experts recommend several safeguard measures.
Specifically, experts suggest that firms transferring personal health information to vendors should assess potential vendors’ privacy practices when selecting a vendor, monitor vendor performance on privacy practices, and be aware of downstream outsourcing. Experts recommended that in the vendor selection process, firms assess potential vendors’ privacy practices. In addition to evaluating a vendor’s written policies, experts suggested that the overall importance afforded privacy within the organization’s culture may be an equally significant factor, as it drives the likely implementation of written privacy policies. Experts noted different approaches to evaluating potential vendors. Describing his organization’s informal approach, the privacy officer for a large provider group explained that he consults with other clients of the vendor about their level of satisfaction and considers the vendor’s long-term stability and reputation. In contrast, the chief privacy officer for a large information technology company described her firm’s formal process for evaluating potential vendors. Using written risk-rating criteria, her firm’s legal and procurement departments evaluate potential vendors’ privacy practices. Beyond informing selection decisions, the criteria subsequently serve as the basis for vendor evaluation and auditing. When considering a potential vendor, some experts suggested that the extent of the assessment should be determined by the perceived data privacy risk, such as the sensitivity of the data being transferred. Experts also emphasized the importance of ongoing oversight of vendors and their activities, noting that monitoring vendor performance on privacy practices helps to ensure that contractual agreements are implemented. Experts described monitoring activities as a good risk management practice, and particularly important if the vendor is performing a critical business function or handling very sensitive personal health information.
As one approach, a privacy expert suggested that outsourcing firms should require regular reports from vendors describing compliance efforts, privacy violations, and the use of any downstream vendors. While privacy experts recognized monitoring as a valuable safeguard, some said that adequate monitoring may be a challenge to implement. Vendors—especially those with substantial market power—may be reluctant to allow monitoring of their operations. In other cases, outsourcing firms may find it impractical or may not have sufficient resources to monitor each of their vendors. In such a situation, experts suggested that monitoring efforts should be focused on vendors that handle the most sensitive information, handle the largest volume of personal data, or have the highest risk for privacy breaches. With respect to monitoring the operations of geographically distant vendors, experts stressed that alternatives to traditional monitoring may be used to minimize logistical challenges, such as hiring a third-party audit organization to conduct regular on-site visits. Experts stressed that information about the number, and identity, of vendors that handle personal information is critical to the outsourcing firm’s ability to assess and mitigate privacy risks. One expert we spoke with explained that with information about its vendors’ downstream data transfers, the outsourcing firm is in a better position to monitor how its data are being handled. Some outsourcing firms require their vendors to obtain approval prior to subcontracting, while others require vendors to report regularly on all subcontractors. In some cases, however, information about downstream vendors can be difficult to obtain, experts noted. One expert on corporate compliance cautioned that vendors may resist such prior approvals and reporting requirements, citing the need for flexibility in responding quickly to changes in workload. 
Federal contractors and state Medicaid agencies that outsource services involving personal health information varied substantially in their reported use of the three expert-recommended safeguard measures. For example, 39 percent of Medicare FFS contractors reported taking steps to assess potential vendors’ privacy practices compared with 67 percent of state Medicaid agencies. With respect to monitoring vendors’ privacy practices, 42 percent of Medicare FFS contractors reported doing so compared with 100 percent of TRICARE contractors. Forty-five percent of Medicare Advantage contractors reported awareness of downstream outsourcing compared with 74 percent of Medicaid agencies. With respect to the three recommended measures together, Medicare Advantage and Medicare FFS contractors reported the lowest use rates, at 27 and 29 percent, respectively. Use of the three recommended measures was more common among Medicaid agencies, at 51 percent, and TRICARE contractors, at 60 percent. (See table 5.) Our survey results show that a substantial number of federal contractors and state Medicaid agencies reported privacy breaches involving personal health information. However, TMA and CMS—the federal agencies that oversee the TRICARE, Medicare, and Medicaid programs—differ in their requirements for notification of privacy breaches involving personal health information. TMA requires reports of privacy breaches from all of its contractors. CMS collects such information from FFS contractors but not from Medicare Advantage contractors or from state Medicaid agencies. In responding to our survey, over 40 percent of federal contractors and state Medicaid agencies indicated that they, or one of their vendors, experienced a privacy breach involving personal health information in 2004 or 2005.
Among Medicare Advantage contractors, 47 percent reported recent privacy breaches, as did 42 percent of Medicare FFS contractors, 44 percent of Medicaid agencies, and 38 percent of TRICARE contractors. (See table 6.) These rates are comparable to the rate recently reported by commercial health insurers. In a 2005 health care industry survey, 45 percent of commercial health insurers reported the occurrence of at least one privacy breach from January through June 2005. It is difficult to interpret these data, because we did not ask respondents for information about the frequency or severity of their privacy breaches. The reported privacy breaches could have involved inappropriate disclosure of limited personal health information, such as mailing an insurance statement to the wrong address, or extensive disclosures, such as privacy breaches that involved information on many individuals or that occurred repeatedly. The federal agencies with responsibility for these programs vary in their requirements with respect to notification of privacy breaches. Since 2004, TMA has required all TRICARE contractors to report monthly on privacy breaches, including those experienced by each vendor handling enrollees’ personal health information and by health care providers. According to TRICARE officials, monthly reports provide detailed information about each privacy breach, including the contractor’s assessment of the “root cause” of the breach and steps taken to prevent further occurrences. TMA officials indicated that most privacy breaches occur at the vendor level or with health care providers, rather than with TRICARE contractor staff. During 2005, three large regional TRICARE contractors reported more than 130 separate privacy breaches to TMA officials. TMA officials told us that most breaches occurred inadvertently, such as when personal information was transferred to the wrong person because of incorrect mailing addresses (electronic and paper mail) or fax errors. 
In other cases, breaches occurred when health care providers or contractor staff—such as call center employees—inappropriately discussed personal health information with other employees. TMA officials said that the agency analyzes trends in the monthly reports and follows up with federal contractors that report recurring lapses in privacy.

In May 2005, CMS began requiring Medicare FFS contractors—but not Medicare Advantage contractors or Medicaid agencies—to report privacy breaches. CMS officials told us that in prior years, FFS contractors reported privacy breaches to CMS regional office staff responsible for contractor oversight. The agency changed its approach to monitoring privacy breaches by establishing a policy for federal contractors to notify CMS central office staff directly. Under the new policy, CMS requires FFS contractors to provide written notice, within 30 days of discovery, of all known or suspected privacy breaches, including those experienced by a vendor. These federal contractors must describe the privacy breach and subsequent corrective action plan—including any changes to policies, procedures, or employee training. From May through December 2005, under the new reporting requirement, CMS received eight reports of privacy breaches from four FFS contractors. CMS officials noted that most breaches occurred as a result of accidental disclosure of personal information. For example, the most commonly reported incident during 2005 occurred when beneficiary health information was mailed by an FFS contractor to the wrong health care provider.

CMS does not have comparable notice requirements for privacy breaches occurring with personal health information held by Medicare Advantage contractors or state Medicaid agencies. Agency officials told us that they do not require routine reporting of privacy breaches that may occur at these federal contractors and state Medicaid agencies or their vendors.
However, based on our survey results, these contractors and agencies, and their vendors, are likely to experience privacy breaches at a rate similar to that of FFS contractors.

When federal contractors and state Medicaid agencies outsource services involving personal health information, they typically engage U.S. vendors that may further transfer the personal health information they receive to downstream domestic or offshore workers. CMS and TMA officials have only recently taken steps to oversee their federal contractors’ and vendors’ management of sensitive health information. While reporting data transfers and data privacy breaches is now required under the TRICARE program and the Medicare fee-for-service program, CMS has yet to establish a reporting requirement for Medicare Advantage contractors and Medicaid agencies. We believe that federal contractors and state Medicaid agencies should be held accountable for how well personal health information, held by them or disclosed to their vendors, is protected. To help ensure that the personal health information entrusted to federal and state health programs is being adequately protected and to facilitate prompt corrective action when appropriate, the privacy breach notification requirements that currently apply to TRICARE and Medicare FFS contractors should also apply to other Medicare contractors that handle personal health information (such as Medicare Advantage contractors) and to state Medicaid agencies.

We recommend that the Administrator of CMS require all Medicare contractors responsible for safeguarding personal health information and state Medicaid agencies to notify CMS of the occurrence of privacy breaches.

We received written comments on a draft of this report from CMS and DOD. CMS agreed with our recommendation and described recent steps the agency has taken to obtain information on privacy breaches from Medicare Advantage contractors.
Specifically, CMS highlighted its June 9, 2006, memo to Medicare Advantage contractors requiring them to notify agency officials of breaches involving personal health information. CMS noted that it is developing specific instructions for its regional and central office staff about how to respond to such reports of privacy breaches. CMS also indicated that the HHS Office of Inspector General will be assisting the agency in assessing the adequacy of Medicare Advantage contractors’ systems for securing personal health information. In addition, CMS stated that it sent privacy reminder notices to the FFS contractors and selected other CMS contractors that handle beneficiaries’ personal health information. Although the administration of the new Medicare Part D outpatient prescription drug benefit was outside the scope of our work, CMS noted that its new requirements for reporting privacy breaches will also apply to the contractors that implement this benefit. CMS pointed out that the Social Security Act requires that state Medicaid agencies limit the use and release of personally identifiable information to purposes directly related to administering the state’s Medicaid program. We included a reference to relevant provisions of the Social Security Act in the background section of this report. Finally, CMS indicated that it has added language to its FFS contracts that would require contractors and subcontractors to obtain written approval from CMS prior to performing work at locations outside of the United States. In further discussion, agency officials clarified that CMS will be including this contract language in future Medicare FFS contracts. Thus, the revised language will take effect over the next several years as the current Medicare FFS contracts are competed and awarded to entities called Medicare administrative contractors (MACs).
CMS noted that 4 of the 23 MAC contracts have been awarded to date; the agency plans to complete its transition to the new MAC contracts by the end of fiscal year 2009. DOD concurred with our report findings and provided a technical comment, which we incorporated. We have reprinted the letters from CMS and DOD in appendixes II and III. We will send copies of this report to the Administrator of CMS, the Secretary of Defense, appropriate congressional committees, and other interested parties. Copies will be made available to others upon request. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about matters discussed in this report, please contact me at (312) 220-7600 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix IV.

We focused our review on Medicare, Medicaid, and the Department of Defense’s TRICARE program, which together cover over 100 million Americans. In this report we (1) examined the extent to which the Medicare and TRICARE federal contractors and state Medicaid agencies outsource—domestically or offshore—services involving the use of personal health information; (2) identified measures recommended by privacy experts for safeguarding outsourced personal information and examined use of these measures by the federal contractors and state Medicaid agencies; and (3) determined whether the federal contractors and state Medicaid agencies have experienced privacy breaches and whether the federal agencies that oversee Medicare, Medicaid, and TRICARE require notice from them when privacy breaches occur.
To determine the extent of service outsourcing, use of recommended practices, and experience with privacy breaches, we surveyed the federal contractors and state Medicaid agencies responsible for performing many of the administrative tasks associated with the day-to-day operations of Medicare, Medicaid, and TRICARE. In August 2005, we sent our survey to all 56 state Medicaid agencies, 252 Medicare Advantage contractors, 59 Medicare fee-for-service (FFS) contractors, and 11 TRICARE contractors. The federal contractors included in our survey were all those that held contracts with the Department of Health and Human Services’ Centers for Medicare & Medicaid Services (CMS) and the Department of Defense’s TRICARE Management Activity (TMA) to participate in these programs at the national level, as of January 2005. In some cases, a firm could have more than one contract. For example, the 59 Medicare FFS contracts included in our study were held by 42 firms in January 2005. In these instances, we sent the firms a separate survey for each of their contracts with the federal agencies. Consequently, for analysis and reporting purposes, we considered each contract separately. Survey response rates ranged from 69 percent (Medicare Advantage contractors) to 80 percent (state Medicaid agencies). (See table 7.) Survey questions addressed whether the federal contractor or state Medicaid agency outsourced services during 2005—domestically or offshore—that involved the use of personal health information. We asked the federal contractors and state Medicaid agencies that used outsourcing to provide the total number of domestic and offshore outsourcing agreements. To obtain information about downstream outsourcing, we asked respondents whether each of their three largest vendors further transferred personal health information, and if so, to which country. For most survey items, we did not independently verify information provided by respondents. 
However, we performed quality checks, such as reviewing survey data for inconsistencies and completeness. When necessary, we contacted survey respondents to obtain clarification before conducting our analyses. Our analysis of respondents and nonrespondents in each survey group, on variables such as entity size, type, and geographic location, did not identify substantial differences, suggesting that the risk of nonresponse bias is low. Among the survey items we reported on, we did not find substantial variation in item response rate. Based on these efforts, we determined that the survey data were sufficiently reliable for the purposes of this report.

To identify privacy practices recommended by industry experts to protect personal information from inappropriate disclosure when outsourcing, we reviewed relevant literature on privacy practices, domestic outsourcing, and offshore outsourcing. Our review included perspectives from the health care and financial business sectors, including syntheses of best practices. Using a structured interview guide, we then interviewed privacy experts to identify commonly recommended business practices for protecting the privacy of personal information when outsourcing. We selected individuals to interview based upon literature they published on the topics of outsourcing and privacy protections and through referrals from other experts. We interviewed experts representing industry, consumer, and regulatory perspectives. We did not independently evaluate the feasibility, potential cost, or effectiveness of implementing experts’ recommended practices. Survey questions asked whether federal contractors and state Medicaid agencies routinely use these expert-recommended practices. We did not review to what extent the practices used by the federal contractors and Medicaid agencies comply with existing statutory and administrative requirements.
Through the survey, we also asked the federal contractors and state Medicaid agencies to report on their experience with privacy breaches during the previous 2 years. To obtain information on federal agencies’ requirements for notification of privacy breaches experienced by the federal contractors and state Medicaid agencies, we interviewed officials at TMA and at CMS—the federal agency with oversight responsibility for Medicare and Medicaid. We asked agency officials to provide us with summary data on the number and type of privacy breaches reported by federal contractors and state Medicaid agencies during 2004 and 2005. We did not provide a definition of privacy breach in the survey. We also examined the Health Insurance Portability and Accountability Act and its implementing regulations, but did not assess compliance with them or with other federal laws and regulations. In addition, we reviewed information on data privacy laws in selected countries that are destinations for offshore outsourcing.

We conducted our work from October 2004 through July 2006 in accordance with generally accepted government auditing standards.

In addition to the contact named above, Rosamond Katz, Assistant Director; Manuel Buentello; Adrienne Griffin; Jenny Grover; Kevin Milne; and Daniel Ries made key contributions to this report.
DOD’s science and technology community—including research laboratories, test facilities, industry, and academia—conducts initial research, development, and testing of new technologies to improve military operations and ensure technological superiority over potential adversaries. Afterwards, the acquisition community typically manages product development, in which technologies are further advanced and system development begins. These activities are generally supported by DOD’s RDT&E budget, which DOD groups into seven budget activity categories for its budget estimates and the President’s Budget. The categories follow a mostly sequential path for developing technologies from basic research to operational system development, as is shown in figure 1. The first three budget activity categories represent DOD’s science and technology activities to advance research and technology development, while the remaining budget activity categories are typically associated with product development for acquisition programs. See appendix II for a description of each budget activity. Funding for prototyping is mostly found in advanced technology development and advanced component development, budget activities 6.3 and 6.4, respectively. Appendix III provides a breakdown of budget activity 6.3 and 6.4 funding by organization for fiscal year 2016. Funding in budget activity 6.3 is not directly tied to acquisition programs, whereas budget activity 6.4 is typically used for that purpose. Funding for acquisition programs, including budget activity 6.5, was $28 billion in fiscal year 2016 and has varied over time, whereas science and technology funding was $13 billion and has remained relatively flat, as is shown in figure 2.

There are numerous types and definitions of prototyping.
One construct used by parts of DOD refers to conceptual, developmental, and operational prototypes, each of which has a different purpose and time horizon for when it can be expected to be incorporated into or become its own acquisition program. Figure 3 includes more information about each of these types of prototypes. Although each type is more mature or closer to a capability that can be fielded than its predecessor, prototyping does not have to proceed sequentially. For example, an operational prototype might not be preceded by a conceptual or developmental prototype if it is based on existing mature technologies or concepts.

Prototyping can involve a variety of different approaches, in terms of what is being developed and demonstrated, who is building the prototype, and how it is being acquired or managed. In system prototyping, a prototype that includes components for an entire system is developed, such as a prototype of a ground vehicle or missile. In subsystem prototyping, the prototype includes a group of components that combine to perform a major function for a system, such as a power supply system for a radar. In a DOD context, prototypes can be developed by contractors or groups of contractors, government labs, or both, and efforts can be managed by the science and technology community, acquisition programs, or other types of research and development organizations. When two or more contractors or other entities prototype the same component, subsystem, or system, the effort is referred to as competitive prototyping.

Over the years, DOD and Congress have taken steps to encourage prototyping during the technology development phase of weapon system acquisition programs.
In 2007, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics issued a memorandum on prototyping and competition that expressed concern that DOD decisions on acquisition programs were being based largely on paper proposals that provided inadequate knowledge of technical risk and a weak foundation for estimating development and procurement cost. To help address these concerns, the memorandum required pending and future acquisition programs to formulate acquisition strategies that call for conducting competitive prototyping up through the start of system development. Not long after, in 2009, Congress passed WSARA, which included a provision on competitive prototyping for MDAPs as well as many other reforms. WSARA called for competitive prototyping at the system level or—if not feasible—for critical subsystems, and allowed competitive prototyping to be waived only if the cost of producing competitive prototypes would exceed the life-cycle benefits of producing them or if without a waiver, DOD would be unable to meet critical national security objectives. If competitive prototyping was waived, WSARA required that programs produce a prototype before milestone B, if the expected life-cycle benefits of producing such prototypes exceeded the cost and its production was consistent with achieving critical national security objectives. DOD originally implemented WSARA’s competitive prototyping provision in December 2009, and Congress repealed it in 2015. However, as of the time of this review, DOD still required program officials to consider using prototyping and competitive prototyping at the system or subsystem level as a risk mitigation technique. Congress also included several new prototyping-related provisions in the fiscal year 2017 NDAA, which are discussed later in this report.

DOD prototyping also occurs outside or independent of acquisition programs.
One of the purposes of this type of prototyping can be to further disruptive innovation. Disruptive innovation attempts to shift the balance of military power in our favor by providing new capabilities, potentially unforeseen by the warfighter. The capabilities can be a result of new technologies, new ways to integrate existing technologies, or changes to how systems are employed. Disruptive innovation can also include providing existing capabilities at substantially lower cost, thereby increasing military advantage. Examples of potentially disruptive technologies include directed energy, hypersonics, and low cost missile defense capabilities. Prototyping can be a way to “test the waters” or experiment with new and potentially disruptive concepts and technologies without the level of commitment associated with starting acquisition programs. Prototyping in this environment may involve more risk, including the use of less mature technologies. There may also be no residual value at the end of a project other than increased knowledge and potentially a prototype “on the shelf” for further maturation.

Most major weapon system acquisition programs we examined used prototyping to reduce technical risk, investigate integration challenges, and validate designs, among other things. Program officials chose prototyping approaches to align with their assessments of program risks, with riskier programs prototyping more extensively. They generally found that prototyping provided a good return on investment. It helped programs better understand requirements, the feasibility of proposed solutions, and cost—the key elements of a program’s business case. Identifying key risks early and structuring prototyping efforts to inform key decisions helped maximize the utility of programs’ prototyping efforts.

The programs we reviewed used prototyping primarily to reduce technical risks, investigate integration challenges, validate designs, and mature technologies.
Of the 22 MDAPs we reviewed, 17 used some form of prototyping during technology development and 5 did not prototype. Of the 17 programs that prototyped, officials from 15 programs told us they chose to prototype because it made the most sense given the program’s needs. Officials from the other 2 programs told us their programs prototyped at the direction of the Under Secretary of Defense (USD) for Acquisition, Technology and Logistics (AT&L)—the milestone decision authority for their programs. All 17 programs prototyped for multiple reasons and officials from 11 programs identified four or more reasons. The reasons cited by each program are depicted in figure 4. Program officials stated that they tailored their prototyping approaches to their program’s risks, with riskier programs prototyping more extensively. Ten of the programs we reviewed conducted system-level prototyping, 7 programs conducted subsystem prototyping, and 5 did not prototype (see appendix IV for a brief overview of each program’s prototyping efforts or the reasons it did not prototype). Prototyping approaches varied within these categories. Some programs prototyped one or two subsystems while others used multiple contractors and multiple-phased prototyping efforts at the system and subsystem level. The five programs that did not conduct any prototyping used known designs and existing technologies, which DOD generally considers less risky. Four of the five entered the acquisition process without a technology development phase and most obtained waivers from the competitive prototyping requirements in DOD policy. Figure 5 shows examples of programs’ prototyping efforts—or lack thereof—and how they align with the program officials’ understanding of their risks. 
The 10 programs that developed system-level prototypes ranged from the Joint Light Tactical Vehicle, which involves less expensive, lower complexity items that will eventually be purchased in the tens of thousands, to the Space Fence Ground-Based Radar, which will produce only one large ground-based radar to detect, track, and provide information about objects in Earth’s orbit. System-level prototyping led to improved understanding of: (1) integration challenges and the feasibility of system designs; (2) significant unknowns, such as costs for ambitious requirements; (3) uncertainties related to integrating mature technologies in new ways; and (4) new technologies. Six of these programs first conducted subsystem prototyping to mature new or existing technologies used in new ways before using system-level prototyping to investigate integration challenges.

The seven programs that developed subsystem prototypes ranged from an F-22 modernization program that was upgrading the aircraft’s weapon and communication systems to the Ship to Shore Connector Amphibious Craft, an air-cushioned landing craft that transports personnel, weapon systems, and cargo. Subsystem-level prototyping efforts focused on narrower areas of perceived risk, such as maturing critical technologies, integrating a subsystem with other hardware or software, or testing specific components that are being considered for use in a system design. The programs that conducted subsystem prototyping were often building on or well-positioned to leverage existing weapon systems. Five of the seven programs in this category used existing platforms, either hardware or software, for their subsystem prototyping efforts. For example, the Ship to Shore Connector Amphibious Craft utilized the Navy’s existing landing craft as a platform to prototype and test new components.

Of the 17 programs that prototyped, 12 used competitive prototyping.
They did so for a variety of reasons, including to prove out potential solutions when more than one solution could be feasible and to gain knowledge from multiple sources about uncertainties, such as integration challenges, design feasibility, or cost. We found that most of the competitive prototyping efforts included system-level designs, and almost all programs that conducted system-level prototyping used competitive approaches, as is shown in figure 6. Competitive prototyping may generate more information about proposed solutions because contractors sometimes propose different design approaches or system concepts to meet DOD’s capability needs. DOD’s 2007 prototyping and competition memorandum also noted that competition would generate more knowledge about technical risk and build a stronger foundation for estimating costs.

According to officials from 16 of the 17 programs that prototyped, the benefits gained from their prototyping efforts were worth the cost and provided a positive return on investment. The benefits gained were central to the development of a sound business case, which includes evidence that (1) the customer’s needs are valid and can best be met with the chosen concept and (2) the chosen concept can be developed and produced with existing resources, such as time, money, and available technology. We have previously found that establishing a sound business case is essential to achieving better program outcomes. See appendix V for an analysis of acquisition outcomes to date for the programs we examined.

Prototyping provided programs with information on technology maturity, the feasibility of the design concepts, potential costs, and the achievability of planned performance requirements, which helped inject realism into their business cases. Appendix VI includes examples of these benefits, several of which we highlight below.

Prototyping demonstrated key technologies or proposed design solutions. For example, Space Fence officials stated that prototyping helped them determine that a riskier, cutting-edge design involving a 15 percent smaller radar was feasible. Without prototyping, the program would not have had sufficient information to be confident in the riskier option, nor would the contractor have proposed it without the opportunity to demonstrate that it worked.

Prototyping informed programs’ understanding of prices and helped validate business case cost estimates. During the prototyping process, contractors select vendors, develop supplier relationships, purchase materials, and build a version of the system or parts of the system, all of which provide information on potential costs. Air and Missile Defense Radar program officials stated that prototyping increased the cost information available to the program and led to cost reductions. They explained that competitive prototyping incentivized the contractors to determine their cost drivers in order to be more competitive in the next phase.

Prototyping helped programs better understand requirements and—in most cases—helped them make performance tradeoffs to meet cost targets. In the case of the Joint Light Tactical Vehicle, prototyping helped program officials determine that two versions of the vehicle were too heavy to be transported as planned. The program then lowered its transportability requirements by eliminating the need to airlift the vehicles in extreme conditions. This change allowed the vehicles to be heavier and resulted in $35,000 in savings per vehicle, according to the Army.

Prototyping provided a variety of other benefits as well. For example, prototyping helped programs improve system performance. Small Diameter Bomb II officials said that data collected during its prototype testing set the stage for improvements in its target classification software. Prototyping also helped identify potential reliability issues early. The Next Generation Jammer Increment 1 and Ship to Shore Connector Amphibious Craft programs changed certain subsystem materials based on information learned about wear during prototype testing. Further, for 11 programs, the prototypes served as test assets during system development or were used to continue development efforts.

Competitive prototyping approaches generated additional benefits, such as enabling more favorable business terms. According to the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Air and Missile Defense Radar program’s use of competition resulted in over $100 million in savings and will reduce operation and support costs over the life of the program. Air and Missile Defense program officials explained that having three competitors was helpful because it reduced the likelihood that contractors would team up in the next phase, leaving the government with only one proposal. In other cases, competition improved the quality of the systems being offered to DOD. For example, Space Fence program officials told us that competition spurred contractors to introduce cutting-edge designs and continue refining those designs in order to remain competitive in the next phase of the program. Finally, officials from several programs stated that contractors supplemented prototyping efforts using their own funds and believed contractors did this in order to make their subsequent offers more competitive.

A common perception is that competitive prototyping might cost more up front because multiple contracts are awarded, but our analysis showed that programs using multiple contractors did not have higher relative costs for technology development. Programs that used a competitive approach planned to spend a similar or smaller percentage of their total expected RDT&E funding prior to the start of system development compared with programs that did not competitively prototype.
Using competitive prototyping approaches did create additional administrative burdens in the short term because program offices had to manage multiple contractors and maintain firewalls to ensure fair competitions, but officials from across the programs we examined stated it was worth the investment.

Officials from all 17 programs that prototyped told us their prototyping efforts were useful; however, we found some programs got more out of their prototyping efforts than others. Based on those programs’ experiences and lessons other programs shared, we identified the following five practices that helped programs maximize the utility of their prototyping efforts.

1. Identify risks early and target prototyping efforts to address them. Officials from seven programs described early activities that helped identify risks and shape their prototyping efforts. The Air and Missile Defense Radar program planned a big leap in technology over the existing radar, which was fielded over 30 years ago. The program conducted early technology maturity assessments that identified the large aperture digital beamforming and calibration critical technology as the program’s key risk area. The program focused its prototyping efforts on maturing this and other critical technologies and demonstrating them in a relevant environment. It is on track to complete system development on time and within its estimated research and development cost.

2. Structure prototyping efforts to be completed in time to inform key decisions, particularly source selection. We found that many of the programs prototyped before selecting their system development contractor and about three-quarters of the programs also held preliminary design review before entering system development. Prototyping helped inform these decisions and related assessments.
Common Infrared Countermeasure program officials told us they required contractors’ proposals for the program’s system development contract to include solutions to address reliability failures identified during prototyping.

3. Specify the level of fidelity needed to provide the necessary information about which risks to address. Officials from 15 programs told us they prototyped designs similar to the actual design of the system they will develop. This is known as high fidelity prototyping. Officials from a few programs noted that doing so enabled them to understand what the system entailed and, if needed, make trade-offs accordingly. In contrast, Integrated Air and Missile Defense program officials stated that the results from prototyping were not as helpful to make programmatic decisions because they were limited to demonstrating the feasibility of a certain system concept using a generic government design.

4. Ensure the appropriate level of insight into the design and cost information. Officials from each program said they had sufficient visibility into the prototyping efforts, but some officials described having more insight into the efforts and their results than others. The level of insight can be affected by factors, such as the type of information a program requires a contractor to provide under a prototyping contract. Space Fence officials noted that in addition to conducting a live prototype demonstration, they had access to reports and data from contractors’ efforts. The officials said that the data obtained through prototyping were helpful for pricing. The program used the data to mature its cost estimate and was able to use a firm fixed price contract for system development. Space Fence finalized its design just over a year after entering system development, has had no cost growth to date, and anticipates that the contractor will deliver the system earlier than initially planned.

5. Keep plans flexible to adapt to information learned during the effort.
Officials from four programs told us they used multi-phased prototyping approaches, and a few described adding or removing contractors. Officials from at least three programs told us they changed strategies or modified their approach based on information learned or in response to a tighter budget environment. For example, after the Common Infrared Countermeasure program determined its technologies were less mature than expected, it added two prototyping phases to continue maturing the technologies, testing the system with related systems, and demonstrating manufacturing processes. The program entered system development with its technologies nearing maturity and completed its system design in just over a year. It has had about 6 percent cost growth as of October 2016 and estimates that it will complete system development on time.

To help ensure U.S. military capabilities outpace those of potential adversaries, DOD has expanded prototyping efforts focused on innovation, including disruptive innovation, and has started several new initiatives outside of major acquisition programs to address gaps in its innovation portfolio. However, these initiatives face barriers, such as limited funding and competing priorities. Literature on private sector innovation, including the use of prototyping, identifies key enablers for these types of efforts, such as developing a strategy for innovation, identifying relative levels of investments that align with innovation goals, and protecting funding for technology investments that have higher risk, but perhaps more reward, across the enterprise. DOD has taken steps that are consistent with a few of these enablers, but lacks others, such as an innovation strategy that could also address the role of prototyping.

Since 2012, DOD and the military departments have established seven new offices to increase prototyping and experimentation and further innovation.
Prototyping can be a way to “test the waters” with new and potentially disruptive concepts and technologies. Experimentation puts prototypes into the warfighter’s hands, so that the capabilities they provide can be assessed in an operational context. Most of the efforts we examined aim to mature technologies for future capabilities, but without the rigidity, commitment, and additional cost associated with starting new acquisition programs. Other than the experimentation initiatives, all of the initiatives involve demonstrations that seek to improve DOD’s or the military services’ understanding of the viability, maturity, and potential utility of the technologies, subsystems, or systems being prototyped. The demonstrations also inform decisions regarding potential next steps, such as transition to a military service in the case of mature capabilities that are ready to be put into use or to an acquisition program for those that need further development. Table 2 provides an overview of these initiatives.

The new initiatives help address gaps in DOD’s science and technology and weapon system investments and expand efforts to identify and mature potentially innovative and disruptive technologies. For example, the Army Technology Maturation Initiative uses budget activity 6.4 funding, which is typically associated with acquisition programs, to conduct higher-fidelity prototyping and further mature technology outside of those programs. Other initiatives focus on modifying already fielded equipment and technologies to use them in new ways, combining prototyping and rapid acquisition practices to field capabilities faster, and encouraging experimentation to explore how capabilities being prototyped could be employed in an operational setting.

The two most mature prototyping-related initiatives have made some progress.
For example, the Strategic Capabilities Office reported that it is currently in the process of transitioning six of its technology demonstration and prototyping projects to the military services. The Army’s Technology Maturation Initiative has also demonstrated some progress: it has six projects that have either transitioned to a program of record or are in the process of transitioning. However, with the exception of the Navy’s Technology Innovation Games, the other four initiatives are still in the early planning phases. Some of them are still in the process of developing charters, determining project selection processes, and documenting priorities. Most of the new rapid capabilities offices were established so recently that they were not included in the fiscal year 2017 budget request, but the Army plans to temporarily support its office with funding from existing Army accounts.

DOD’s new prototyping initiatives face several barriers that can make it challenging to obtain funding to start projects, manage the initiatives to achieve innovation, and transition the prototypes to acquisition programs. Literature on private sector innovation, including the use of prototyping, suggests that private sector firms face some of these same barriers. Key barriers we identified include:

Funding structure: Several studies have suggested that maturing technologies outside and independent of acquisition programs to higher technology readiness levels can promote innovation and facilitate technology transition. However, DOD’s funding structure and how it is commonly interpreted may limit the amount of higher fidelity prototyping conducted outside of acquisition programs. DOD’s science and technology community manages and invests research and development funding in budget activities 6.1-6.3, but does not typically use budget activity 6.4 funds.
According to DOD regulation, projects funded with budget activity 6.3 are to mature technologies to technology readiness levels 4, 5, or 6, while those funded with budget activity 6.4 are to result in the achievement of technology readiness level 6 or 7 (see app. VIII). Due in part to this budget activity structure, the science and technology community typically sees its role as maturing technologies to no higher than technology readiness level 6. As a result, until DOD and the military services’ recent prototyping initiatives, there were not many offices focused on further maturing technologies outside of acquisition programs.

Risk averse culture: Although it is appropriate to minimize risks in acquisition programs, some officials stated that excessive risk aversion outside of acquisition programs can stifle innovation. According to the Defense Science Board, over time, DOD has become increasingly risk averse and experimentation has moved towards scripted demonstrations, testing, and training. Pressure to justify budgets, demonstrate utility to the warfighter, and advance careers all contribute to this risk aversion. Many prototyping and innovation initiatives we reviewed emphasized high transition rates of between 80 and 100 percent. Generally speaking, transition means that a technology has been sufficiently matured and is ready to transition to a user such as a weapon acquisition program or the warfighter in the field. On one hand, a high transition rate can be an indicator that an initiative is generating a good return on investment and is developing capabilities that meet customers’ needs. But, for prototyping initiatives with the stated purpose of encouraging innovation, particularly disruptive innovation, making high transition rates a goal could be counterproductive and lead to a lower tolerance for risk or failure. For private sector projects focused on innovation, companies can aim for transition rates as low as 20 to 50 percent.
Competing priorities: Officials identified competition with projects the military services have previously funded and prioritized as a barrier to innovation efforts, both when requesting funding to prototype and later when trying to transition. Innovation literature suggests that companies frequently face this same problem. Resources are often not available for bolder projects because funds are consumed by pre-existing projects; furthermore, companies are more likely to devote resources to sustaining innovation, which gradually improves on existing products, rather than riskier disruptive innovation. The Secretary of Defense testified to Congress in September 2016 that he has seen the constant temptation over the years to starve new and future-oriented defense investments in favor of more established and therefore well-entrenched programs. He expressed concern that funding was being taken away from initiatives such as the Strategic Capabilities Office to instead pay for existing acquisitions. In fiscal year 2016, 6.4-funded initiatives that focus on prototyping and innovation represented less than 4 percent of budget activity 6.4 funds.

Long budget timelines: Long budget timelines make it difficult to start prototyping projects that address emerging threats in a timely manner. For example, as is illustrated in figure 8, a project conceived in February 2017 might not be authorized and appropriated funding until October 2018. Projects that are expected to take 3 to 5 years to complete in effect require 5 to 7 years from conception to completion. If there is a continuing resolution, it could take longer. These long timelines make it difficult to achieve the adaptability and faster capability development and fielding times that DOD seeks to keep pace with rapidly evolving threats.
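The budget-timeline arithmetic described above can be sketched as a notional calculation. The dates and the 3-to-5-year execution range come from the example in the text; everything else is illustrative:

```python
from datetime import date

# Notional example from the text: a project conceived in February 2017
# that is not authorized and appropriated funding until October 2018.
conceived = date(2017, 2, 1)
funded = date(2018, 10, 1)

# Lead time from conception to appropriated funding, in months.
lead_months = (funded.year - conceived.year) * 12 + (funded.month - conceived.month)

# A project expected to take 3 to 5 years after funding therefore spans
# roughly 5 to 7 years from conception to completion (and longer under a
# continuing resolution).
for execution_years in (3, 5):
    total_years = (lead_months + execution_years * 12) / 12
    print(f"{execution_years}-year project: about {total_years:.1f} years from conception to completion")
```

With a 20-month budget lead time, the 3- and 5-year execution cases work out to roughly 4.7 and 6.7 years end to end, which rounds to the 5-to-7-year range cited in the text.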
DOD can take special steps to provide funding in other ways, such as through reprogramming; but, in general, long budget timelines not only make it difficult to succeed fast, they also make it difficult for initiatives to “fail fast” and for DOD to move on to potentially more promising projects.

Synchronization with acquisition programs: Prototyping efforts may not be complete at the most opportune time for acquisition programs, as is reflected in figure 9. If the effort is completed too early, technology can rapidly become obsolete before a relevant acquisition program is begun. If a prototyping effort is completed after an acquisition program has begun, the program may not be willing to adopt it. Defense Advanced Research Projects Agency officials noted that partners must budget 2 years in advance to further mature or transition technologies, which exacerbates this problem. Congress included a provision in the Fiscal Year 2017 NDAA that, depending on how it is implemented by DOD, could help make it easier to transition new technologies and components to programs that have already begun system development. The NDAA requires that certain MDAPs be designed and developed, to the maximum extent practicable, with a modular open systems approach. This type of approach, which includes a modular design and standard interfaces, enables system components to be more readily replaced.

The literature we reviewed on private sector innovation highlights several key practices related to how to organize and manage innovation units, fund projects, and address potential culture barriers. These partially align with recent DOD actions. Some of these practices apply directly to prototyping, while others address innovation more broadly. The key practices or enablers we identified are listed in table 3 below.
We compared DOD’s new prototyping initiatives with these enablers to determine whether DOD is well-positioned to generate the type of innovation, including disruptive innovation, that it is seeking. DOD has issued multiple memorandums related to prototyping and innovation, as reflected in table 4, but these documents fall short of a strategy. Specifically, with regard to prototyping and innovation, none of the documents we reviewed communicate strategic goals and priorities or delineate roles and responsibilities among DOD and the military services’ initiatives, which are elements of the innovation strategies described in the literature as well as standards for internal control.

Congress included a provision in the Fiscal Year 2017 NDAA that provides some strategic direction for certain prototyping projects. It calls for the military services to establish or identify oversight boards that will develop triennial strategic plans to prioritize capability and weapon system component portfolio areas for prototyping projects, among other things. However, it is not yet clear whether there will be a mechanism to tie these efforts into a department-wide strategy.

DOD’s lack of an innovation strategy means it has to rely on other mechanisms to coordinate and provide strategic direction for its prototyping initiatives, although those mechanisms do not cover some of DOD’s prototyping and innovation activities and do not establish department-wide priorities. For example, Communities of Interest (COI), which are organized by the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), help plan, coordinate, and share knowledge on science and technology activities for budget activities 6.2 and 6.3, but there are no analogous mechanisms for 6.4-funded activities, including those related to prototyping and innovation. The 17 COI working groups are generally organized by portfolio—for example, advanced electronics—and include representatives from across DOD.
They periodically develop roadmaps for their portfolios. ASD(R&E) officials explained that the roadmapping process is not directly tied to budget decisions and does not establish department-wide science and technology priorities. However, it does help identify investment gaps, opportunities for collaboration, and areas of potential overlap or overinvestment. DOD’s Long Range Research and Development Planning Program suggested using COIs to prioritize the technology investments it identified, which would expand their focus beyond science and technology. We have previously found that one way to better manage potentially fragmented activities is to improve collaboration and coordination. Without an approach that covers relevant 6.4-funded activities, DOD may be missing out on opportunities to take a more strategic approach to prototyping and innovation across the department, including sharing information and identifying areas of potential under- or overinvestment related to prototyping and experimentation.

DOD is undergoing organizational changes that could provide more focused leadership, strategy development, and coordination for prototyping and innovation-related activities:

The NDAA for Fiscal Year 2017 establishes the position of Under Secretary of Defense for Research and Engineering. According to the conferees, the creation of this position was part of organizational changes to DOD that seek to, among other things, advance technology and innovation. The duties of the Under Secretary of Defense for Research and Engineering include advancing technology and innovation and establishing policies on all defense research and engineering.

DOD’s Defense Innovation Board has recommended that DOD establish the position of Chief Innovation Officer, to coordinate, oversee, and synchronize innovation activities across the department.
DOD has established the position of Deputy Director for Prototyping and Experimentation to oversee program execution, provide technical and programmatic advice, and work with DOD entities to identify shortfalls and potential technologies and projects to address them. However, the position only has authority over prototyping and experimentation efforts within the office of the Deputy Assistant Secretary of Defense for Emerging Capability and Prototyping. To influence prototyping activities outside of that office, including military service-led initiatives, the Deputy Director stated that he has to leverage his personal relationships and experience.

With DOD’s increased level of effort and investment in prototyping and innovation comes the potential for inefficiencies if efforts are not coordinated and aligned with an overarching strategy. Although these offices are generally attempting to meet different needs and are using a variety of approaches to achieve innovation, without an articulated strategy, there is a potential for overlap if their goals and approaches evolve over time.

With the exception of the Strategic Capabilities Office, DOD and the military services have not allocated large amounts of funding to their new prototyping and innovation initiatives in their budget requests, and they will have to compete with other priorities to receive funding in the future. DOD’s fiscal year 2017 budget request included less than $100 million for each of the six other initiatives we examined. One approach identified in the academic literature that helps ensure innovative projects receive sufficient funding, in the face of competing priorities and a risk averse culture, is called a “strategic buckets” approach. Under this approach, management makes a strategic decision to allocate set “buckets” of resources for different types of projects, including breakthroughs, and then takes steps to ensure adequate funding for innovation efforts.
The distribution of resources among different buckets is dictated by the organization’s strategy. This approach is consistent with portfolio management best practices, which call for organizations to use an integrated approach to prioritize needs and allocate resources in accordance with strategic goals. Figure 10 includes a notional depiction of how this approach could be adapted to the basic tenets of DOD’s prototyping and innovation efforts.

To implement a strategic buckets approach for innovation, an organization needs to develop innovation goals, reflect those goals in its innovation strategy, inventory current projects and funding allocations, and then adjust funding levels, if needed, to make sure they align with its goals and strategy. Decisions to change relative levels of investment in different buckets may be made over time in response to changing world events. For example, DOD’s concern about losing its eroding warfighting edge in certain areas could cause it to place a higher priority on prototyping systems that could lead to disruptive innovations.

The Navy’s approach to managing its science and technology investments, including its prototyping and innovation initiatives, has elements of a “strategic buckets” approach. It maps out roughly the percentage of funding that it plans to request for different parts of its science and technology portfolio, including some of its prototyping and innovation initiatives, as is reflected in figure 11 below. The Navy uses this information to help develop its science and technology budgets. The Navy has not extended the concept to other research and development budget activities, such as 6.4, which are largely driven by decisions on individual acquisition programs. DOD has also employed aspects of this approach to set and enforce minimum funding levels for its science and technology investments, but it lacks certain prerequisites needed to apply it more broadly to prototyping.
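A minimal sketch of the “strategic buckets” check described above: leadership sets a target share of the portfolio for each innovation type, then compares actual funding requests against those targets. The bucket names, target shares, and dollar figures are illustrative assumptions, not DOD or Navy figures:

```python
# Notional strategic targets: what share of the portfolio each bucket
# should receive. These percentages are illustrative assumptions only.
TARGET_SHARES = {
    "sustaining innovation": 0.60,
    "higher-fidelity prototyping": 0.25,
    "disruptive/breakthrough": 0.15,
}

def bucket_gaps(requests):
    """Return each bucket's shortfall (positive) or surplus (negative)
    relative to its strategic target, in the same units as the requests
    (here, millions of dollars)."""
    total = sum(requests.values())
    return {bucket: round(share * total - requests.get(bucket, 0.0), 1)
            for bucket, share in TARGET_SHARES.items()}

# Example: a notional $1,000M portfolio where disruptive work is underfunded
# relative to the strategy and sustaining work is overfunded.
requests = {
    "sustaining innovation": 700.0,
    "higher-fidelity prototyping": 250.0,
    "disruptive/breakthrough": 50.0,
}
print(bucket_gaps(requests))
```

In this hypothetical, the check flags a $100M shortfall in the disruptive bucket and a matching $100M surplus in the sustaining bucket, which is the kind of over- and under-investment signal the strategic buckets approach is meant to surface.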
ASD(R&E) officials explained that, in recent years, the Office of the Secretary of Defense has communicated an investment floor for budget activities 6.1 through 6.3 in the Defense Planning Guidance, which provides strategic direction for DOD budget formulation. ASD(R&E) enforces these levels by reviewing military service budget requests and directing funding increases or other shifts to ensure the floor is met across the department. Although ASD(R&E) has responsibility for overseeing research and development activities under budget activities 6.1 through 6.4, ASD(R&E) officials stated that they do not review budget requests for budget activity 6.4 because those funds are primarily allocated to acquisition programs. By not exercising its authority over the full range of budget activity 6.1 through 6.4 funding, ASD(R&E) is missing an opportunity to assess prototyping activities collectively from an enterprise level to determine if and how this funding might best be used to support DOD’s prototyping and innovation initiatives and its strategic goals. DOD also lacks the innovation strategy and baseline understanding of its prototyping projects and their associated funding needed to identify areas of potential over- and under-investment as well as appropriate investment targets.

Better Buying Power 3.0 called for (1) the USD(AT&L) and Vice Chairman of the Joint Chiefs of Staff, who oversees the weapon system requirements process, to conduct annual reviews of each service’s budget activity 6.3- and 6.4-funded prototyping and experimentation activities, and (2) the ASD(R&E) to develop, maintain, and publish a database of government and industry experimentation capabilities and events, and make annual recommendations to the military services and USD(AT&L) for additional prototyping. USD(AT&L) and military service officials stated that the reviews have not been held due to difficulties with scheduling.
In addition, although ASD(R&E) officials took steps to develop the database, it was not completed, and efforts to make the information available through a different database were unsuccessful. Both the reviews and the database could have provided useful information about prototyping and experimentation activities and opportunities across the department. The NDAA for Fiscal Year 2017 included a provision that could provide more information about funding for select prototyping initiatives outside of acquisition programs. This provision could better position DOD to set and track prototyping investment targets. In budget requests after fiscal year 2017, for budget activity 6.4, the provision requires DOD to state the amounts requested for prototyping and experimentation of weapon system components and technologies separate from acquisition programs of record. Those requests are to reflect priority areas for prototyping. Furthermore, the legislation calls for military services to establish or identify prototyping oversight boards to, among other things, annually recommend funding levels for prototype projects across capability or weapon system component portfolios, although no analogous recommendations are required for efforts outside of the military services.

Most of DOD and the military services’ prototyping and innovation initiatives use more of a “demand pull” approach to selecting projects, which could limit their likelihood of generating disruptive capabilities (see app. IX for a list of these initiatives). This was the case for both older and newer initiatives. Demand pull initiatives focus on prototyping technologies or systems to address validated requirements, which means they have built-in constituencies ready to support them. On the other hand, “supply push” initiatives take on projects without a stated customer need and do not align with existing organizational structures.
This can make it difficult to gain support for supply push projects, particularly when it is time to transition them into programs. For example, the Navy had to establish an Unmanned Maritime Systems program office when unmanned underwater capabilities languished because there were no “customers” given existing organizational structures. DARPA, with an annual budget of over $1 billion, is DOD’s largest example of an organization that primarily uses a supply push model. An overreliance on demand pull can lead to incremental improvements in capabilities without ever achieving a more disruptive breakthrough. The Deputy Assistant Secretary of Defense for Research stated that DOD’s requirements process is a model for slightly improving how DOD conducts operations now rather than thinking outside the box of the art of the possible. DOD does not have a similar process designed to foster more innovative solutions. Without an innovation strategy that sets goals and aligns funding for demand pull and supply push projects accordingly, DOD’s prototyping and innovation initiatives might not produce the types of disruptive capabilities and breakthroughs the department is seeking.

Most DOD prototyping and innovation initiatives we reviewed took steps so that they could learn quickly through their projects. Almost all of them have expected project turnarounds of 3 to 5 years or less. Initiatives such as the Army’s Technology Maturation Initiative and the Strategic Capabilities Office also regularly reviewed projects to determine whether they were still needed or feasible based on initial efforts and, if they were not, terminated projects accordingly. Two longstanding initiatives employed approaches to speed up the funding process. DOD’s Joint Concept Technology Demonstration program notifies Congress about new projects via letter prior to starting them rather than waiting to request approval in each budget request.
Officials from the Future Naval Capabilities Program stated that they use funding left over from projects completed in a given year for other projects, as long as the amount falls below a certain threshold.

DOD and military department officials acknowledge that there is a risk averse culture across the department, even with respect to prototyping and innovation initiatives. However, neither the officials we spoke with nor recent memorandums have described ways DOD is changing its metrics or incentives to encourage more risk tolerance within these initiatives, which is one of the enablers highlighted in the literature on private sector innovation. Other enablers, such as developing an innovation strategy and ensuring adequate funding to support it, could also help foster a more risk tolerant environment. DOD’s Defense Innovation Board is also in the process of identifying ways to develop a culture of innovation in DOD in which new ideas can be tested and fail without fear of ending or derailing the career of a science and technology manager, acquisition professional, or military officer.

Prototyping is a tool that can help DOD address a variety of both long-standing and recent weapon system acquisition and modernization challenges. When used effectively, it can help reduce risks and improve the likelihood that a weapon system acquisition program will be completed on time and on budget. Furthermore, it helps keep DOD’s technology pipeline stocked with new and innovative technologies that might provide the next great leap ahead in military capabilities and may even deter adversaries by demonstrating advanced capabilities. In the period since the Weapon System Acquisition Reform Act (WSARA) of 2009 was implemented, DOD acquisition programs have used prototyping to reduce risk and inject realism into their business cases, which has helped place them on sound footing for future success.
The results were notable on the programs we reviewed: lower technical risk, better understanding of requirements, and more information on potential costs, among other benefits. With the recent repeal of WSARA’s competitive prototyping requirements, there is a risk that programs will choose not to prototype. In doing so, those programs would forfeit the significant benefits that early prototyping can offer.

DOD’s efforts to expand prototyping and experimentation to help achieve the innovation and disruption needed to maintain its technological and military advantage are at a more nascent stage. However, challenges, such as limited funding, a risk averse culture, and competing priorities, are already apparent and may make it difficult for the efforts to gain momentum. Pending organizational changes, including the creation of the positions of Under Secretary of Defense for Research and Engineering and Chief Innovation Officer, provide an opportunity for DOD to elevate and take a more strategic approach to the mission of advancing technology and innovation. The literature on private sector practices provides a roadmap for how this new DOD leadership can enable innovation, including through the use of prototyping. But DOD will need to fully embrace certain key enablers that are not currently present in the department, including a strategy that addresses its disparate prototyping and innovation efforts and strategic goals that can be used to guide resource decisions. It will also need to work across funding structures for science and technology and more advanced development work that usually separate certain types of prototyping efforts. The recent increased level of effort and investment in prototyping and innovation comes with the potential for inefficiencies if efforts are not strategic and coordinated. Other high-risk investments in categories such as disruptive technologies may need to be protected from a risk averse culture, as well.
DOD has taken several steps to adopt aspects of private sector innovation practices and has developed mechanisms to coordinate and review its science and technology investments, but without a more strategic, inclusive, and deliberate approach overall, its new prototyping and experimentation initiatives might not generate the levels and types of innovation the department is seeking. To help ensure DOD takes a strategic approach for its prototyping and innovation initiatives and overcomes funding and cultural barriers, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Research and Engineering to take the following four actions:

- Develop a high-level DOD-wide strategy, in collaboration with the military services and other appropriate DOD components, to communicate strategic goals and priorities and delineate roles and responsibilities among DOD's prototyping and innovation initiatives.
- Take steps, such as adopting a "strategic buckets" approach, to help ensure adequate investments in innovation that align with DOD-wide strategy.
- Review budget activity 6.4 funding requests to help maintain a level of investment for budget activity 6.4-funded prototyping and innovation efforts that is consistent with DOD-wide strategy.
- Expand the Community of Interest working groups to include budget activity 6.4-funded prototyping and innovation initiatives in their science and technology planning and coordination processes or employ a similar coordination mechanism for budget activity 6.4-funded prototyping and innovation initiatives.

We provided a draft of this report to DOD for comment. In its comments, reproduced in appendix X, DOD concurred with our four recommendations. DOD also provided technical comments, which we incorporated as appropriate. 
We are sending copies of the report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Assistant Secretary of Defense for Research and Engineering. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XI. Our objectives were to assess (1) how the Department of Defense (DOD) has used prototyping prior to system development on major defense acquisition programs, and (2) what steps DOD has taken to increase innovation through prototyping activities conducted outside of major defense acquisition programs. To address our first objective, we examined 22 major defense acquisition programs that had a Milestone B decision, which approves entry into system development, between December 2009 and February 2016, or anticipated receiving Milestone B approval by February 2016 when we began our selection process. We selected December 2009 as the starting date because it was when DOD implemented the Weapon System Acquisition Reform Act of 2009 and its associated prototyping provisions. These programs and the dates they entered system development are included in table 5. To determine what prototyping approaches the 22 programs used, if any, and to identify costs, benefits, challenges, and lessons learned from their prototyping efforts, we reviewed program documents, such as technology development strategies, acquisition strategies, prototyping waivers, acquisition decision memorandums, independent cost estimates, and budget requests. We also conducted semi-structured interviews with officials from each of the 22 programs and reviewed prior GAO reports. 
We also examined prior GAO work related to acquisition program outcomes and the technology development phase. To examine the proportion of research, development, test, and evaluation (RDT&E) funds planned for development prior to each program's entry into system development, we reviewed program funding stream data obtained from December 2015 Selected Acquisition Reports. We calculated the RDT&E funds planned as of the month prior to the program's Milestone B approval date, which we obtained from program documents. We then divided the prorated amount by the program's current RDT&E cost estimate to obtain the proportion of RDT&E funds planned for use prior to system development. We excluded six programs that did not have complete data available. See appendix VII. To examine how these 22 programs fared in terms of cost and schedule performance, technology maturity, and design stability, we compared prototyping programs' data with non-prototyping programs' data. Specifically, for programs' cost outcomes, we examined the difference between programs' first full and current RDT&E cost estimates. Programs' first full estimates are typically developed upon program entry into system development at Milestone B. For programs' schedule outcomes, we examined the growth between when the program entered and completed system development using programs' first full and current estimates. Completion of system development usually occurs when Milestone C is achieved. For first full estimate data, we leveraged data collected as part of our annual assessment of DOD weapon systems. This included cost, quantity, and schedule data from the Defense Acquisition Management Information Retrieval Purview system, referred to as DAMIR. The team entered these data into a database and verified that the data were entered correctly. 
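The proportion calculation described above can be sketched in a few lines. Note that the report does not spell out the exact proration formula, so this sketch assumes the Milestone B fiscal year is prorated by whole months elapsed before the decision; the program figures are invented for illustration only.

```python
# Hypothetical sketch of the RDT&E proportion calculation described
# above. The monthly proration of the Milestone B fiscal year is an
# assumption; all dollar figures are invented.

def rdte_share_before_milestone_b(planned_by_fy, mb_fy, mb_month_of_fy, current_rdte_estimate):
    """Share of a program's current RDT&E estimate planned for use
    before system development begins at Milestone B."""
    # Fiscal years entirely before the Milestone B year count in full.
    before = sum(amount for fy, amount in planned_by_fy.items() if fy < mb_fy)
    # Prorate the Milestone B fiscal year by the months elapsed before
    # the decision (months 1..12 of the fiscal year).
    partial = planned_by_fy.get(mb_fy, 0) * (mb_month_of_fy - 1) / 12
    return (before + partial) / current_rdte_estimate

# Invented program: $200M planned in FY1, $300M in FY2, $100M in FY3,
# with Milestone B in the 7th month of FY3 and a $1,000M RDT&E estimate.
share = rdte_share_before_milestone_b({1: 200, 2: 300, 3: 100}, 3, 7, 1000)
print(round(share, 2))  # 0.55
```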
We converted all cost information to fiscal year 2017 dollars using conversion factors from the DOD Comptroller's National Defense Budget Estimates for Fiscal Year 2017 (tables 5-9). To assess the reliability of the data, the annual assessment team talked with a DOD official responsible for DAMIR's data quality control procedures and reviewed relevant documentation. They also confirmed selected data reliability with program offices. For current estimate data, we obtained RDT&E, total acquisition cost, quantity, and schedule estimates from the August 2016 Defense Acquisition Executive Summary reports. We determined that the data were sufficiently reliable for the purposes of this report. The selected programs in our review entered system development in December 2009 or after and are generally newer. To address concerns about examining outcomes given the relative newness of many of the programs, we excluded the following six programs from our cost and schedule analyses because they are too recent to have current estimates separate from the program's baseline or do not have approved first full estimates: Amphibious Combat Vehicle, B-2 Defense Modernization System Modification, Common Infrared Countermeasure, Next Generation Jammer Increment 1, Offensive Anti-Surface Warfare Increment 1, and Three-Dimensional Expeditionary Long-Range Radar. We excluded two additional programs—Enhanced Polar System and Space Fence Increment 1—from our schedule analysis because these programs will not hold a Milestone C. To examine the technology maturity and design stability of programs, we leveraged survey response data provided in support of our annual assessments of selected weapon programs. These assessments rely on data collected from program offices related to the technology readiness levels of their critical technologies and their percentage of completed design drawings. 
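The constant-dollar conversion described above is a simple deflator multiplication. The sketch below uses invented deflator values; the actual factors come from the DOD Comptroller's National Defense Budget Estimates for Fiscal Year 2017.

```python
# Hypothetical sketch of deflator-based conversion to fiscal year 2017
# dollars. The factors below are invented for illustration; real
# conversions use the DOD Comptroller's published tables.
DEFLATORS_TO_FY2017 = {2013: 1.061, 2015: 1.024, 2017: 1.000}

def to_fy2017_dollars(then_year_amount, fiscal_year):
    # Multiply a then-year amount by the factor that restates it in
    # constant FY2017 dollars.
    return then_year_amount * DEFLATORS_TO_FY2017[fiscal_year]

print(round(to_fy2017_dollars(100.0, 2015), 1))  # 102.4
```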
Our best-practices work has shown that a technology readiness level (TRL) 7—demonstration of a technology in an operational environment—is the level of technology maturity that constitutes a low risk for starting a product development program. For shipbuilding programs, we have recommended that this level of maturity be achieved by the contract award for detailed design. In our assessment, the technologies that have reached TRL 7, a prototype demonstrated in an operational environment, are referred to as mature or fully mature. Those technologies that have reached TRL 6, a prototype demonstrated in a relevant environment, are referred to as approaching or nearing maturity. Satellite technologies that have achieved TRL 6 are assessed as fully mature due to the difficulty of demonstrating maturity in an operational environment—space. No programs needed to be excluded from the technology maturity analysis. See appendix VIII for TRL definitions. Our best-practices work also shows that completion of at least 90 percent of engineering drawings at critical design review provides tangible evidence that the product's design is stable. Completed design drawings were defined as the number of drawings released or deemed releasable to manufacturing that can be considered the "build to" drawings. For shipbuilding programs, we asked program officials to provide the percentage of the three-dimensional product model that had been completed by the start of lead ship fabrication, and as of our annual assessment. Five programs were excluded from this analysis. The Joint Light Tactical Vehicle program does not track the percent of releasable drawings, and the Combat Rescue Helicopter, Next Generation Jammer Increment 1, Global Positioning System Next Generation Operational Control System, and Three-Dimensional Expeditionary Long-Range Radar programs have not yet held their critical design reviews. 
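The maturity categories described above amount to a small decision rule, which can be captured in a short sketch. The function name and string labels are ours, not GAO's; the thresholds and the satellite exception follow the report.

```python
# Sketch of the TRL-based maturity categories described above.
# Naming is illustrative; thresholds follow the report's assessment.

def maturity_label(trl, is_satellite=False):
    # TRL 7 (prototype demonstrated in an operational environment) is
    # assessed as mature or fully mature. Satellite technologies at
    # TRL 6 are also assessed as fully mature, given the difficulty of
    # demonstrating maturity in an operational environment—space.
    if trl >= 7 or (is_satellite and trl >= 6):
        return "mature"
    # TRL 6 (prototype demonstrated in a relevant environment) is
    # approaching or nearing maturity.
    if trl == 6:
        return "approaching maturity"
    return "immature"

print(maturity_label(7))                     # mature
print(maturity_label(6))                     # approaching maturity
print(maturity_label(6, is_satellite=True))  # mature
```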
Although the technology maturity and design stability information provided at key knowledge points provides excellent indicators of potential risks, by itself it does not cover all elements of risk that a program encounters during development, such as funding instability. See appendix V for a summary of program outcomes. To address our second objective, we reviewed fiscal year 2017 budget documentation and interviewed DOD and military service officials responsible for research and development to identify initiatives that DOD started in the past five years with the stated purpose of promoting innovation through prototyping and experimentation. We focused on broad-based initiatives, rather than ones focused on a specific technology area. We also examined key preexisting initiatives for contrast. We only included initiatives funded with budget activities 6.3 and 6.4, advanced technology development and advanced component development and prototypes, respectively, because those budget activities fund the development and testing of new concepts and capabilities using higher fidelity prototypes that have the potential for short- or medium-term application. We did not meet with rapid prototyping offices established for direct support to the conflicts in Afghanistan and Iraq because they were designed for a temporary contingency. To identify the initiatives' goals, focus areas, scope, approaches, funding characteristics, strategies, coordination mechanisms, and any barriers they face, we reviewed documentation from the initiatives, such as budget requests, charters, and briefings. We also interviewed program officials and obtained written responses to questions. 
To determine what direction and strategy DOD has provided for the initiatives, we analyzed DOD memorandums on the following subjects: Long Range Research and Development Plan, Defense Innovation Initiative, Wargaming and Innovation, and Better Buying Power 3.0 as well as Navy memorandums on: Task Force Innovation, Wargaming, and Innovation Funding within the Naval Research and Development Establishment. We also reviewed DOD’s Long Range Research and Development Planning Program briefing. We examined DOD’s process for coordinating science and technology investments, called “Reliance 21,” to determine the extent to which it addressed prototyping for innovation and whether it has the potential to do so. We also conducted a review of literature on innovation in the commercial sector, including the use of prototyping, to identify enablers that could be applicable in DOD and to identify barriers commercial sector organizations face. The literature was primarily from academic sources, but included some literature from the private sector. Specifically, we began with recognized experts in the field of innovation. We then used a snowball methodology to identify other key authors on innovation through databases such as ProQuest and WorldCat. We also asked DOD officials for recommendations regarding relevant authors and articles. Our literature search covered articles published from 1996 onward, with a majority written between 2005 and 2016. We identified 19 sources that were specific to our work. They primarily relied on interviews, surveys, and case studies. Through the literature search, we identified a number of general themes about spurring innovation across articles and interviews. We then developed a list of key enablers from these themes that could potentially apply to DOD prototyping for innovation activities. We also noted when these sources identified barriers to innovation that aligned with the barriers we identified as existing in DOD. 
To determine whether DOD’s practices are consistent with these enablers, we compared them with memorandums related to prototyping for innovation, the Navy’s and DOD’s approach to managing funding for innovative research and development as reflected in the Navy Science and Technology Strategy and in DOD briefings, demand pull and supply push emphases of prototyping for innovation initiatives, and initiatives’ approaches to learning quickly as reflected in their documentation. When applicable, we also compared DOD’s approach to its prototyping and innovation initiatives with additional sources including the Standards for Internal Control in the Federal Government (for strategy and goals); GAO work on fragmentation, duplication, and overlap (for coordination); and portfolio management best practices (for funding and prioritization). To inform all assessments for this objective, we interviewed officials from the Office of the Under Secretary of Defense (Comptroller) and officials from each military department’s comptroller’s office; Office of the Assistant Secretary of Defense for Research and Engineering; Office of the Deputy Assistant Secretary of Defense for Emerging Capabilities and Prototyping; Office of the Assistant Secretary of the Air Force for Acquisition; Office of the Assistant Secretary of the Army (Acquisitions, Logistics, and Technology); and Office of the Deputy Assistant Secretary of the Navy for Research, Development, Test, and Evaluation. We conducted this performance audit from September 2015 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The technology transition-related budget activities 6.3 and 6.4—which received $5.8 billion and $14.6 billion, respectively, in fiscal year 2016—support numerous entities across the Department of Defense (DOD). In fiscal year 2016, the Defense Advanced Research Projects Agency, the Army, and the Office of the Secretary of Defense received the highest levels of budget activity 6.3 funds—with each component receiving about 20 percent of total funding—and the remaining 38 percent was divided among several organizations (see figure 12). That same year, the Missile Defense Agency received the largest percentage of budget activity 6.4 funds, with 42 percent, while the Navy received 35 percent and the remaining 23 percent was shared among several organizations (see figure 13). Overall, prototyping appears to have reduced risks on more complex programs, as evidenced by the fact that most programs that prototyped had similar cost and schedule outcomes and attained key knowledge at similar rates as non-prototyping programs that were generally considered less complex and risky. See appendix I for the programs included in each of the following analyses. Table 9 below includes individual program outcomes to date. Our analysis of cost and schedule performance to date showed that for the programs we reviewed, most prototyping and non-prototyping programs' outcomes were similar (see figures 14 and 15). Nine of 11 prototyping programs had similar levels of research, development, test, and evaluation cost growth as the five non-prototyping programs. Officials from the two prototyping programs with higher levels of cost growth—Global Positioning System Next Generation Operational Control System (GPS OCX) and Integrated Air and Missile Defense (IAMD)—told us that they could have learned more from their prototyping efforts if they had produced higher fidelity prototypes or better addressed key risk areas. 
Six of nine prototyping programs with comparable data experienced similar schedule growth between the start and completion of system development as the five non-prototyping programs. The IAMD and GPS OCX programs were also among the prototyping programs with the largest schedule growth. We also found similar outcomes between prototyping and non-prototyping programs with respect to the technology and design knowledge they had at key points in the acquisition process (see tables 10 and 11). Most programs that prototyped during technology development matured or nearly matured their critical technologies before entering system development and released 90 percent of the system's drawings at critical design review (see appendix VIII for definitions of technology readiness levels). Demonstrating high levels of technology and design knowledge by critical points in the acquisition process is a GAO best practice that helps to reduce program risk. To get a general sense of the relative cost of programs' prototyping efforts, we compared programs' planned research, development, test, and evaluation (RDT&E) funding during technology development to their overall RDT&E cost estimate. Table 12 presents the programs' prototyping approaches and percent of RDT&E acquisition costs planned prior to system development for 16 of the programs we examined. Non-competitive programs ranged from 50 to 80 percent, whereas competitive programs ranged from 10 to 77 percent. Since 2012, DOD has expanded its prototyping and experimentation efforts in order to increase innovation, rapidly field new technologies, explore operational concepts, and hedge against threats from potential adversaries, among other purposes. Table 13 below includes newer prototyping and innovation initiatives and select older initiatives. We focused on the newer initiatives in the body of the report. 
Although, broadly speaking, these initiatives are all seeking to achieve the same goal of innovating to maintain military superiority, we found that they differ in the types of innovation they are trying to achieve, the scope of their efforts, and their approaches. The initiatives we examined varied in the primary types of innovation they sought to achieve (see table 14). Initiatives that primarily focus on incremental or sustaining innovation gradually improve existing products and capabilities. Initiatives focused on disruptive innovation attempt to shift the balance of military power by providing new capabilities, potentially unforeseen by customers or adversaries. The type of innovation sought should not necessarily be equated with risk. For example, the Strategic Capabilities Office (SCO) considers itself to be focusing on low technical risk solutions that are intended to have potentially disruptive results. We excluded experimentation initiatives from this analysis. Most of the prototyping and experimentation initiatives we examined, including all but one of the newer initiatives, were focused on addressing military service-specific needs (see table 15). Three initiatives address capabilities that could potentially benefit multiple services or that are seen as "falling through the cracks" of military service efforts. Those initiatives are among the most well-funded, with the SCO requesting $902 million for budget activities 6.3 and 6.4 and the Defense Advanced Research Projects Agency (DARPA) requesting $1.2 billion for budget activity 6.3 for fiscal year 2017. We excluded experimentation initiatives from this analysis. Most of the initiatives we examined focused on prototyping technologies or systems that address validated customer or warfighter requirements—meaning they have more of a "demand pull" approach to selecting projects (see table 16). 
In contrast, “supply push” initiatives take on projects without a stated customer need and may not align with existing organizational structures or warfighting concepts. DARPA has historically focused almost exclusively on “supply push” type projects, as does the Navy’s Innovative Naval Prototypes initiative. We excluded experimentation initiatives from this analysis as well as the Marine Corps Rapid Capabilities Office and the Navy’s Rapid Prototyping Experimentation and Demonstration initiative because it is not yet clear which approach they will take. DOD’s newer prototyping and innovation initiatives tended to differ from older initiatives in several ways, such as the DOD budget activity used, the transition paths available, and senior leadership involvement. The new initiatives are almost exclusively funded with budget activity 6.4. Budget activity 6.4 funds are viewed as allowing for higher-fidelity prototyping efforts, which are closer to the intended end item. Some previous studies have suggested that maturing technologies outside of acquisition programs to higher readiness levels using budget activity 6.4 can promote innovation and facilitate technology transition. Older initiatives primarily rely on budget activity 6.3 funds. The Army Rapid Capabilities Office and Navy Rapid Prototyping Experimentation and Demonstration plan to conduct rapid prototyping and then oversee a modified acquisition process for the most promising prototypes. These offices may take responsibility for the rapid prototyping pathway called for in the National Defense Authorization Act for Fiscal Year 2016. Older initiatives typically transition technologies to a different organization or acquisition program. Many of the new prototyping initiatives report to senior department leaders, which can help maintain support for investments and streamline decision-making, according to the literature on innovation in the private sector. 
The SCO director reports to the Deputy Secretary of Defense, which provides him flexibility to work throughout the department. Furthermore, several new initiatives have unique characteristics or functions that the department is trying to invigorate. The SCO focuses on retooling fielded equipment and technologies to employ them in new ways. This reflects a new tactic in DOD’s innovation efforts because no other initiative has focused on this approach. The office limits its efforts to specific platform-centric solutions with the intention of delivering capabilities quickly. According to SCO’s director, one advantage of this approach is that it can help “buy time” for other initiatives to develop and field the next generation of systems. SCO covers all development costs initially to prove out the idea and develop a prototype, then turns projects over to the services for consideration in their budget deliberation processes. DOD has increased SCO funding since fiscal year 2014 from $140 million to a planned $902 million in fiscal year 2017. This increase is driven by new projects as well as testing and development efforts on existing projects. Two offices—the Air Force’s Experimentation Initiative and the Navy’s Technology Innovation Games—reflect a renewed emphasis on experimentation in DOD to explore how new capabilities being prototyped could be employed in an operational setting, among other purposes. These offices use virtual or conceptual environments to explore early ideas for implementing new technologies. They can also help identify promising concepts, or allow officials to discuss how to adjust tactics, doctrine, and training prior to development of new technologies. For example, the Air Force’s Experimentation Initiative is using experimentation to explore how directed energy could be used in an operational context and plans to use this information to inform decisions about pursuing directed energy systems. 
The Air Force is also considering conducting hardware prototyping under this initiative to demonstrate the operational value of proposed concepts. The Navy's Technology Innovation Games employ a progression from workshops and discussions up through wargames and demonstrations. In addition to the contact named above, Ron Schwenn, Assistant Director; Pete Anderson; Leslie Ashton; Lorraine Ettaro; Daniel Glickstein; Laura Holliday; Richard Hung; Katherine Lenane; Loren Lipsey; Michael Sweet; Alyssa Weir; and Robin Wilson made significant contributions to this report.

DOD invests roughly $70 billion annually in weapon system research, development, test, and evaluation, including prototyping activities. Prototyping can help reduce risk in weapon system acquisition programs by improving understanding of technologies, requirements, and proposed solutions. It can also contribute to innovation by demonstrating the value of new technologies or systems. House Conference Report 114-102 accompanying a bill for the fiscal year 2016 National Defense Authorization Act included a provision for GAO to review how DOD's research and development funds are used and whether this approach effectively supports activities such as prototyping. This report assesses (1) how DOD has used prototyping prior to system development on major defense acquisition programs, and (2) what steps DOD has taken to increase innovation through prototyping activities outside of major defense acquisition programs. GAO examined prototyping activities for 22 MDAPs that planned to enter system development between December 2009 and February 2016 and 7 prototyping-focused initiatives with the stated purpose of promoting innovation. The Department of Defense (DOD) has used prototyping on its major defense acquisition programs (MDAP) primarily to reduce technical risk, investigate integration challenges, validate designs, mature technologies, and refine performance requirements. 
Of the 22 programs GAO reviewed, 17 used prototyping before starting system development. For many of those programs, prototyping helped introduce realism into their business cases by providing information on technology maturity, the feasibility of the design concepts, potential costs, and the achievability of planned performance requirements. DOD has developed new initiatives that are outside of major defense acquisition programs to increase prototyping and further innovation. However, these initiatives face barriers, such as limited funding, a risk averse culture, and competing priorities. Literature on private sector innovation identifies key enablers for these types of efforts, such as developing an innovation strategy, aligning investments with innovation goals, and protecting funding for riskier projects. DOD has taken steps that are consistent with a few, but not all, of these enablers. For example, DOD does not have a department-wide strategy that communicates strategic goals and priorities and delineates roles and responsibilities to guide the prototyping initiatives. This could lead to unproductive or poorly coordinated investments later. DOD's initiatives also face competition for funding, particularly with acquisition programs. One strategy to address funding issues, called "strategic buckets," involves allocating resources to different types of projects based on an organization's strategy (see figure). DOD has not set strategic funding targets for its initiatives. Failing to do so could prevent them from gaining traction and put their long-term success at risk. GAO is making four recommendations, including that DOD develop a department-wide innovation strategy that includes prototyping and adopt a more strategic approach for funding prototyping efforts across DOD. DOD concurred with the recommendations and is currently working on this strategy. 
The Tongass National Forest covers about 16.8 million acres in southeast Alaska and is the largest national forest in the United States, equal to an area about the size of West Virginia. The U.S. Department of Agriculture’s Forest Service manages the Tongass for multiple uses, such as timber production, outdoor recreation, and fish and wildlife. The Forest Service’s Alaska Region, headquartered in Juneau, Alaska, carries out the management responsibilities. Because of its magnitude, the Tongass is divided into three administrative areas—Chatham, Stikine, and Ketchikan—each having an area office headed by a forest supervisor. Each area office has between two and four ranger districts, headed by a district ranger, to carry out daily operations. In the 1950s, the Forest Service awarded 50-year (long-term) contracts to the Ketchikan Pulp Company (KPC)—now a wholly owned subsidiary of the Louisiana Pacific Corporation—and the Alaska Pulp Corporation (APC)—a Japanese-owned firm—to harvest Tongass timber. As stipulated in their contracts, each company built a pulp mill to process the harvested timber—KPC near Ketchikan and APC in Sitka. In return, the Forest Service guaranteed a 50-year timber supply totaling about 13.3 billion board feet for both contracts. KPC’s contract expires in 2004. APC’s contract was to expire in 2011, but the Forest Service terminated it for breach of contract on April 14, 1994, because APC shut down its pulp mill in September 1993. The Forest Service also sells Tongass timber to companies other than APC and KPC. These companies, referred to as independent short-term contractors, purchase timber under contracts usually lasting 3 to 5 years. Since 1980, about 30 percent of all Tongass timber sales have been made under independent short-term contracts. Although some of these short-term contracts have been awarded to APC and KPC, most have been awarded to other contractors. 
Since the early 1980s, the Congress has expressed concern about the adverse impacts of the long-term contracts on competition for timber in southeast Alaska and on the Forest Service’s ability to effectively manage the Tongass. Part of the concern centered on the perceived competitive advantages to APC and KPC that resulted from differences between certain provisions of the long-term and short-term independent contracts. Another part of the concern centered on the relationship of the long-term contracts to the overall management of the Tongass National Forest and, more specifically, to issues related to other forest resources such as fish and wildlife. “. . . it is in the national interest to modify the contracts in order to assure that valuable public resources in the Tongass National Forest are protected and wisely managed. Modification of the long-term timber sale contracts will enhance the balanced use of resources on the forest and promote fair competition within the southeast Alaska timber industry.” Among other things, the act directed the Secretary of Agriculture to unilaterally revise the long-term contracts in order to reflect nine specific modifications (see app. I for a complete list). A number of these modifications called for making long-term contracts consistent with short-term contracts in such respects as timber sale planning, environmental assessment, and the administration of road credits. Other provisions of the act added new environmental requirements, such as leaving timber buffers at least 100 feet in width along designated streams. Four months after the act was passed, and pursuant to one of the act’s requirements, we issued a report to the Senate Committee on Energy and Natural Resources and the House Committee on Interior and Insular Affairs. 
That report described the Forest Service’s revisions to the long-term contracts for each of the nine modifications and discussed whether the changes reflected the modifications specified in section 301(c) of the act. We concluded that, with the exception of the changes dealing with the administration of road credits, the contract changes complied with the act’s requirements. We also concluded that more time would be needed to determine how these changes were actually carried out. You requested that we review the Forest Service’s implementation of certain contract modifications and other provisions of the Tongass Timber Reform Act. As agreed with your office, we focused this report mainly on two issues—road credits and timber buffers. More specifically, we determined (1) whether credits that timber harvesters receive for building harvest-related roads are used consistently between long-term and short-term timber sale contracts and (2) whether buffers of standing timber have been left along designated streams as the act requires and how the Forest Service monitors the buffers’ effectiveness. During our review, we also noted inconsistencies in the Forest Service’s documentation of the environmental significance of changes to timber harvest unit boundaries after environmental impact statements had been prepared. As agreed with your office, we included an analysis of this issue in this report. To address the first objective, we analyzed the use of road credits by short-term contractors in fiscal years 1990-93 and compared this usage with road credits used by long-term contractors. Using Forest Service accounting data, we also determined the extent to which the long-term contractors had applied road credits against the cost of purchasing Tongass timber from the inception of the long-term contracts through the end of fiscal year 1993.
To address the second objective, we reviewed and analyzed the results of buffer monitoring conducted in 1992 and 1993 by the Forest Service and the Alaska Department of Fish and Game, reviewed the monitoring reports for 1991-93 from the Forest Service’s Alaska Region, and visited the Craig and Thorne Bay Ranger Districts within the Tongass National Forest to observe stream buffers. We also reviewed changes made in buffer-related policies and procedures by the Forest Service’s Alaska Region in 1993-94. To address the third objective, we reviewed and compared the planned harvest unit boundary maps included in the environmental impact statements with maps of the actual harvest boundaries. On the basis of discussions with the Forest Service, the state of Alaska’s Department of Environmental Conservation, and a private conservation group, we selected 19 APC timber harvest units and 41 KPC harvest units where the boundary changes may have been significant enough to require further environmental analyses. Our sample constituted about 33 percent of the APC units and 18 percent of the KPC units in which harvests had occurred outside the original boundaries. To determine the adequacy of documentation, we reviewed and analyzed harvest unit files. More specifically, we determined whether the files contained evidence that the forest supervisor had determined that the proposed boundary changes would not significantly change the effects discussed in the environmental impact statement or that the change was significant and would require a supplement to the environmental impact statement. In conducting our work, we also obtained additional information and comments from the Forest Service, the state of Alaska, timber industry officials, and representatives of conservation groups.
Within the Forest Service, we performed work at the headquarters in Washington, D.C.; the Alaska Regional Office in Juneau, Alaska; the Ketchikan Area Office in Ketchikan, Alaska; the Thorne Bay Ranger District in Thorne Bay, Alaska; and the Craig Ranger District in Craig, Alaska. Our work with Forest Service officials was focused on the timber management and wildlife and fisheries staffs. In September 1993, while our review was under way, APC closed its pulp mill, charging that it was losing money because the prices it paid for timber as a result of the long-term contract modifications were too high. The Forest Service responded that closure of the pulp mill constituted a breach of contract, and in April 1994 the Forest Service terminated APC’s long-term contract. Although the APC contract is not active, we elected to retain certain data on APC in this report for illustrative purposes, and also because the courts have not yet ruled on the Forest Service’s action in terminating the contract. We conducted our review between September 1992 and October 1994 in accordance with generally accepted government auditing standards. As requested, we did not obtain official agency comments on a draft of this report. However, the information in this report was discussed with timber management officials, including the Director, Timber Management Staff, at Forest Service headquarters, the Director’s counterpart in the Alaska Region, and officials in the Department of Agriculture’s Office of General Counsel. As chapter 2 will discuss, these officials disagreed with our conclusions about purchaser road credits. In other respects, however, they agreed that the information presented was accurate. We have incorporated their suggested changes where appropriate. Purchasers of timber in the Tongass National Forest often pay for part of the timber they purchase with credits they have received for building harvest-related roads.
The Tongass Timber Reform Act required modifications to KPC’s and APC’s long-term contracts to ensure that credits KPC and APC received for building such roads would be provided in a manner consistent with procedures used in providing road credits to short-term contractors. This provision was aimed at eliminating KPC’s and APC’s competitive advantage of being able to maintain certain road credits for much longer periods of time than short-term contractors. As we pointed out in our March 1991 report, the Forest Service did not modify the APC and KPC contracts to address this provision of the act. Forest Service officials continue to believe this contract modification is not required. They maintain that consistency already exists because road credits are canceled at the end of all timber sale contracts, whether long-term or short-term. However, this approach leaves the long-term contractors’ competitive advantage intact and is not consistent with congressional direction that the contracts be modified. Harvesting timber often requires that the company harvesting the timber build roads to move logging equipment in and out of the harvest area and transport harvested logs. As compensation to the timber purchaser, the Forest Service gives road credits equal to the estimated cost of building the roads. Timber purchasers can use these credits instead of cash to pay for timber. Certain limitations apply to road credits used to pay for harvested timber. When the Forest Service prepares a timber sale, it establishes a base value for the timber. This base value must be paid in cash. For example, if a timber sale has a base value of $400,000 and is sold under competitive bid for $900,000, the purchaser must pay the base value ($400,000) in cash. The remaining $500,000 can be paid in whole or in part with road credits. 
Because timber purchasers cannot use road credits to pay the entire cost of the timber, situations may arise in which they cannot use all the road credits they have earned. To continue the example above, if the purchaser earned road credits worth $700,000, the purchaser could apply only $500,000 in credits against the cost of the timber, because the difference between the purchase price and the base value is only $500,000. Those road credits that can be applied against the cost of timber are called “effective”; those road credits left over are called “ineffective.” In this example, the timber purchaser has $500,000 of effective credits and $200,000 of ineffective credits. Under Forest Service contracts, a timber purchaser retains ineffective road credits until the expiration of the timber sale contract in which the credits are earned. Although such credits may appear valueless, for long-term contractors they can become effective—and therefore acquire value—if the timber’s purchase price is adjusted upwards to reflect higher current market values for timber. Again using the earlier example, a subsequent adjustment in the purchase price from the original $900,000 to $1 million would also mean that $100,000 of ineffective road credits would be made effective. This additional amount could be used to offset the increased purchase price. APC and KPC have made extensive use of road credits as a means of paying for timber. Each used road credits to pay for about three-fourths of the value of timber harvested under its long-term contract. From the inception of the long-term contracts through the end of fiscal year 1993, the value of timber sold to the two companies was about $268 million (in constant 1993 dollars). The two companies used road credits to pay for 75 percent, or $201 million, of the total price of timber. KPC used road credits to pay for 73 percent of its timber; APC used road credits to pay for 79 percent. (See table 2.1.)
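The split between effective and ineffective credits, and the way a later price adjustment converts some ineffective credits into effective ones, can be sketched with the figures from the example above (the function name is illustrative, not Forest Service terminology):

```python
def split_road_credits(purchase_price, base_value, credits_earned):
    """Split earned road credits into effective and ineffective amounts.

    The base value must be paid in cash, so credits can offset at most
    the difference between the purchase price and the base value.
    """
    usable = purchase_price - base_value      # maximum amount payable with credits
    effective = min(credits_earned, usable)   # credits that can offset the timber cost
    ineffective = credits_earned - effective  # leftover credits
    return effective, ineffective

# Example from the text: a $900,000 sale with a $400,000 base value and
# $700,000 in earned road credits.
effective, ineffective = split_road_credits(900_000, 400_000, 700_000)
print(effective, ineffective)   # $500,000 effective, $200,000 ineffective

# An upward price adjustment to $1 million converts $100,000 of the
# ineffective credits into effective credits.
effective, ineffective = split_road_credits(1_000_000, 400_000, 700_000)
print(effective, ineffective)   # $600,000 effective, $100,000 ineffective
```

Because the base value must always be paid in cash, usable credits are capped by the bid premium over the base value, which is why a higher adjusted price frees up more credits.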
The Forest Service did not revise the provision on the use of road credits in its long-term contracts to make it similar to the provision in its short-term contracts, as required by the reform act. Because this modification was not made, APC and KPC have been able to use ineffective road credits from timber offering to timber offering throughout the remaining life of their long-term contracts. By contrast, ineffective road credits for short-term contracts are canceled at the end of the contracts. We pointed out this inconsistency in our March 1991 report and recommended that action be taken. The Forest Service, however, has not acted on our recommendation. The Forest Service maintained—and continues to maintain—that no modification was needed to make the treatment of ineffective road credits consistent between long-term and short-term contracts. The Forest Service believes that the treatment is consistent, in that ineffective road credits are terminated at the end of either type of contract. It maintains that the amount of time the long-term contractors could hold the credits is not relevant. Our concern about the Forest Service’s argument is that although ineffective credits are canceled at the end of both types of contracts, long-term contractors continue to hold a competitive advantage. Short-term contractors can use ineffective road credits only during the length of their contracts, which are considerably shorter than the 50-year long-term contracts—short-term contracts usually last 3 to 5 years. The long-term contractors are able to keep these credits available for possible use over a longer period by transferring them from timber offering to timber offering. Their competitive advantage is that they have greater ability to retain and use ineffective credits to offset timber payments if the price of timber rises during the life of their contracts.
In our view, the language of the Tongass Timber Reform Act, as well as its legislative history, makes it clear that the Congress intended the Forest Service to make changes in road credits so that they would be treated substantially the same under both long- and short-term contracts. Comparisons between the two types of contracts show that this competitive advantage can be substantial. For example, as of March 1993, APC and KPC held $5.4 million in ineffective road credits; four short-term contractors held $3 million in ineffective road credits. The contracts held by the short-term contractors are scheduled to expire in 1995 and 1996, at which time any remaining ineffective credits will be canceled. By contrast, KPC retains the ability to convert or transfer its ineffective credits between offerings until the year 2004. APC would have been able to carry forward its ineffective credits to 2011 had its contract not been terminated. The following are more specific illustrations of how KPC has been able to use ineffective road credits in ways that short-term timber contract holders cannot: In March 1992, KPC transferred $7,510,248 in road credits it had received from five previous timber offerings back to the long-term contract’s main account for use in subsequent offerings. Of this amount, only $26,086 was effective road credits. Had the credits been treated consistently with those of short-term contracts, KPC would not have been able to transfer the $7,484,162 in ineffective credits. In January 1993, KPC paid cash in the amount of $407,747 instead of using road credits for timber that it had harvested. Had this been a short-term contract, the financial transaction would have been closed and the credits could not have been used. 
However, because it was under a long-term contract, KPC was able to transfer ineffective road credits from other offerings to this one, replace the cash with ineffective credits, and thus receive a refund of the cash it paid above the base rate. In our March 1991 report, we noted that the Forest Service did not modify the long-term timber sales contracts to comply with the requirements of the reform act that road credits be treated substantially the same under both long- and short-term contracts. We pointed out that the language of the Tongass Timber Reform Act, as well as its legislative history, makes it clear that the Congress intended the Forest Service to make changes in road credits so that they would be treated substantially the same under both long- and short-term contracts. In that report, we recommended that the Forest Service revise the contracts accordingly. We continue to believe that ineffective road credits resulting from each timber offering should be canceled under KPC’s long-term contract after each timber offering is completed. Unless the Forest Service revises KPC’s long-term contract to bring this change about, KPC will continue to have a competitive advantage over short-term timber contract holders. Our conclusions would also be applicable to APC if the Forest Service had not terminated APC’s long-term contract or if for some reason APC’s contract is reinstated in the future. In its response to our earlier report and in its discussions on a draft of this report, the Forest Service has continued to maintain that its current policy complies with the act and intends to take no action to modify the provision for road credits in long-term contracts. The Forest Service maintains that the treatment of road credits is consistent, in that ineffective road credits are terminated at the end of either type of contract. They maintain that the length of time that the long-term contractors can hold the road credits is not relevant. 
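A minimal sketch of the accounting difference at issue, using the March 1992 KPC transfer figures described above (the class and method names are illustrative, not Forest Service accounting):

```python
class TimberContract:
    """Sketch of how ineffective road credits are treated under the two
    contract types; a simplification for illustration only."""

    def __init__(self, long_term):
        self.long_term = long_term
        self.ineffective_credits = 0  # credits carried in the main account

    def record_offering(self, ineffective):
        if self.long_term:
            # Long-term contracts can transfer ineffective credits back to
            # the contract's main account for use against later offerings.
            self.ineffective_credits += ineffective
        # Short-term contracts: credits stay tied to the offering and
        # cannot be carried forward between offerings.

    def close_contract(self):
        # Under either contract type, any remaining ineffective credits
        # are canceled when the contract itself ends.
        self.ineffective_credits = 0

# KPC's March 1992 transfer: $7,510,248 in credits, of which only $26,086
# was effective, leaving $7,484,162 ineffective to carry forward.
kpc = TimberContract(long_term=True)
kpc.record_offering(7_510_248 - 26_086)
print(kpc.ineffective_credits)    # 7484162 available for later offerings

short = TimberContract(long_term=False)
short.record_offering(200_000)
print(short.ineffective_credits)  # 0 -- nothing carries between offerings
```

The competitive advantage discussed in the text corresponds to the `long_term` branch: the carried balance remains available until `close_contract`, which for KPC would not occur until 2004.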
Our concern about the Forest Service’s argument is that although ineffective credits are canceled at the end of both types of contracts, long-term contractors continue to hold a competitive advantage. Their competitive advantage is that they have greater ability to retain and use ineffective credits to offset timber payments if the price of timber rises during the life of their contracts. In our view, the language of the Tongass Timber Reform Act, as well as its legislative history, makes it clear that the Congress intended the Forest Service to make changes in road credits so that they would be treated substantially the same under both long- and short-term contracts. In light of the Forest Service’s position that it needs to take no action to comply with the Tongass Timber Reform Act’s provision on road credits, the Congress may wish to consider directing the Secretary of Agriculture to modify the Ketchikan Pulp contract so that ineffective road credits generated during a timber offering would be canceled after the timber offering is completed. The Tongass Timber Reform Act directs the Forest Service to protect fish and wildlife habitat in streamside, or “riparian,” areas of harvest units by designating 100-foot buffers of timber to be left standing along the sides of many streams in timber harvest areas. During inspections of these buffers in 1992 and 1993, however, both the Forest Service and the state of Alaska found buffers that, at some point along their length, did not meet the minimum 100-foot width requirement. The Forest Service has since taken sufficient steps to ensure greater compliance with this requirement. The Forest Service’s management plan for the Tongass National Forest, as well as its agreement with the state of Alaska for managing water quality, calls for monitoring the effectiveness of buffers.
We found that before 1994, the Forest Service’s monitoring efforts had been limited in scope and often did not include measurements against important criteria that could help determine how effectively buffers were working. This situation was partly the result of the lack of specific monitoring guidance from the Alaska Regional Office. In fiscal year 1994, the Forest Service implemented a new program to monitor buffers’ effectiveness that, among other things, provides clearer direction for the types of information to be gathered. The reform act requires that timber harvesters leave 100-foot buffers of standing timber along two classes of streams in the Tongass National Forest—class I streams and class II streams that flow directly into class I streams: Class I streams are perennial or intermittent streams that (1) are direct sources of domestic-use water; (2) provide spawning, rearing, or migration habitat for migratory and resident fish; or (3) have a major effect on the water quality of another class I stream. Class II streams that flow directly into a class I stream are perennial or intermittent streams that (1) provide spawning and rearing habitat for resident fish or (2) have moderate influence on the water quality of other class I or class II streams. Such buffers are designed to protect riparian areas, which are important in such ways as providing fish and wildlife habitat, protecting stream channels and stream banks, and stabilizing floodplains. Whenever the stream lies within the harvest area, the act requires a 100-foot buffer on each side. Whenever the stream forms a boundary of the harvest area, the buffer must be at least 100 feet wide on the side where timber is to be harvested. The act required buffers for those timber harvest units from which timber was either sold or released for harvest on or after March 1, 1990. The Forest Service took two main steps to implement this provision of the act. 
First, it modified APC’s and KPC’s long-term contracts to require that buffers of at least 100 feet be established along class I and class II streams. Second, the Forest Service modified its regional Soil and Water Conservation Handbook in February 1991 to incorporate changes resulting from the act. The handbook now identifies the management practices needed to maintain and protect water quality and fisheries habitat and to minimize adverse effects on riparian areas from logging and other land-disturbing management activities. The handbook’s changes reinforce the importance of the buffers by calling for special attention to land and vegetation for 100 feet from the edges of all streams, lakes, and other bodies of water. Under an agreement with the Alaska Department of Environmental Conservation, the Forest Service is to monitor how well the buffers have been implemented. Among other things, the Forest Service is to determine whether established buffers comply with applicable standards and guidelines, including checking whether the buffers are at least 100 feet wide. In addition to the Forest Service’s monitoring, the Alaska Departments of Fish and Game and Environmental Conservation monitor buffer widths. On-site monitoring inspections during 1992 and 1993 by the Forest Service and the Department of Fish and Game of portions of KPC’s and APC’s buffers showed instances in which the 100-foot minimum requirement was not met. More specifically: In September 1992, the Department of Fish and Game reported that during an inspection of harvest units on northern Prince of Wales Island, at least 16 of the 20 buffer measurements taken did not meet the 100-foot requirement. The narrowest portions of the buffers measured were about 50 feet wide, and portions of 11 buffers were less than 75 feet wide. 
In October 1992, Thorne Bay Ranger District staff made 132 buffer measurements and found that portions of 38 buffers—almost 29 percent—were less than 100 feet wide; most were narrower by 10 to 20 feet. In July 1993, an interdisciplinary team from the Sitka Ranger District reviewed more than 120 timber harvest units and found that portions of the buffers in more than 100 of the units were less than 100 feet wide. However, these buffers were usually only narrower by a few feet. The inspectors noted that such factors as uneven terrain, dense vegetation, and meandering, multichannel stream courses can lead to errors in designating buffers and adhering to minimum widths across the many miles of riparian areas affected by timber harvests. Changes have been made to address the problems identified in the inspections of buffer widths by the Forest Service and the Alaska Department of Fish and Game. Each of the three area offices of the Tongass National Forest—Ketchikan, Stikine, and Chatham—recognized the need to take corrective action to attain a higher degree of conformity with the requirement and have taken actions to ensure greater compliance. The Ketchikan area office, where the greatest concentration of buffers exists, provides an example. In March 1993, in response to a December 1992 directive from the area office, the area’s three district rangers reported that corrective actions had either been taken or would be taken in the near future. For example, the rangers said that a certification statement on buffer widths had been added to the planning documents for all harvest units, cloth tapes and laser guns were being used to provide precise measurements of buffer widths, and district personnel received training on buffer measurements and other aspects of harvest unit layout. Similar steps have been taken or are under way in the Stikine and Chatham areas. 
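The width checks described in the inspections above reduce to a simple screening computation. The sketch below uses a hypothetical set of measurements shaped like the Thorne Bay findings (132 measurements, 38 of them 10 to 20 feet short of the 100-foot minimum); the function name and sample widths are illustrative:

```python
def buffer_compliance(widths_ft, minimum_ft=100):
    """Count buffer-width measurements below the minimum and report the
    share of all measurements that fall short."""
    below = [w for w in widths_ft if w < minimum_ft]
    return len(below), len(below) / len(widths_ft)

# Hypothetical sample resembling the October 1992 Thorne Bay results.
widths = [85] * 38 + [110] * 94   # 132 measurements in total
count, share = buffer_compliance(widths)
print(count, round(share * 100))  # 38 measurements short, about 29 percent
```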
We believe the steps taken at the area and district levels will help ensure that buffers with the appropriate widths are established. The Tongass Land Management Plan and the Forest Service’s agreement with the Alaska Department of Environmental Conservation specify that the Forest Service is to monitor the effectiveness of its projects, activities, and practices. As part of its monitoring effort, the Forest Service is to determine if buffers have been effective in minimizing the adverse effects that logging and other land-disturbing activities could have on riparian areas. We found that before 1994, the Forest Service did not have a regional program to monitor the buffers’ effectiveness. Each of the area offices had its own monitoring procedures. However, these procedures to monitor buffer effectiveness were limited in scope and often did not include measurements against important criteria (such as water quality) needed to determine how effectively buffers were working. For example, within the Stikine area, monitoring of the buffers’ effectiveness consisted of visual observations of the extent to which the buffers contained timber that had been blown down by wind. While these observations yielded insights into the relative lack of effectiveness of buffers with blown-down timber, the focus on this single characteristic left many questions about effectiveness unaddressed. Similarly, the Ketchikan area limited its monitoring efforts to steep, deeply cut drainages. Again, the efforts yielded useful information, but the effectiveness of buffers that did not fall into this one limited category went largely unaddressed. According to Stikine area officials, the lack of sufficient funds, staff, and monitoring objectives was the primary reason why monitoring of the buffers’ effectiveness had been limited.
In addition, Ketchikan area officials told us that more specific direction was needed from the Alaska Regional Office identifying the kinds of information needed to monitor buffers’ effectiveness. Alaska Regional Office officials said that they initiated a monitoring project in 1992 that would lead to establishing a regionwide program to monitor buffers’ effectiveness. The project reviewed the condition of buffers, evaluated their effectiveness at maintaining riparian habitat and water quality, and recommended improvements to buffers’ design. The project identified six types of information for use in assessing buffers’ effectiveness, including measuring the volume of large woody debris in a stream and determining the stability of stream banks. According to the regional office monitoring coordinator, the project was tested at eight sites in the Chatham area in 1993. For example, in June 1993 the Forest Service and the Alaska Department of Environmental Conservation jointly monitored the effectiveness of two buffers along a class II stream. The environmental specialist with the Alaska Department of Environmental Conservation told us preliminary indications showed that the two buffers were meeting expectations in being able to protect riparian areas. The regional office monitoring coordinator also told us that the 1994 buffer monitoring plans for each of the area offices included the types of information identified as contributing to the evaluation of buffers’ effectiveness in the eight-site project. Currently, each of the three areas is also participating in a multiyear, forestwide study of the stability and effectiveness of stream buffers. According to the regional monitoring coordinator, the interim results of the study will be available in the spring of 1995. The Forest Service has taken steps to improve both monitoring the width of buffers and evaluating their effectiveness. 
These steps should help ensure that buffers more consistently meet minimum width requirements and that their overall effectiveness is assessed more systematically. Because the buffer requirement is relatively new and because the effectiveness of buffers has been studied only to a limited degree, more time will be needed to determine how well they are working to help protect fish and wildlife habitat in timber harvest areas. If the boundary of a timber harvest unit is changed after the environmental impact statement (EIS) for the area has already been prepared, the Forest Service’s policy requires that forest supervisors determine and document whether the changes are environmentally significant enough to require additional environmental study. Forest supervisors were not, in all cases, documenting the environmental significance of the harvest units’ boundary changes or the need for additional analysis beyond what had been described in the existing EIS. This was particularly the case for KPC’s harvest units. We examined 41 instances in which boundary changes had occurred in areas harvested by KPC and found that in 39 instances the documentation was not adequate. In 17 instances, there was no documentation at all, and in 22 instances the documentation had not been reviewed according to the Forest Service’s policy. We also examined 19 instances in which boundary changes had occurred in areas harvested by APC and found that adequate documentation was present in 18 of them. As a result, the Forest Service had no assurance that the environmental consequences of the boundary changes were analyzed. During our review, in October 1993 the current forest supervisor responsible for KPC’s harvest units sent instructions to district rangers detailing a process for assessing boundary changes and specifically stated that he would document the environmental significance of any changes and the need for any additional environmental analysis. 
Under the Forest Service’s policy and in compliance with the National Environmental Policy Act, the Forest Service is required to assess the environmental impacts of proposed timber harvests and prepare an EIS. Among other things, an EIS documents the location and design of the planned timber harvest units within the area covered by the timber offering and identifies the volume of timber to be cut. For a number of reasons, the boundaries of timber harvest units analyzed in the EIS may subsequently be revised. At the time the EIS is developed, precise information about the volume of economically harvestable timber, unique habitat for endangered species, or other specific characteristics of the land may not be known with complete accuracy. For example, more detailed on-site review could show that the planned boundaries contain less harvestable timber than originally projected or that additional eagle nesting areas or streams requiring buffer protection might be found. To deal with such circumstances and still provide the needed volume of harvestable timber, boundary adjustments may be needed. However, by this time the EIS may have been developed, made available for comment, and approved. The Forest Service’s policy contains several requirements for assessing and documenting the environmental effects of boundary changes made after environmental review has already been completed. The EIS specifies that for any proposed action (such as a boundary change) that deviates from a planned activity, the forest supervisor is to document the environmental significance of the proposed action. In doing so, if the forest supervisor determines that the impacts of the change do not deviate significantly from the impacts discussed in the EIS, the timber sale can proceed without further environmental study. However, if the forest supervisor determines that the change is significant, a supplemental EIS must be prepared. 
Contrary to the Forest Service’s policy, forest supervisors had not in all cases documented the environmental significance of changes to harvest unit boundaries or the need for additional environmental analysis—particularly for KPC’s harvest units. This situation occurred primarily because the forest supervisor inappropriately delegated his authority to district rangers to determine if boundary changes were significant and did not require the district rangers to provide documentation if they determined that the change was not significant. The Forest Service’s policy does not allow this authority to be delegated to district rangers and in all cases requires documentation of the environmental significance. We reviewed the files for 60 harvest units—19 for APC and 41 for KPC—that had boundary changes after the EIS had been prepared. These units represented about 33 percent of APC’s units and 18 percent of KPC’s units in which harvests had occurred outside the original boundaries. Adequate documentation was present in 18 of the 19 files for APC’s units but in only 2 of the 41 files for KPC’s units. More specifically, for KPC’s units, 16 units had no documentation at all of the environmental significance of the boundary changes; 1 unit had adequate documentation of the environmental significance of one boundary change but no documentation for a second boundary change; and 22 units had documentation prepared by someone other than the forest supervisor—such as a district ranger—with no indication that the forest supervisor had reviewed the results. Guidance from the region places the responsibility for such determinations with the forest supervisor. Documentation of environmental impacts is important because it clearly demonstrates that the impacts were considered. However, the lack of documentation goes beyond simply being out of compliance with the Forest Service’s policy.
When no documentation was present in the file, the Forest Service had no assurance that the environmental significance of the boundary changes had actually been analyzed. While the absence of a forest supervisor’s review of documentation may seem of less concern than the absence of documentation altogether, the absence of review has been a concern that the Forest Service has tried to correct for some time. In a November 1990 review, personnel in the Alaska Region noted that the forest supervisor responsible for KPC’s harvest units at that time had inappropriately delegated to others the authority to make determinations about the environmental significance of boundary changes. Contrary to the Forest Service’s policy, the delegation of authority did not require documentation if it was determined that the boundary change was not significant. The Alaska Region personnel recommended that the delegation of authority be withdrawn. When those personnel followed up in February 1992, they noted that the practice had apparently stopped since the forest supervisor had verbally withdrawn his delegation of authority. However, 9 of the 22 instances we examined in which the forest supervisor’s review was lacking occurred after February 1992. We discussed our findings with the current forest supervisor and he agreed that there was a need for better documentation of boundary changes and their significance. In October 1993, the forest supervisor sent a letter to district rangers setting forth a detailed five-step process for assessing boundary changes and specifically stating that the forest supervisor will determine the significance of any changes and the action necessary. The Forest Service needs to ensure that the problems of missing or inadequate documentation of the environmental significance of boundary changes to timber harvest units are addressed. In recent years, although the problem has been noted, progress in correcting it has been slow. 
Improved compliance is important in providing assurance that environmental concerns associated with timber harvesting activities under long-term contracts have been fully addressed. Accordingly, we believe the Alaska Regional Office needs to continue its oversight of forest supervisors’ compliance with the documentation requirements for changes to harvest unit boundaries that are made after the EIS has been issued. To ensure full consideration and disclosure of the environmental impacts of boundary changes to harvest units, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to require Alaska Regional Office officials to periodically check to ensure that forest supervisors are properly documenting the environmental significance of boundary changes to timber harvest units made after EIS’s have been issued in the Tongass National Forest. We discussed the facts and our conclusions with the Forest Service officials responsible for timber management activities at headquarters and the Alaska Regional Office. These officials generally agreed with our facts and conclusions concerning documenting changes to timber harvest units and provided some technical clarifications that we incorporated, as appropriate.

Pursuant to a congressional request, GAO reviewed the Forest Service's implementation of certain unilateral modifications to long-term contracts in Alaska and other requirements of the Tongass Timber Reform Act, focusing on whether: (1) road credits are used consistently between long-term contracts and short-term contracts; (2) buffers of standing timber have been left along designated streams as required; and (3) the Forest Service is requiring full documentation of environmental effects whenever changes are made to timber harvest area boundaries.
GAO found that: (1) the Forest Service believes it treats road credits consistently across all contracts, since unused road credits are cancelled at the end of all timber sales contracts; (2) the long-term contractors' ability to carry unused road credits forward for longer periods than short-term contractors gives them an unfair competitive advantage; (3) some streamside buffers did not meet the 100-foot minimum width during the first years immediately following the act's passage, but the Forest Service has since taken steps to enforce this requirement; (4) in 1994, the Forest Service issued guidance and initiated a new monitoring program to ensure the buffers' effectiveness; (5) the Forest Service often does not document the environmental effects of timber harvest boundary changes; (6) in some instances, the forest supervisor has inappropriately delegated his documenting authority to district rangers and waived documentation where he believed boundary changes were insignificant; and (7) the forest supervisor has since withdrawn the authority delegation and established a detailed process for assessing boundary changes. |
The Coast Guard is a multi-mission, maritime military service within the Department of Homeland Security (DHS). The Coast Guard’s responsibilities fall into two general categories—those related to homeland security missions, such as port security, vessel escorts, security inspections, and defense readiness; and those related to non-homeland security missions, such as search and rescue, environmental protection (including oil spill response), marine safety, and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft and, through its Deepwater Program, is currently modernizing or replacing those assets. At the start of Deepwater, the Coast Guard chose to use a system-of-systems acquisition strategy that would replace its assets with a single, integrated package of aircraft, vessels, and communications systems through Integrated Coast Guard Systems (ICGS), a system integrator that was responsible for designing, constructing, deploying, supporting and integrating the assets to meet Coast Guard requirements. The decision to use a system integrator was driven in part because of the Coast Guard’s lack of expertise in managing and executing an acquisition of this magnitude. In a series of reports since 2001, we have noted the risks inherent in the systems integrator approach and have made a number of recommendations intended to improve the Coast Guard’s management and oversight. In particular, we raised concerns about the agency’s ability to keep costs under control in future program years by ensuring adequate competition for Deepwater assets and pointed to the need for better oversight and management of the system integrator. 
We, as well as the DHS Inspector General and others, have also noted problems in specific acquisition efforts, notably the National Security Cutter and the 110-Foot Patrol Boat Modernization, which the Coast Guard Commandant permanently halted in November 2006 because of operational and safety concerns. Over the past year, the Coast Guard’s Deepwater Program has been in the midst of a major shift, from heavy reliance on a system integrator to greater government control and a greater government role in decision-making. Coast Guard officials acknowledged that the initial approach gave too much control to the contractor. The Coast Guard has made a number of significant program decisions and taken actions, including: an increase in the Coast Guard’s management role through a reorganization of its acquisition directorate; a restructured approach to the review and approval of individual acquisitions; planned improvements to the use and quality of information on program performance; and initiatives to develop a workforce with the requisite acquisition and program management skills. Although many of the changes the Coast Guard has undertaken are positive and may assist the program in meeting its goals, these initiatives are in their preliminary stages, with many processes and procedures yet to be implemented. Maintaining momentum will be important in improving the Deepwater Program; we will continue to evaluate the Coast Guard’s progress in all of these areas as part of our ongoing work. In July 2007, the Coast Guard began consolidating acquisition responsibilities into a single Acquisition Directorate, known as CG-9, and is making efforts to standardize operations within this directorate. Previously, Deepwater acquisitions were managed separately from other Coast Guard acquisitions by the Deepwater Program Executive Office.
The Coast Guard’s goal for the reorganization is to provide greater consistency in the Coast Guard’s oversight and acquisition approach by concentrating acquisition activities under a single official and allowing greater leveraging of knowledge and resources across programs. Figure 1 depicts the changes. As part of asserting a larger management role in Deepwater, the Coast Guard has taken additional steps, such as the following. Integrated product teams—a key program management tool—are in the process of being restructured and re-chartered. In the past, the teams were led and managed by the contractor, while government team members acted as “customer” representatives. Now, the teams are led by Coast Guard personnel. The teams are responsible for discussing options for problem solving relating to cost, schedule, and performance objectives. For example, one team oversees management of the National Security Cutter project. The Coast Guard has formally established a technical authority for engineering to oversee issues related to Deepwater; Coast Guard officials told us a similar authority for C4ISR is pending. The role of the technical authority in program acquisition is to review, approve, and monitor technical standards and ensure that assets meet these standards, among other duties. Previously, the contractor held some decision-making power and the Coast Guard held an advisory role. In some cases this arrangement led to poor outcomes. For example, Coast Guard officials told us their engineering experts had raised concerns during the National Security Cutter’s design phase about its ability to meet service life requirements and recommended design changes, but the recommendations were not acted on. If the recommendations had been heeded, changes to the ship’s design could have been made earlier and some additional costs may have been avoided. Coast Guard project managers, who manage individual Deepwater assets, now have increased responsibility and accountability for acquisition outcomes.
Previously, the project managers’ role was less significant. For example, the contractor, not the project manager, provided Coast Guard management with quarterly updates on the status of assets. Now, project manager charters for individual assets outline project managers’ responsibilities and authorities, including ensuring projects are on time and within budget. The Coast Guard is moving away from the ICGS contract and the system-of-systems model to a more traditional acquisition strategy, under which the Coast Guard will manage the acquisition of each asset separately. Agency officials told us that they are in the process of re-evaluating their long-term relationship with ICGS, including an assessment of the value of continuing this contractual relationship. The government is under no further obligation to acquire services under this contract, as the minimum specified quantity of services was met during the 5-year base term. However, Coast Guard officials told us they may continue to issue task orders under the contract for specific efforts, such as logistics, or for assets that are already well under way. The Coast Guard recently demonstrated this new approach by holding its own competition for the Fast Response Cutter-B (FRC-B), in lieu of obtaining the asset through the ICGS contract. The Coast Guard issued a request for proposals in June 2007 for the design, construction, and delivery of a modified commercially available patrol boat. Coast Guard officials told us they are currently evaluating proposals and expect to award the contract by the third quarter of fiscal year 2008, with the lead cutter expected for delivery in 2010. The Coast Guard plans to hold other competitions outside of the ICGS contract for additional assets in the future, including the Offshore Patrol Cutter. The Coast Guard’s transition to an asset-by-asset acquisition strategy is enabling increased government visibility and control over its acquisitions.
Cost and schedule information is now captured at the individual asset level rather than at the overall, system-of-systems program level. For example, while cost and schedule breaches in the past were to be reported at the Deepwater system-of-systems level only, the Coast Guard is now reporting breaches by asset, as occurred recently with the cost increase on the C-130J long-range surveillance aircraft and the first National Security Cutter. In implementing this new acquisition approach, the Coast Guard also plans to start following the processes set forth in its Major Systems Acquisition Manual (MSAM), which include acquisition milestones, documentation requirements, and cost estimates for individual assets. Previously, the Coast Guard was authorized to deviate from the MSAM requirements for the Deepwater Program. Reviews were required on a schedule-driven basis—planned quarterly or annually—as opposed to the more disciplined, event-driven process outlined in the MSAM. In addition, the Coast Guard scheduled key decision points only occasionally and focused primarily on the Deepwater Program as a whole rather than on individual assets. Coast Guard officials told us that little, if any, documentation of key decisions was maintained. The MSAM process requires reports on specific elements of program knowledge at milestones in the acquisition process, supplemented by annual briefings. For example, reports on the maturity of technology and estimates of an asset’s life cycle cost are required at Milestone 2, before an asset enters the capability development and demonstration phase. Figure 2 depicts the key phases and milestones of the MSAM process. Although the Coast Guard’s decision to follow a more formalized and asset-driven acquisition process is a positive step, the Coast Guard faces challenges in implementing the process.
The transition to the MSAM process is estimated to take at least 2 years to complete, as the Coast Guard is determining where Deepwater assets are in the process and is having to create basic documentation that was not required under the prior process—such as statements of requirements and technology assessments—to bring assets into compliance. For example, the National Security Cutter is in the production phase, but the Coast Guard is reviewing what documentation should be completed for milestones that already passed. Coast Guard officials also acknowledged the hurdles they face in bringing C4ISR efforts under the MSAM process, as this asset may require a broader Deepwater-level approach to tie individual assets together. GAO’s work on best practices for major acquisitions has demonstrated that a knowledge-based approach to decision making, where specific knowledge is gathered and measured against standards at key points in the acquisition process to inform decisions about the path forward, can significantly improve program outcomes. While the MSAM process contains some characteristics of a knowledge-based approach, there are key differences that could affect acquisition outcomes. For example, the Milestone 2 decision to approve low-rate initial production precedes the majority of the design activities in the capability development and demonstration phase. We will continue to evaluate the Coast Guard’s process as compared to established commercial best practices in our ongoing work. The MSAM requires, as part of the acquisition approval process, the Coast Guard to report to DHS on all major program decisions beginning with the start of an acquisition program. Coast Guard and DHS officials told us that the processes and procedures for coordinating acquisitions with DHS’s Investment Review Board, which is tasked with reviewing major acquisition programs, are currently undergoing revision. 
According to the Coast Guard, DHS approval of acquisition decisions is not technically necessary because the department delegated oversight responsibility for the Deepwater Program to the Coast Guard in 2003. Recently, however, the Coast Guard has increased communication and coordination through good will and informal procedures such as personal working relationships. We are currently conducting work on DHS’s investment review process for this committee and will release our findings later this year. The proper functioning of an acquisition organization and the viability of the decisions made through its acquisition process are only as good as the information it receives. In the past, much of the Deepwater Program information was collected on an ad-hoc basis and focused more at the Deepwater Program level, as opposed to the individual asset level. The Coast Guard is now putting processes in place to improve the use and quality of its information on program performance through a number of different efforts. The Coast Guard recently developed Quarterly Project Reports, a compilation of cost and schedule information that summarizes the status of each acquisition for reporting through the Coast Guard chain of command as well as to DHS and the Congress. The Coast Guard also plans to analyze program information using the “probability of project success” tool. Coast Guard acquisition officials told us they will use this tool to grade each asset on 19 different elements, including acquisition process compliance and progress and earned value management data, to assess the risk of assets failing to meet their goals. This information is intended to enable senior Coast Guard management officials to review project risks and status at a glance. At this time, the Coast Guard has completed reports on ten Deepwater assets. The Coast Guard is working to improve the quality and reporting of earned value management data. 
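The earned value work described here rests on a few standard indicators. As a rough illustration (the function and the dollar figures below are hypothetical, not the Coast Guard's actual analysis; the formulas are the standard earned value management definitions), the sketch computes the basic metrics an analyst would derive from the three inputs contractors report:

```python
def evm_metrics(pv, ev, ac):
    """Compute standard earned value management (EVM) indicators.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost (actual cost of work performed)
    """
    return {
        "cost_variance": ev - ac,      # negative means over cost
        "schedule_variance": ev - pv,  # negative means behind schedule
        "cpi": ev / ac,                # cost performance index
        "spi": ev / pv,                # schedule performance index
    }

# Hypothetical asset status, in millions: $80M of work earned against a
# $100M plan, at an actual cost of $90M.
m = evm_metrics(pv=100.0, ev=80.0, ac=90.0)
print(m["cpi"])  # about 0.889; below 1.0 signals a cost overrun
print(m["spi"])  # 0.8; below 1.0 signals schedule slippage
```

An analyst reviewing contractor-provided data, as described above, would watch for CPI or SPI falling below 1.0, which flag cost overrun and schedule slip, respectively.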
For example, officials have developed standard operating procedures for earned value reporting and analysis to create consistency among Deepwater assets. As part of these procedures, Coast Guard analysts have begun to review the earned value management data provided by contractors and provide the results to project managers. The Coast Guard is also exploring how it can use the Defense Contract Management Agency to validate contractor earned value systems. Certification would provide the Coast Guard greater assurance that contractor data are accurate. The Coast Guard has acknowledged the need for a workforce that can effectively manage its major acquisitions—including Deepwater—a challenge common within the federal government. With the July 2007 creation of the Acquisition Directorate, the Coast Guard has taken steps to develop a workforce with the requisite acquisition and program management skills, while trying to reduce reliance on support contractors. The Coast Guard’s 2008 acquisition human capital strategic plan sets forth a number of acquisition workforce challenges, including a shortage of civilian acquisition staff, lack of an acquisition career path for Coast Guard military personnel, difficulty in tracking acquisition certifications, and absence of policy guidance on the use of support contractors in the acquisition process. To address these challenges, the Coast Guard has begun initiatives that leverage expertise and best practices from other organizations, including use of GAO’s Framework for Assessing the Acquisition Function at Federal Agencies. 
These initiatives include establishing an Office of Acquisition Workforce Management; contracting for development of a strategic tool to forecast acquisition workforce needs in terms of numbers and skill sets; utilizing hiring flexibilities such as reemployed annuitants, relocation bonuses, and direct hire authority; and developing certification requirements for the entire Acquisition Directorate (not just for project managers) to help develop what it calls “bench strength” in the acquisition workforce. Some of these initiatives have begun to see concrete results; for example, key Acquisition Directorate leadership positions have been filled and, through use of hiring flexibilities, over 100 vacant civilian acquisition positions have been filled, 40 of them using direct hire authority. However, as Table 1 shows, the Acquisition Directorate still has not fully staffed its billets, including a range of positions—such as contract specialists, financial analysts, systems engineers, and program management staff—that the directorate has designated as “hard-to-fill.” The Acquisition Directorate has also identified a need for about 189 contractor billets for fiscal year 2008. These support contractors fill a range of positions, such as contracting support and logisticians. Despite the Coast Guard’s stated goal of reducing its reliance on support contractors, acquisition management officials told us that use of contractors will likely continue for the foreseeable future and is contingent upon the Coast Guard’s ability to build its core staff. Other initiatives are still in the early stages, and it is too soon to evaluate their outcomes. For example, the Coast Guard is developing a workforce forecasting tool, which it plans to use to answer key questions about its strategic acquisition workforce needs. This tool requires significant up-front data collection and management training efforts to be used effectively.
The Coast Guard is also evaluating a similar tool developed by the Air Force and will determine which tool best suits its needs in the future. The new and modernized assets the Coast Guard expects to acquire under the Deepwater Program are intended to be used to help meet a wide range of missions. After the September 11, 2001, terrorist attacks, the Coast Guard’s priorities and focus had to shift suddenly and dramatically toward protecting the nation’s vast and sprawling network of ports and waterways. Coast Guard cutters, aircraft, boats, and personnel normally used for non-homeland security missions were shifted to homeland security missions, which previously consumed only a small portion of the agency’s operating resources. Although we have previously reported that the Coast Guard is restoring activity levels for many of its non-homeland security missions, the Coast Guard continues to face challenges in balancing its resources between the homeland and non-homeland security missions. In addition to the growing demands for homeland security missions, there are indications that the Coast Guard’s requirements are also increasing for selected non-homeland security missions. The Coast Guard’s heightened responsibilities to protect America’s ports, waterways, and waterside facilities from terrorist attacks owe much of their origin to the Maritime Transportation Security Act of 2002 (MTSA). This legislation, enacted in November 2002, established a port security framework that was designed, in part, to protect the nation’s ports and waterways from terrorist attacks by requiring a wide range of security improvements. The SAFE Port Act, which was enacted in October 2006, made a number of adjustments to programs within the MTSA-established framework, creating additional programs or lines of effort and altering others.
The additional requirements found in the SAFE Port Act have added to the resource challenges already faced by the Coast Guard, some of which are described below: Inspecting domestic maritime facilities: Pursuant to its guidance, the Coast Guard has conducted annual inspections of domestic maritime facilities to ensure that they are in compliance with their security plans. The SAFE Port Act added requirements that inspections be conducted at least twice per year and that one of these inspections be conducted unannounced. More recently, the Coast Guard has issued guidance requiring that unannounced inspections be more rigorous than before. Fulfilling the requirement for additional, and potentially more rigorous, inspections may require additional resources in terms of Coast Guard inspectors. Inspecting foreign ports: In response to an MTSA requirement, the Coast Guard established the International Port Security Program to assess and, if appropriate, make recommendations to improve security in foreign ports. Congressional directives have called for the Coast Guard to increase the pace of its assessments of foreign ports. However, to increase its pace, the Coast Guard may have to hire and train new staff, in part because a number of experienced personnel are rotating to other positions as part of the Coast Guard’s standard personnel rotation policy. Coast Guard officials also said that they have limited ability to help countries build on or enhance their own capacity to implement security requirements because the program does not currently have the resources or authority to directly assist countries with more in-depth training or technical assistance. Fulfilling port security operational requirements: The Coast Guard conducts a number of operations at U.S. ports to deter and prevent terrorist attacks.
Operation Neptune Shield, first released in 2003, is the Coast Guard’s operations order that sets specific security activities (such as harbor patrols and vessel escorts) for each port and specifies the level of security activities to be conducted at each port. As individual port security concerns change, the level of security activities also changes, which affects the resources required to complete the activities. Many ports are having difficulty meeting their port security requirements, with resource constraints being a major factor. Meeting security requirements for additional Liquefied Natural Gas (LNG) terminals: The Coast Guard is also faced with providing security for vessels arriving at four domestic onshore LNG import facilities. However, the number of LNG tankers bringing shipments to these facilities will increase considerably because of expansions that are planned or under way. As a result of these changes, Coast Guard field units will likely be required to significantly expand their security workloads to conduct new LNG security missions. Boarding and inspecting foreign vessels: Security compliance examinations and boardings, which include identifying vessels that pose either a high risk for noncompliance with international and domestic regulations or a high relative security risk to the port, are a key component in the Coast Guard’s layered security strategy. An increasing number of vessel arrivals in U.S. ports may impact the pace of operations for conducting security compliance examinations and boardings in the future. For example, in the 3-year period from 2004 through 2006, vessel arrivals rose by nearly 13 percent and, according to the Coast Guard, this increase is likely to continue. Moreover, officials anticipate that the increase in arrivals will also likely include larger vessels, such as tankers, that require more time and resources to examine.
At present, it is unclear to what extent increased demands on resources may impact the ability of Coast Guard field units to complete these activities on vessels selected for boarding. Establishing interagency operational centers: The SAFE Port Act called for establishment of interagency operational centers, directing the Secretary of DHS to establish such centers at all high-priority ports no later than 3 years after the Act’s enactment. The Coast Guard estimates the total acquisition cost of upgrading 24 sectors that encompass the nation’s high priority ports into interagency operations centers will be approximately $260 million. Congress funded a total of $60 million for the construction of interagency operational centers for fiscal year 2008. The Coast Guard has not requested any additional funding for the construction of these centers as part of its fiscal year 2009 budget request. However, as part of its fiscal year 2009 budget request, the Coast Guard is requesting $1 million to support its Command 21 acquisition project (which includes the continued development of its information management and sharing technology in command centers). So, while the Coast Guard’s estimates indicate that it will need additional financial resources to establish the interagency operational centers required by law, its current budget and longer term plans do not include all of the necessary funding. Updating area maritime security plans: MTSA, as amended, required that the Coast Guard develop, in conjunction with local public and private port stakeholders, Area Maritime Security Plans. The plans describe how port stakeholders will deter a terrorist attack or other transportation security incident or secure the port in the event such an attack occurs. These plans were initially developed and approved by the Coast Guard by June 2004. MTSA also requires that the plans be updated at least every 5 years. 
The SAFE Port Act added a requirement that the plans identify salvage equipment able to restore operational trade capacity. The Coast Guard, working with local public and private port stakeholders, is required to revise its plans and have them completed and approved by June 2009. This planning process may require a significant investment of Coast Guard resources, in the form of time and human capital at the local port level for existing plan revision and salvage recovery development as well as at the national level for the review and approval of all the plans by Coast Guard headquarters. While the Coast Guard continues to be at the center of the nation’s response to maritime-related homeland security concerns, it is still responsible for rescuing those in distress, protecting the nation’s fisheries, keeping vital marine highways operating efficiently, and responding effectively to marine accidents and natural disasters. Some of the Coast Guard’s non-homeland security missions are facing the same challenges as its homeland security missions with regard to increased mission requirements. Examples of these additional requirements include (1) revising Area Maritime Security Plans so they also cover natural disasters, (2) revising oil spill regulations to better protect the Oil Spill Liability Trust Fund from risks related to certain vessels with disproportionately low limits of liability, (3) patrolling and enforcing a Presidential declaration regarding new protected areas such as the Northwestern Hawaiian Islands Coral Reef Ecosystem Reserve, and (4) increasing polar activities commensurate with increased resource exploitation and vessel traffic in the Arctic. In closing, we would like to emphasize several key points as we continue to oversee the various Coast Guard initiatives discussed today.
First, now that the Coast Guard has made the decision to assume a greater management and oversight role in the Deepwater Program, sustained effort on a number of fronts will be needed for some time to come. Whether the Coast Guard will achieve its goals is largely contingent on continued strong leadership and a commitment to adhering to a knowledge-based acquisition approach that was lacking in the past. In addition, the Coast Guard originally turned to the private sector to manage Deepwater, in part, because the government lacked requisite expertise. Thus, the Coast Guard’s ability to build an adequate acquisition workforce is critical, and over time the right balance must be struck between numbers of government and contractor personnel. Similarly, the right balance must be struck between homeland and non-homeland security missions. In the aftermath of the September 11, 2001 terrorist attacks, the Coast Guard understandably shifted its focus to homeland security missions at the expense of non-homeland security missions. Congress passed and the President signed legislation that supported and reinforced this shift and further increased Coast Guard missions related to security. Our recent work on the Coast Guard’s homeland security programs has indicated that these missions continue to increase demands on resources. To further complicate the Coast Guard’s resource and mission balancing act, unexpected events such as terrorist attacks or natural disasters could result in major shifts in resources and operations. Thus, the Coast Guard will continue to face the challenge inherent in being a multi-mission force. Mr. Chairman, this concludes our testimony. We would be happy to respond to any questions Members of the Committee may have. For further information about this testimony, please contact John P. Hutton, Director, Acquisition and Sourcing Management, at (202) 512-4841, [email protected], or Stephen L.
Caldwell, Director, Homeland Security and Justice, (202) 512-9610, [email protected]. Other individuals making key contributions to this testimony include Michele Mackin, Assistant Director; Greg Campbell, Wayne Ekblad, Jessica Gerrard-Gough, Maura K. Hardy, Dawn Hoff, J. Kristopher Keener, Angie Nichols-Friedman, Scott Purdy, Ralph Roffo, Sylvia Schatz, April Thompson, and Tatiana Winger.

In 2005, the Coast Guard revised its Deepwater acquisition program baseline to reflect updated cost, schedule, and performance measures. The revised baseline accounted for, among other things, new requirements imposed by the events of September 11. The initially envisioned designs for some assets, such as the Offshore Patrol Cutter and Vertical Unmanned Aerial Vehicle, are being rethought. Other assets, such as the National Security Cutter and Maritime Patrol Aircraft, are in production. Table 2 shows the 2005 baseline and current status of selected Deepwater assets.

Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T (Washington, D.C.: Mar. 8, 2007).
Coast Guard: Preliminary Observations on Deepwater Program Assets and Management Challenges. GAO-07-446T (Washington, D.C.: Feb. 15, 2007).
Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764 (Washington, D.C.: Jun. 23, 2006).
Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring is Warranted. GAO-06-546 (Washington, D.C.: Apr. 28, 2006).
Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757 (Washington, D.C.: Jul. 22, 2005).
Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T (Washington, D.C.: Jun. 21, 2005).
Coast Guard: Deepwater Program Acquisition Schedule Update Needed.
GAO-04-695 (Washington, D.C.: Jun. 14, 2004).
Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380 (Washington, D.C.: Mar. 9, 2004).
Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T (Washington, D.C.: May 3, 2001).

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Deepwater Program is intended to replace or modernize 15 major classes of Coast Guard assets--including vessels, aircraft, and communications systems. At the program's start, the Coast Guard chose to use a system integrator, Integrated Coast Guard Systems, to design, build, deploy, and support Deepwater in a system-of-systems approach. In a series of reports, we have noted the risks inherent in this approach. With the Deepwater program under way, the Coast Guard's priorities and focus shifted after September 11 toward homeland security missions, such as protecting the nation's ports and waterways. The 2002 Maritime Transportation Security Act and the 2006 SAFE Port Act required a wide range of security improvements. GAO is monitoring the acquisition of Deepwater and the Coast Guard's ability to carry out its numerous missions. This testimony addresses: (1) changes the Coast Guard is making as it assumes a larger role in managing the Deepwater Program and (2) challenges the Coast Guard is facing in carrying out its various missions. To conduct this work, GAO reviewed key documents, such as Deepwater acquisition program baselines, human capital plans, and Coast Guard budget and performance documents.
For information on which GAO has not previously reported, GAO obtained Coast Guard views. The Coast Guard generally concurred with the information. With a recognition that too much control had been ceded to the system integrator under the Deepwater Program, the Coast Guard began this past year to shift the way it is managing the acquisition. Significant changes pertain to: (1) increasing government management of the program as part of the Coast Guard's reorganized Acquisition Directorate; (2) acquiring Deepwater assets individually as opposed to through a system-of-systems approach; (3) improving information to analyze and evaluate progress; and (4) developing an acquisition workforce with the requisite contracting and program management skills. Many of these initiatives are just getting under way and, while they are positive steps, the extent of their impact remains to be seen. The Coast Guard will likely continue to face challenges balancing its various missions within its resources for both the short and long term. For several years, we have noted that the Coast Guard has had difficulties fully funding and executing both homeland security missions and its non-homeland security missions. GAO's recent and ongoing work has shown that the Coast Guard's requirements continue to increase in such homeland security areas as providing vessel escorts, conducting security patrols of critical infrastructure, and completing inspections of maritime facilities here and abroad. In several cases, the Coast Guard has not been able to keep up with these security demands, in that it is not meeting its own requirements for vessel escorts and other security activities at some ports. In addition, there are indications that the Coast Guard's requirements are also increasing for selected non-homeland security missions. 
Since 2001, we have reviewed the Deepwater Program and have informed Congress, the Department of Homeland Security, and the Coast Guard of the risks and uncertainties inherent with such a large acquisition. In March 2004, we made a series of recommendations to the Coast Guard. The Coast Guard has taken actions on many of them. Three recommendations remain open, as the actions have not yet been sufficient to allow us to close them. In past work on Coast Guard missions, GAO made recommendations related to strategic plans, human capital, performance measures, and program operations. |
During the past decade, both international and national efforts have been made to control ozone-depleting chemicals. Shortly after the United Nations Environment Programme (UNEP) developed the Montreal Protocol on Substances that Deplete the Ozone Layer (Protocol), the Congress added title VI to the Clean Air Act to supplement the Protocol’s terms and conditions. Amendments to the Protocol and regulations implementing title VI have since expanded the restrictions on individual ozone-depleting chemicals. An ozone depletion potential (ODP) index is used under the Protocol and the Clean Air Act to gauge a substance’s relative potential to deplete stratospheric ozone. This index primarily reflects the substance’s (1) likely lifetime in the atmosphere and (2) efficiency in destroying ozone compared with chlorofluorocarbon-11 (CFC-11), a widely used refrigerant and major ozone depleter that is being phased out under the Protocol and the Clean Air Act. On the basis of scientific assessments performed in December 1991 and updated in June 1992, UNEP calculated that methyl bromide has an ODP of 0.7, or 70 percent of CFC-11’s ozone-depleting potential. The Protocol originally placed controls on eight major ozone depleters—five chlorofluorocarbons (CFC) and three halons—and provided for technical and scientific assessments of potential ozone-depleting substances to be undertaken at least every 4 years. In November 1992, following the update of UNEP’s 1991 assessment, the parties to the Protocol first imposed controls on methyl bromide. They agreed to accept UNEP’s calculation of methyl bromide’s ODP as 0.7, and they amended the Protocol to freeze production of the substance at 1991 levels, beginning in January 1995. They did, however, create an exemption for the substance’s preshipment and quarantine uses. The parties also agreed to decide by January 1, 1996, how the freeze would affect the consumption of methyl bromide in developing countries. 
(The Protocol allows methyl bromide producers to produce 10 percent above 1991 levels for export to developing countries.) The parties further agreed to consider imposing additional controls on methyl bromide at their November 1995 meeting, after they had reviewed the results of UNEP’s next round of scientific and technical assessments. These assessments were completed in late 1994.

Title VI of the Clean Air Act identifies many substances that EPA is to list as ozone depleting and requires the agency to list any others that have an ODP of 0.2 or greater or that it finds may reasonably be anticipated to cause harm to the ozone layer. These substances are to be listed as either class I or class II, depending primarily on their ODP. The title authorizes EPA to add substances to either list and requires the agency to update both periodically. Substances that have an ODP of 0.2 or greater are to be listed as class I, and EPA is to take action to phase out their production no later than 7 years after they are listed. The schedule for phasing out the less threatening class II substances is less stringent.

In December 1991, three environmental groups petitioned EPA under the Clean Air Act to list methyl bromide as a class I substance. EPA concluded, in large part on the basis of UNEP’s calculation, that methyl bromide has an ODP of 0.7, well above the act’s 0.2 threshold for listing as a class I substance. In December 1993, EPA issued a rule first freezing and then banning the production and importation of methyl bromide. The freeze, which is at 1991 levels, took effect on January 1, 1994. No further reduction from 1991 levels is required until January 1, 2001, when the ban is mandated to begin. EPA imposed no further reductions during this 7-year period because it recognized that the loss of methyl bromide would be costly and it wanted to allow as much time as possible for the development of alternatives.
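The listing and phaseout scheme described above reduces to a simple threshold rule. The sketch below is an illustrative rendering of that rule in Python, not EPA's actual listing procedure; the function name and the boolean flag are our own.

```python
def classify_substance(odp, anticipated_harm=False):
    """Illustrative sketch of the Clean Air Act title VI listing rule.

    Substances with an ODP of 0.2 or greater are listed as class I,
    triggering a production phaseout no later than 7 years after
    listing; other ozone depleters may be listed as class II, which
    carries a less stringent phaseout schedule.
    """
    if odp >= 0.2:
        return "class I"
    if anticipated_harm:
        return "class II"
    return "not listed"

# Methyl bromide: UNEP's calculated ODP of 0.7 is well above the threshold.
print(classify_substance(0.7))   # class I
print(classify_substance(0.05))  # not listed
```

Under this rule, even the 0.6 ODP from the later 1994 assessment would leave the class I listing, and hence the 7-year phaseout clock, unchanged.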
(In promulgating the rule, EPA estimated both the costs and benefits of phasing out methyl bromide. The U.S. Department of Agriculture (USDA), the University of California at Berkeley, and the University of Florida have also estimated the costs of banning methyl bromide’s agricultural uses. App. I summarizes these studies.) Table 1 compares the controls placed on methyl bromide by the Montreal Protocol and by EPA’s regulation.

Methyl bromide is a highly effective fumigant used to control a broad spectrum of pests—insects, nematodes (parasitic worms), weeds, pathogens (bacteria, fungi, and viruses), and rodents. The agricultural community today uses it for over 100 crops. U.S. production in 1993 was over 60 million pounds. About 80 percent is used to fumigate the soil before planting crops. Another 19 percent is used to fumigate harvested agricultural commodities during storage—including those being exported from and imported into the United States—and to fumigate structures such as food processing plants, warehouses, mills, and grain elevators. A small amount is used in the production of other chemicals.

According to EPA, methyl bromide is a very toxic substance whose effects on human health depend on the concentration and duration of the exposure. Exposure to the pesticide can damage the lungs, eyes, and skin and, in severe cases, cause the central nervous and respiratory systems to fail. Gross permanent disabilities or death may result. Agricultural field workers and structural fumigators have developed respiratory, gastrointestinal, and neurological problems, including inflammation of nerves and organs and degeneration of the eyes. EPA officials told us that exposures to high concentrations have resulted in deaths. UNEP’s scientific assessments of ozone-depleting substances have concluded that methyl bromide is a significant ozone depleter.
Although some uncertainties are involved in these assessments, the participating scientists are confident that methyl bromide’s ODP will not drop below the 0.2 level that triggers the phaseout of the pesticide as a class I substance under the Clean Air Act.

The atmosphere is made up of distinct layers, each of which has its own composition of gases and natural processes. The troposphere extends from the earth’s surface up to about 6 miles, and the stratosphere extends from the troposphere to about 30 miles above the surface. Although ozone can be harmful in the troposphere—it is a primary constituent of smog—in the stratosphere it helps protect life on earth from the sun’s ultraviolet radiation. (See fig. 1.) Ozone is continuously being produced naturally in the stratosphere by a photochemical reaction caused by the sun’s rays. It is also continuously being removed by other chemical reactions. According to scientists involved in the UNEP assessment, the production and destruction of ozone are normally in balance. However, as emissions from human uses of ozone-depleting chemicals reach the stratosphere, more ozone is lost than is created, and the ozone layer is thinned. Similarly, methyl bromide is continuously being produced and removed from the atmosphere by natural processes—scientists estimate that 60 percent or more of the methyl bromide in the atmosphere may be released from the oceans. Again, the UNEP scientists believe that the amounts produced and removed by natural processes tend to be in balance. Therefore, their concern about methyl bromide as an ozone depleter is focused on emissions from human uses.

The scientific basis for the Montreal Protocol’s freeze and EPA’s phaseout was principally a 1992 assessment completed under the auspices of UNEP. This assessment, which scientists from around the world performed for the parties to the Montreal Protocol, concluded that the best estimate of methyl bromide’s ODP was 0.7.
The 1994 UNEP scientific assessment found that the pesticide’s ODP is 0.6. Producers of methyl bromide and members of the agricultural community have expressed concern about UNEP’s estimate of the substance’s ODP. More specifically, they have questioned UNEP’s calculation of methyl bromide’s “lifetime” in the atmosphere, which the 1994 UNEP assessment calculated to be about 1 year. This calculation is important because the less time the substance is in the atmosphere, the less chance it has of reaching the stratosphere and depleting the ozone layer. UNEP’s calculation of the pesticide’s lifetime assumes that significant amounts of methyl bromide are being removed from the atmosphere through chemical reactions in the troposphere and through interaction with the oceans. However, some in industry and the agricultural community have suggested that soil and vegetation may also remove significant amounts of methyl bromide from the atmosphere. Scientists who participated in the UNEP assessment believe that the range of uncertainty factored into their estimates of methyl bromide’s lifetime is sufficient to allow for the possibility that the substance may be removed by soil and vegetation. The other major part of the ODP measurement is the relative efficiency of methyl bromide in destroying ozone. On the basis of laboratory measurements, the scientists who participated in the UNEP assessment estimate that bromine, a major component of methyl bromide, is about 50 times more efficient in destroying ozone than the chlorine in chlorofluorocarbons. Additional research is addressing the scientific uncertainties currently involved in calculating methyl bromide’s ODP. At this point, the scientists associated with the UNEP assessment anticipate only a further refinement of the ODP calculation. They are confident that the research results will not bring the ODP below 0.3. 
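The report names the two inputs to the ODP calculation (atmospheric lifetime and relative efficiency in destroying ozone) without giving the formula. A commonly used semi-empirical approximation combines exactly those factors, and the sketch below illustrates it. The CFC-11 lifetime of about 50 years and the fractional-release ratio of roughly 1.2 are illustrative assumptions, not figures from this report.

```python
def semi_empirical_odp(alpha, n_halogen, molar_mass, lifetime,
                       release_ratio=1.0,
                       cfc11_n_cl=3, cfc11_molar_mass=137.37,
                       cfc11_lifetime=50.0):
    """Semi-empirical ODP relative to CFC-11 (illustrative sketch).

    alpha:         ozone-destruction efficiency of the halogen relative
                   to chlorine (~50 for bromine, per the UNEP estimate)
    n_halogen:     halogen atoms per molecule (CFC-11 carries 3 chlorines)
    lifetime:      atmospheric lifetime in years
    release_ratio: fractional halogen release relative to CFC-11
                   (assumed value, not from the report)
    """
    return (alpha * (n_halogen / cfc11_n_cl)
            * (cfc11_molar_mass / molar_mass)
            * (lifetime / cfc11_lifetime)
            * release_ratio)

# Methyl bromide (CH3Br): one bromine atom, molar mass ~94.94 g/mol,
# ~1-year lifetime per the 1994 UNEP assessment.
odp = semi_empirical_odp(alpha=50, n_halogen=1, molar_mass=94.94,
                         lifetime=1.0, release_ratio=1.2)
print(round(odp, 2))  # 0.58, in line with the 1994 assessment's 0.6
```

The formula also makes the industry dispute concrete: the ODP scales linearly with the assumed lifetime, so if soil and vegetation sinks shortened the roughly 1-year estimate, the ODP would fall proportionally. That is why the lifetime is the contested input.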
EPA, USDA, and industry representatives generally agree that chemical substitutes and other alternatives are available today to manage many of the pests currently controlled with methyl bromide. They further agree that no one substitute or alternative is available for methyl bromide’s many uses and that research is needed to identify the alternatives or combinations of alternatives that can economically and effectively replace the pesticide’s individual uses. USDA and the agricultural community, however, are less optimistic than EPA that economical and effective alternatives will be identified by the time the ban on methyl bromide goes into effect in 2001. EPA, USDA, and industry are sponsoring or conducting research on alternatives, but it is not clear at this point what this research will be able to achieve over the next 5 years. According to EPA, there are many chemical and nonchemical alternatives to methyl bromide. These include fumigants that can kill a range of pests similar to those killed by methyl bromide. Other chemicals—for example, insecticides, fungicides, and herbicides—with a more limited range are also available. Nonchemical alternatives include techniques such as rotating crops to avoid a buildup of pests, using plants that are more pest-resistant, and using organisms like parasitic bacteria to control weeds and nematodes. These alternatives, according to EPA, are technically capable of controlling many of the pests currently controlled by methyl bromide. (In its 1994 report, UNEP’s Methyl Bromide Technical Options Committee said that it had identified a technically feasible alternative, either currently available or at an advanced stage of development, for over 90 percent of the uses being made of methyl bromide in 1991. According to the report, alternatives were not identified for controlling some soilborne viruses and other pathogens and for some quarantine procedures.) 
The key question—assuming that the alternatives do not pose any unmanageable health and environmental risks—is which alternative or combination of alternatives is most effective and economical in a given situation. According to USDA officials, alternatives are not currently available for some important uses, such as treating certain quarantined commodities and responding to certain incidents or emergencies. The officials noted, for example, that ships carrying infested commodities may dock at U.S. ports, military equipment contaminated with soilborne pests may be brought back to the United States, or a destructive pest, such as the Mediterranean fruit fly, may be found in an area of California or another state. In these circumstances, they said, fumigation with methyl bromide is the only effective way to deal with the pests. USDA officials also pointed out that numerous scientific, economic, and environmental variables have to be considered in evaluating potential replacements. Selecting a replacement can be further complicated because a use can be quite specific. For example, alternatives for preplant soil fumigation (a technique for killing pests in the soil before planting) will need to be selected on the basis of such factors as the crop grown, the pests present in the soil, the climate, and the geographical location. Government and industry researchers believe that considerable research and field testing are needed to define the alternatives’ efficacy, applicability, and cost-effectiveness in given situations. To fund research on alternatives to methyl bromide, EPA and USDA spent about $13.3 million in fiscal year 1995 and, according to agency officials, a similar amount has been requested for fiscal year 1996. However, the Crop Protection Coalition estimates that about $60 million is needed annually for this research. 
According to the Coalition, the public sector has not mobilized sufficient resources and funds to achieve meaningful results before 2001 in either preplant or postharvest applications. The Coalition also believes that this research needs to be more effectively coordinated. The Coalition, with USDA’s and EPA’s cooperation, is attempting to consolidate federal and private research activities into a single agenda reflecting a consensus on priorities. In July 1995, the Coalition issued a report on the status of research activities to (1) help prioritize projects for funding, (2) identify gaps in current research, and (3) improve the transfer of technology to users of methyl bromide. According to a USDA official, the Coalition’s report and research agenda will be discussed at an international research conference on alternatives and methods for reducing methyl bromide emissions that the Department is cosponsoring in November 1995 with the Coalition and EPA. USDA, the Methyl Bromide Working Group—which represents methyl bromide producers and distributors—and the Crop Protection Coalition believe that very few new chemical alternatives will be available when the ban on methyl bromide goes into effect. They said that substantial development costs, research requiring multiple planting cycles, and federal/state regulatory reviews are involved in putting a new chemical on the market. They noted that moving a new pesticide from development to commercialization can take up to 10 years and cost a manufacturer from $50 million to $70 million. As part of this process, the manufacturer must develop the health and safety data that EPA requires to register a pesticide for use. Under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), EPA decides whether to register a pesticide after assessing, among other things, the potential effects on human health and the environment of using a pesticide product according to the directions on the label. 
A separate registration is required for each new chemical, and an existing registration has to be amended for a new or different use. The registration process can take many years, depending on the type of substance, the complexity of the testing needed, the gaps in the data, and the nature of EPA’s findings from the health and safety data submitted for the agency’s review. However, EPA recently established an expedited system for reviewing alternatives to methyl bromide. According to EPA, to date, no new chemicals and only a few new uses of existing chemicals have been submitted to EPA as potential alternatives to methyl bromide. Under 1988 amendments to FIFRA, all pesticides registered before November 1984 must be reviewed for reregistration and the data supporting their registrations must be brought up to current scientific standards. Methyl bromide and a number of pesticides that have been approved for use on pests now controlled by methyl bromide are included in this group of chemicals. USDA has identified six of these chemicals as potential alternatives to methyl bromide. For each of the alternatives identified by USDA, EPA has found potentially serious environmental and/or health and safety concerns. According to USDA officials, regulatory actions by EPA to ban or limit the use of these or other pesticides because of health and environmental concerns could exacerbate the economic effects of the methyl bromide phaseout by eliminating potentially effective alternatives. However, EPA officials told us that, under FIFRA, the agency balances risks and benefits, and if the benefits of using a pesticide outweigh the potential risks to people and the environment, then EPA may register or reregister the pesticide. The officials said that EPA is likely to reregister many of the chemical alternatives to methyl bromide after adopting appropriate risk mitigation measures, such as label changes. (App. 
II lists these and other potential alternatives to methyl bromide’s agricultural uses and describes various concerns raised by EPA and others. The appendix also lists recent studies and reports by EPA, USDA, industry, and environmental groups that provide additional details on alternatives.) Some technically proven methods for reducing methyl bromide emissions, such as better sealing of fumigation enclosures, are available. In addition, industry is working to develop technology that can recapture and recycle a very high percentage of the methyl bromide used to fumigate commodities and structures. According to UNEP’s Methyl Bromide Technical Options Committee, a few pieces of methyl bromide recovery equipment are already in use, and prototype systems capable of recycling recaptured gas for some uses will be evaluated by the end of 1995. Although using these technologies could substantially reduce emissions, the Clean Air Act does not exempt production for use in such systems from the ban. However, using recovery and recycling technology would extend the existing supply of methyl bromide when the ban on production and importation becomes effective. In August 1995, EPA’s Assistant Administrator for Air and Radiation said that the agency is aware of and understands the agricultural community’s concern that it does not currently have satisfactory substitutes for all uses of methyl bromide. The Assistant Administrator said that alternatives are available to effectively control many of the pests on which methyl bromide is used and that research on additional alternatives is taking place. According to the Assistant Administrator, the critical issue is whether adequate alternatives will be available by the time the phaseout deadline arrives and, if they are not available, the agency will seek an appropriate solution. 
According to EPA, alternatives do not need to be identical to methyl bromide but they must be environmentally acceptable and must effectively and economically manage those pests that are now being controlled by the pesticide. (As discussed later, the Clean Air Act would have to be amended to give EPA the authority to grant exemptions from the ban.)

Because methyl bromide is an important pesticide worldwide, a ban that took effect in the United States before similar actions were implemented in other countries could create an “uneven playing field” in international trade for U.S. producers of various agricultural commodities. The need to use more costly and/or less effective alternatives could increase the costs and reduce the yields for growers of U.S. crops. In addition, some countries require certain U.S. commodities to be treated with methyl bromide as a condition of entry. These exports would likely be lost unless acceptable alternatives could be agreed upon with the importing countries. Likewise, the United States requires treatment with methyl bromide as a condition of entry for certain imports. The impact of the U.S. ban on agricultural trade, however, will depend on the controls other countries have placed on methyl bromide and on the cost-effectiveness of the alternatives available when the U.S. ban goes into effect in 2001.

Although the parties to the Montreal Protocol are to consider placing additional controls on methyl bromide at their November 1995 meeting, they may not agree to ban the pesticide. According to U.S. officials, the United States will propose a ban, but contacts with representatives of other countries indicate that a wide range of proposals will be made at the meeting.
For example, the technical assessment report prepared for the parties by UNEP’s Methyl Bromide Technical Options Committee states that individual committee members estimated feasible reductions in methyl bromide emissions ranging from 50 percent by 1998 to only a few percent by 2001. Even if the parties agree to a ban, they may give developing countries special consideration. The parties have recognized that these countries may not have the technical or financial resources to switch to alternatives or that a change may have a greater economic impact on them than on more developed countries. For example, in addition to financial and technical assistance, the Protocol gave these countries a 10-year grace period to implement the controls on CFCs and halons. The Methyl Bromide Technical Options Committee is presenting several options for the parties to consider if additional controls are placed on methyl bromide. One proposal would establish a 9-year grace period for developing countries, with reviews every 3 years to determine whether the grace period should be adjusted. Another option would cap or freeze the quantities used by developing countries and grant exemptions for preshipment and quarantine uses. A few countries have acted independently to control their methyl bromide emissions. According to EPA, the Netherlands phased out its use of methyl bromide for soil fumigation in 1992 because of concerns that the pesticide contaminates groundwater. Germany and Switzerland have also prohibited its use on soil. Denmark and Sweden plan to phase out the pesticide’s uses by 1998, as does Italy by 2000, although Italy plans to retain essential uses. The European Union plans a 25-percent reduction in use by 1998, and Canada has drafted controls calling for a 25-percent reduction by 1998. In response to a 1994 survey by the Methyl Bromide Technical Options Committee, 39 countries reported information on their use of methyl bromide for preplant soil fumigation. 
The committee also obtained estimates from industry for nine additional countries. Although the use of methyl bromide in many of these countries is small (developing countries account for about 18 percent of its use), the crops produced with it are primarily high-value cash crops, usually for export. Because these crops—for example, strawberries, tomatoes, peppers, cucumbers, and various other produce—are similar to those grown in the United States with methyl bromide, producers in these countries potentially compete with U.S. growers for both domestic and international markets for these commodities. Studies done by USDA and for California and Florida, the two states that are the largest users of methyl bromide for soil fumigation, have concluded that alternatives to the substance are less effective in controlling soil pests and often cost more (see app. I). According to USDA officials, the higher costs and reduced yields would put U.S. growers at a disadvantage if growers in other countries could continue to use methyl bromide. For example, the Florida study stated that the use of methyl bromide is critical because of the state’s environment. According to the study, producers faced with substantially reduced revenues would reduce their acreage for fresh fruit, vegetable, and fresh citrus crops. The study concluded that the primary beneficiary would be Mexico, which, the study assumed, would be given longer, as a developing country, to use methyl bromide under any future agreement reached under the Montreal Protocol. If Mexico or other developing countries expand their use of methyl bromide, the environmental benefits gained by phasing out the pesticide’s use in the United States would be at least partially offset. EPA’s Methyl Bromide Program Director told us that the U.S. agricultural community’s concerns about the uneven playing field may be valid. 
He said that Mexico may increase its production of such fruits and vegetables as tomatoes and strawberries, which are major crops for California and Florida. He added, however, that additional study would be needed to determine whether Mexico could realistically market increased amounts of these commodities in the United States. For example, could strawberries be shipped to market in time to maintain the necessary freshness? And would these fruits and vegetables be grown in Mexico at the same time of year as in the United States? According to USDA officials, the Florida study and two recent USDA studies document the competition that the United States faces from developing countries, especially Mexico, in markets for crops whose production relies heavily on the use of methyl bromide. The officials said, for example, that such competition occurs in the cucumber market in March and April, in the bell pepper market from January through March, and in the tomato market from January through April. The officials also said that Mexico has supplied nearly all of the strawberries imported into the United States over the last 5 years. Although less than 1 percent of the methyl bromide produced in the United States is used to treat quarantined commodities, this use is important because it permits trade in these commodities. During quarantine treatments, which are usually done at international borders, the commodities are fumigated to kill pests that could cross geographical barriers and infect susceptible crops or commodities. Quarantine requirements are negotiated between the importing and exporting countries for individual commodities, and the treatments are governed by strict regulations that require very high efficacy levels. For example, USDA’s Animal and Plant Health Inspection Service (APHIS) requires efficacy levels of 99.9968 percent for most treatments. 
To meet these efficacy levels, APHIS requires that certain imports be treated with methyl bromide because of its effectiveness, and some other countries, notably Japan, likewise require this treatment for certain imports from the United States. APHIS currently requires fumigation with methyl bromide or an alternative treatment as a condition of entry into the United States for 19 fruits, 14 vegetables, and 7 nuts, seeds, and miscellaneous foods coming from certain countries (see app. III). (APHIS also requires these treatments for various nonfood imports, including unprocessed seeds and nuts, hays and straw, cotton products, gums, bagging, and brassware.) About 90 percent of some U.S. imports, including apricots, nectarines, grapes, peaches, plums, and yams, are affected by these requirements. According to APHIS officials, acceptable alternatives are generally not available and the loss of methyl bromide will lead APHIS to ban imports of many economically important commodities.

An April 1993 USDA study of nine imported fruits found that the loss of imports would reduce supplies and increase prices. According to the study, the higher prices would increase the revenues to U.S. producers by $3.0 billion to $3.3 billion over 5 years. However, the losses to U.S. consumers from paying the higher prices would range from $4.7 billion to $5.0 billion over 5 years. The study further found that many of the imported items fill an important niche in U.S. supplies. For example, the study said that apricots, grapes, nectarines, peaches, and plums from Chile enter the United States during the winter when none or nearly none of these items are produced domestically.

In addition, U.S. exports worth over $400 million were fumigated with methyl bromide in 1994 (see app. IV). If the United States bans methyl bromide, an acceptable alternative treatment must be negotiated with the receiving countries.
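The producer gains and consumer losses reported in the April 1993 USDA import study imply a net welfare loss that the study figures leave to the reader; pairing the reported bounds gives the implied range (our arithmetic, not a figure stated in the study):

```python
# Figures from the April 1993 USDA study (billions of dollars over 5 years).
producer_gain = (3.0, 3.3)   # added revenues to U.S. producers
consumer_loss = (4.7, 5.0)   # losses to U.S. consumers from higher prices

# Implied net welfare loss: pair the smallest consumer loss with the
# largest producer gain for the low bound, and vice versa for the high bound.
net_low = consumer_loss[0] - producer_gain[1]
net_high = consumer_loss[1] - producer_gain[0]
print(f"implied net loss: ${net_low:.1f} billion to ${net_high:.1f} billion over 5 years")
```

Under that pairing, consumers’ losses exceed producers’ gains by roughly $1.4 billion to $2.0 billion over the 5 years.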
According to USDA officials, these negotiations can take several years and may not be successful, especially if other producers can continue to use methyl bromide and meet the quarantine requirements. EPA officials told us that they are more optimistic than USDA officials that acceptable alternatives will be available for imports and can be agreed upon for exports.

On the basis of our review, we have concluded that the Clean Air Act does not currently authorize EPA to grant exemptions from the ban on methyl bromide for domestic agricultural uses, including preshipment and quarantine treatments. Supplies of methyl bromide available when the ban goes into effect on January 1, 2001, can be used, but no additional amounts can be produced or imported for domestic uses. The Congress, in section 604 of the act, specified the conditions under which EPA may grant exemptions from the production phaseout of class I ozone-depleting substances, including methyl bromide. This section details six categories of substances for which exemptions may be granted. For four of the six categories, the exemptions are restricted to specific chemicals named in the relevant provisions, none of which is methyl bromide. For the remaining two categories—chemicals used in medical devices and exports to developing countries—EPA is authorized to promulgate exemptions for any class I substance after giving notice and an opportunity for public comment. Neither section 604 nor any other provision of title VI grants EPA general authority to issue essential use exemptions. We identified no current uses of methyl bromide in medical devices, and it appears that an exemption for this purpose would not be applicable. However, methyl bromide could qualify for an exemption under the export provision of section 604(e).
That provision imposes only three limits on the availability of the exemption: (1) it authorizes the production of only “limited quantities” (not defined in the provision), (2) the substance may be exported only to developing countries that are parties to the Montreal Protocol, and (3) the export may be only for the purpose of “satisfying the basic domestic needs of such countries.”

UNEP’s scientific assessments indicate that emissions from human uses of methyl bromide cause significant ozone depletion and should be controlled. However, a phaseout of the substance could adversely affect some parts of U.S. agriculture and trade unless adequate—that is, environmentally acceptable, effective, and economical—alternatives are identified before the ban takes effect in 5 years. More progress in identifying alternatives is being made for some uses of methyl bromide than for others. If adequate alternatives are not available by the time the ban takes effect, exemptions from the ban may be needed for some domestic uses until alternatives can be developed. However, EPA does not currently have the authority to grant exemptions for the continued production and/or importation of methyl bromide for domestic uses.

To provide for an orderly phaseout of methyl bromide, we recommend that the Administrator, EPA, seek changes to the Clean Air Act to authorize the agency to grant exemptions from the ban for essential uses. This authority should provide for EPA to grant exemptions after determining that adequate alternatives for a particular use are not available and that the adverse impact of not having methyl bromide for that use outweighs the negative effects on human health and the environment of further production and importation.

We provided copies of a draft of this report to EPA and USDA for their review and comment.
On November 3, 1995, we met with USDA officials, including the Chairman of the USDA Ad Hoc Committee for Alternatives to Methyl Bromide and the Deputy Director of the National Agricultural Pesticide Impact Assessment Program. The USDA officials generally agreed with the report’s findings. The officials said that overall the report is balanced and presents the important issues and viewpoints associated with the use of methyl bromide. The officials again stressed their positions that practical or cost-effective alternatives are not available for many of methyl bromide’s uses and that a unilateral ban on the pesticide is likely to hurt U.S. competitiveness in world agricultural markets.

On November 7, 1995, we met with EPA officials, including the Methyl Bromide Program Director in the Office of Air and Radiation and the Deputy Director of the Policy and Special Projects Staff in the Office of Pesticide Programs. The officials described the report’s summarization of available information on the agricultural, economic, environmental, and health effects of the planned phaseout of methyl bromide as generally accurate. However, they expressed concern that the report leaves the impression that the outlook for finding alternatives to methyl bromide is more dire than warranted. In their view, the fact that no single chemical or other alternative is expected to replace methyl bromide for all of its uses does not mean that viable, economical alternatives will not be available for most uses by 2001. Furthermore, they added, even though viable, economical alternatives may not be found for some uses by 2001, current projections of large losses resulting from the phaseout are by no means reliable. EPA officials indicated that the agency would look at the need for exemptions and determine whether EPA has the authority to grant them as the deadline for the ban approaches. The officials stated that the focus now should be on identifying alternatives.
We believe that our report accurately depicts the availability of alternatives to methyl bromide at this time. We have made no judgment as to whether the alternatives will prove to be inadequate for many uses, as USDA officials have suggested, or for only a few, as EPA officials have suggested. In either case, we believe that EPA will need authority to grant exemptions. Although EPA could wait to seek such authority until the deadline approaches, it will need some lead time to propose changes to the Clean Air Act, have them approved, and issue implementing regulations.

EPA and USDA also provided some technical comments on our draft report. We have revised our report as appropriate in response to these comments.

We conducted our work from November 1994 through November 1995 in accordance with generally accepted government auditing standards. We interviewed officials from EPA, USDA, the Executive Office of the President, and the United Nations Environment Programme. We also interviewed representatives of the Methyl Bromide Working Group (producers and distributors) and the Crop Protection Coalition (a broad spectrum of methyl bromide users). In addition, we reviewed available studies on methyl bromide’s contribution to the depletion of the ozone layer and economic and technical assessments of a phaseout. We also reviewed applicable laws and regulations and public comments during the proposal stage of EPA’s phaseout regulation. Moreover, we attended conferences on alternatives to methyl bromide and on the status of scientific knowledge concerning methyl bromide’s role in ozone depletion. Appendix V more fully discusses our scope and methodology.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the Secretary of Agriculture, the Administrator of EPA, and other interested parties.
We will make copies available to others upon request. Please call me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix VI.

Methyl bromide is used primarily for agricultural purposes, principally for fumigating (1) the soil before planting (preplant soil fumigation) and (2) commodities after harvesting (commodity fumigation). The costs and benefits of a ban on these uses were analyzed by the Environmental Protection Agency (EPA) during the promulgation of its phaseout rule. We also identified three other studies of the potential economic impact of a phaseout on agricultural users. The U.S. Department of Agriculture’s (USDA) National Agricultural Pesticide Impact Assessment Program studied the effects of a phaseout on 21 crops in six states, and the University of California at Berkeley and the University of Florida examined the impact of a phaseout in their states. Each of these studies compared the projected costs and crop yields for likely replacements with those for methyl bromide and found that growers would incur significant losses because of a ban on agricultural uses of methyl bromide. The USDA study also found that consumers would suffer a loss because supplies would be reduced and prices would be higher. Each study based its economic estimates on alternatives available at the time the study was conducted. The economic impact could change if more effective or less costly alternatives are identified in the future.

The studies by EPA and USDA arrived at substantially different estimates of the impact of a ban on methyl bromide. However, these estimates could not be easily compared because the studies made different assumptions, differed in their scope, and used different methodologies and cost data. The California and Florida studies were more limited in their scope than either the EPA or USDA studies. We did not independently evaluate these studies.
In 1993, EPA reviewed the costs and benefits of its regulatory action to phase out the production and importation of methyl bromide. This study included information on the costs and effectiveness of potential new alternatives by the year 2001 and on the costs and benefits of improving the use of existing alternatives. On the basis of this study, EPA estimated that the total costs of a phaseout of methyl bromide between 1994 and 2010 would be $1.7 billion to $2.3 billion. EPA’s cost analysis examined the likely range of costs for the alternatives and coupled these assumptions with a Monte Carlo analysis, presenting a set of costs (median, mean, minimum, and maximum) that could be expected with a methyl bromide phaseout in 2001. The $1.7 billion figure represented the estimated median cost, and the $2.3 billion figure represented the mean cost. The minimum and maximum costs were estimated at approximately $7 million and roughly $16 billion, respectively.

According to EPA, some available alternatives, if used after 2001, may indeed prove to be more expensive than methyl bromide, and their users may receive lower profits if the increases cannot be passed on to consumers. However, EPA said that it has found that the effects of regulatory actions that remove pesticides from the market are mitigated over time as new pest control technologies are introduced and adjustments are made to compensate for the loss of the pesticide through alternative pest control practices.

EPA estimated that the benefits of the phaseout would be between $244 billion and $952 billion. This estimate was based primarily on avoided cases of nonmelanoma cancers. According to the study, in the longer term (until 2160), a total of 2,800 skin cancer fatalities in the United States would be avoided because of the phaseout. The benefits for the period from 1994 through 2010 were estimated to be between $14 billion and $56 billion.
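The mechanics of a Monte Carlo cost estimate of the kind EPA’s cost analysis describes can be sketched in a few lines. The cost ranges below are hypothetical placeholders (the report does not give EPA’s actual input parameters); the sketch only illustrates how sampling skewed cost ranges per end use produces the median, mean, minimum, and maximum summary that EPA reported, including a median below the mean:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# Hypothetical cost ranges per end use (millions of dollars); the report
# does not give EPA's actual input parameters.
use_cost_ranges = {
    "preplant soil fumigation": (500, 12_000),
    "commodity fumigation": (100, 2_500),
    "structural fumigation": (10, 1_500),
}

def one_draw():
    # Triangular draws with the mode at the low end skew costs low, which
    # is one way a median ($1.7B) can fall below the mean ($2.3B).
    return sum(random.triangular(lo, hi, lo) for lo, hi in use_cost_ranges.values())

draws = [one_draw() for _ in range(10_000)]
print(f"median ${statistics.median(draws):,.0f}M  mean ${statistics.mean(draws):,.0f}M")
print(f"min    ${min(draws):,.0f}M  max  ${max(draws):,.0f}M")
```

Each draw samples a plausible cost for every end use and sums them; repeating the draw many times yields the distribution from which the summary statistics are taken.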
The analysis reflected key assumptions about emissions of methyl bromide from human activities, the impact of bromine on ozone, and the likely growth in use of methyl bromide without regulations. The range in values for benefits results from different estimates of the value of a human life. EPA recognized but did not calculate the benefits of avoiding other health and environmental problems caused by increased ultraviolet radiation, such as damage to plants and animals. EPA also did not consider the possible adverse effects on humans, plants, and animals of contact with methyl bromide during its application.

In 1993, USDA published a study of the effects on U.S. agriculture of banning methyl bromide, under the National Agricultural Pesticide Impact Assessment Program. The study showed that actions to ban or restrict methyl bromide’s use in the United States would be costly because currently available alternative control practices are less effective or more expensive than using methyl bromide. The study estimated that the annual economic loss to producers and consumers from banning the agricultural uses of methyl bromide included in this study would be about $1.3 billion to $1.5 billion. Of this amount, $800 million to $900 million would be attributed to the loss of methyl bromide for soil fumigation and $450 million to its loss for the fumigation of quarantine imports. An additional economic loss of about $200 million would occur if Vorlex—the alternative identified as having the most potential for succeeding methyl bromide—were no longer available. (The manufacturer had indicated to EPA that it planned to stop producing Vorlex because of high reregistration costs.) According to the study, a phaseout, rather than an immediate ban, of methyl bromide would postpone annual losses and provide time for potential alternatives to be developed and for consumers and producers to adjust.
The study concluded, however, that the likelihood of developing new, effective fumigant alternatives appears very remote. The results of USDA’s study were presented to EPA as part of the Department’s comments on the agency’s proposed phaseout rule. According to EPA, the study would be a useful analysis if methyl bromide were being banned immediately, but it does not consider alternatives that may be developed before the ban goes into effect. EPA also said that the study considers only alternatives that duplicate methyl bromide’s ability to kill a wide range of pests and that other alternatives could be used in combination to achieve similar results. USDA officials believe that no alternatives are available for many uses.

A 1993 study by the University of California at Berkeley for the California Department of Food and Agriculture examined the role of methyl bromide in the state’s agriculture and the impact on growers of regulatory action to further restrict or ban its use. The University examined background information on the patterns and intensity of methyl bromide’s uses for preplant soil and postharvest fumigation and then used a model to measure the financial impact on California growers of canceling agricultural uses of methyl bromide. According to the University’s report on the study, in the short term, the loss of methyl bromide for preplant soil fumigation would reduce net farm income in California by more than $233.8 million annually. The most significantly affected crops would be strawberries, nursery products (cut flowers and rose, fruit, vine, nut, and strawberry plants), and grapes, and estimated net annual farm income losses would be $105.8 million, $71.7 million, and $31.3 million, respectively. Net income losses reflect differences in production costs from using alternative treatments, which are more costly for some crops, and lower revenues from reduced yields.
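The three crop categories cited account for most of the statewide preplant-fumigation loss the Berkeley study estimated; a quick tally (our arithmetic, using the figures above):

```python
# Estimated net annual farm income losses for the three most affected
# California crop groups (millions of dollars), per the Berkeley study.
losses = {"strawberries": 105.8, "nursery products": 71.7, "grapes": 31.3}

top_three = sum(losses.values())
statewide_total = 233.8                    # reported statewide preplant estimate
other_crops = statewide_total - top_three  # remainder across all other crops
print(f"top three crops: ${top_three:.1f}M; all other crops: ${other_crops:.1f}M")
```

The three groups sum to $208.8 million, leaving roughly $25 million of the $233.8 million statewide estimate attributable to all other crops.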
The report also found that the cancellation of methyl bromide for postharvest applications would have a significant impact on the profitability of California’s fresh fruit and dried nut crops in the short run because fumigation by another method would cost more and take longer. For example, producers of cherries sell their highest-quality fruit on the export market and receive a premium price. If the cancellation of methyl bromide diverts all of the cherries previously sold on the export market to the domestic market, growers will lose $7.3 million annually. Likewise, walnut producers will have to ship more products to the domestic market instead of the holiday markets abroad because alternative techniques could not be used to fumigate the walnuts quickly enough to meet the holiday markets’ needs. As a result, walnut producers would lose about $36.8 million annually. However, according to the study, trade negotiations could, in the long term, remove the requirements for quarantine treatments for cherries or approve alternative techniques. For walnuts, the expansion of holiday markets or earlier harvesting could help meet producers’ needs.

A University of Florida study of the economic impact of losing methyl bromide on Florida’s agriculture concluded that the environment that prevails in the state makes the use of methyl bromide critical to the competitiveness of the state’s fruit and vegetable crops in U.S. and international markets. The University surveyed extension specialists in the production areas and reviewed previous work on methyl bromide to identify existing production systems and possible alternatives to the use of methyl bromide. To analyze the economic impact of the ban, the University developed mathematical models of the North American winter fresh vegetable market and the world market for Florida grapefruit. According to the study, the loss of methyl bromide would have a devastating effect on Florida’s winter fresh vegetable producers.
Because no viable alternatives can be effectively substituted for methyl bromide, Florida is estimated to lose over $620 million in the value of fresh fruit, vegetables, and fresh citrus (measured at the time of shipping), representing over $1 billion in total sales, as well as more than 13,000 jobs. The study concludes that producers in the state would reduce the acreage allocated to these crops by 43 percent, from about 126,000 acres to 71,500 acres. Tomato production would decline by more than 60 percent, pepper production by 63 percent, and cucumber production by 46 percent without methyl bromide. The study also predicted that Mexico, in particular, would expand its production of vegetables, increasing its tomato production by 80 percent and its pepper production by 54 percent because, as a developing country, it was expected to have longer to use methyl bromide in producing and marketing its crops.

Research is currently being conducted by governmental and academic institutions, as well as by the private sector, to ensure that alternative materials and methods will be proven viable and available to the agricultural community before methyl bromide is phased out. Tables II.1 and II.2, together with the accompanying descriptions, briefly profile various alternatives to methyl bromide being evaluated by USDA and other researchers for methyl bromide’s preplant and postharvest end uses and note various concerns that need to be resolved during the 5 years before the ban goes into effect.

1,3-Dichloropropene. A broad-spectrum liquid fumigant comparable to methyl bromide for controlling most soil pests but less effective for controlling weeds. A potential groundwater contaminant. Classified by EPA as a probable human carcinogen. Under special review by EPA because of concerns about cancer for workers and residents in and around treated fields. Use permits previously suspended by California because of health and safety concerns but currently allowed for limited use.

Dazomet.
A broad-spectrum granular fumigant comparable to methyl bromide for controlling most soil pests but can be less effective for controlling nematodes (parasitic worms). Currently registered for some food crops, but approval may not be sought for all uses of methyl bromide (e.g., crops with low production acreage). Small fruit and orchard uses restricted to the propagation or outplanting of nonbearing berry, vine, fruit and nut crops and similar nonbearing plants, according to EPA. Concerns about potential genotoxicity raised by EPA. Releases methyl isothiocyanate (MITC), a potential groundwater contaminant. Concerns expressed by United Nations Environment Programme (UNEP) about contamination of groundwater.

Metam-sodium. A broad-spectrum liquid fumigant comparable to methyl bromide for controlling most soil pests but may be less effective as a nematicide. Identified by EPA as a known teratogen (i.e., cause of developmental malformations). Classified by EPA as a probable human carcinogen. Efficacy dependent on the availability of water (irrigation) to ensure even distribution in the soil. Releases methyl isothiocyanate (MITC), a potential groundwater contaminant. Concerns about contamination of groundwater expressed by EPA and UNEP.

Sodium tetrathiocarbonate. A broad-spectrum liquid fumigant found effective for many soilborne pests but not for weeds. Is considered less effective than methyl bromide for controlling nematodes. Currently registered for use on grapes and citrus and registration being sought for almonds, prunes, and peaches. Efficacy dependent on the availability of water (irrigation) to ensure even distribution in the soil. Concerns about groundwater contamination expressed by UNEP. Groundwater concerns addressed by EPA through label restrictions.

Formalin/formaldehyde. A broad-spectrum granular (paraformaldehyde) or liquid (formalin) fumigant comparable to methyl bromide for controlling fungi but less effective for controlling nematodes and weeds.
Registration voluntarily canceled because of health, safety, and environmental concerns. Efficacy dependent on the availability of water (irrigation) to ensure even distribution in the soil and prevent toxicity to plants.

Chloropicrin. A broad-spectrum liquid fumigant principally used as a fungicide. Comparable to methyl bromide for controlling many soil pests but less effective for controlling nematodes and weeds. Also used for tear gas, has a pungent/noxious odor, and can be very unpleasant or even hazardous to handle. Concerns about toxicity and effects of exposure on humans raised by EPA.

Nonfumigant narrow-spectrum pesticides. Include granular or liquid nonfumigant nematicides, herbicides, and fungicides spread or sprayed on the soil before or after planting to control specific pests (nematodes, weeds, insects, fungi, or bacteria). Less effective than methyl bromide. Registered uses specific to crops and locations, varying from state to state. Some reregistration concerns raised (e.g., registered nematicides such as aldicarb, carbofuran, and oxamyl are potential groundwater contaminants).

Future and preliminary chemical research alternatives. Include new and modified pesticides (e.g., bromonitromethane and carbonyl sulfide) being researched. Will require registration and are in varying stages of research. Will take time to completely develop products and assess their suitability as replacements.

Steam. Technically feasible for soil applications and can be as effective as methyl bromide, depending on methods of application and soil conditions/temperatures. Concerns about viability raised by USDA. May be impractical for large-scale (more than 2-acre) applications because it is labor-, equipment-, and energy-intensive and current estimated costs per acre are about two to five times higher than for methyl bromide. Related equipment and services may not be readily available.
Feasibility dependent in some areas on availability of energy resources and fuel costs, according to EPA.

Solar heating. Technically feasible for soil applications, depending on geographic location and climate. Can be as effective as methyl bromide, depending on application methods and soil conditions/temperatures. Requires long treatment periods and may therefore be impractical for sterilizing soil in areas with short growing seasons (e.g., northern United States). Is likely, for the most part, to be used in combination with other alternatives (e.g., soil fumigants) rather than by itself.

Hydroponics. Relatively new plant production systems that eliminate soilborne pests by eliminating soil as the growing medium. Instead, technology uses water-retaining substrates to deliver nutrients. Cannot be used for root crops (e.g., carrots), can have high start-up costs, requires significant support services, and, in the long run, could take many years to become widely accepted and economical.

Organic matter. Incorporates soil amendments, such as compost, green waste, straw, sawdust, and animal manure, into the soil to build soil health and control some soilborne pests (e.g., nematodes and weeds). Information on efficacy generally lacking. Some amendments as or more effective than some nonfumigant pesticide alternatives used to control nematodes and possibly viable for use in combined treatments.

Plant modification. Includes techniques such as crossbreeding plants, grafting orchard and vineyard rootstocks, and changing plants’ genetic makeup to obtain high resistance to pests and desirable production characteristics. Extensive research required to determine potential of some techniques as alternatives. Considered an important source of viable alternatives by USDA and as having an already demonstrated potential in breeding plants for pest resistance.

Crop rotation. Can be effective in suppressing damage by soilborne pests.
Effectiveness can be improved by including plants that produce fungicidal and nematicidal substances. Limitations include land availability and required knowledge of pest dynamics, general ecology, and appropriate rotational crops in specific production areas. Research under way to address these concerns.

Future and preliminary nonchemical research alternatives. Include biocontrol methods (e.g., egg-destroying fungi) and genetic engineering (e.g., altering organisms to control plant pathogens). Registration and further research required for most. Time needed to complete development and assess suitability as replacements.

Integrated pest management. Prevents pest populations from reaching damaging levels through the use of chemical and/or nonchemical treatments and management practices, as appropriate. Requires strict monitoring of pest populations and knowledge of soil ecosystem/crop production interactions. For effective implementation, requires intensive research, training for growers, and use of some chemical control methods that require regulatory approval and may involve health, safety, and environmental concerns. Research needed to determine effective combinations. Choices potentially limited by concerns about registering or reregistering chemicals.

Phosphine. A gas produced when aluminum or magnesium phosphide is exposed to moisture. Primarily used to fumigate grains but can be used to control numerous pests on a wide variety of commodities and in some structures. Commodities include raw agricultural foods (e.g., grains and almonds), processed foods (e.g., cereal flours), animal feeds, and nonfood commodities (e.g., tobacco). Structural uses include disinfesting grain storage facilities, such as silos and grain bins, and other structures that are not sensitive to phosphine’s highly corrosive properties, which can damage switches or electronic equipment. Also used as a quarantine treatment for nonfood commodities, such as tobacco exports and cotton products.
Effectiveness comparable to methyl bromide’s for allowed treatments. Not suitable for some agricultural commodities (e.g., toxic to fresh fruits and vegetables and can decrease efficiencies when longer treatment times are required, according to USDA). Poses concerns for EPA about effects of exposure on workers, mutagenicity, and neurotoxicity. Risk of corrosion can be reduced and penetration and toxicity can be enhanced by combining low doses with heat and carbon dioxide, according to EPA.

Sulfuryl fluoride. Applied as a liquid that converts to a gas and can be used for some nonfood quarantine treatments and for disinfesting some structures empty of food and food products. Effectiveness comparable to methyl bromide’s but poses concerns for EPA about mutagenicity, carcinogenicity, and reproductive effects.

Dichlorvos. A volatile liquid compound with limited penetrative powers. Used primarily to control pests in nonperishable foods (e.g., dried fruits and nuts, grains, and milled products) stored in warehouses, including raw and processed products. Classified by EPA as a possible human carcinogen and under special review because of concerns about neurotoxicity and carcinogenicity.

Previously used/limited-use alternatives. Include ethylene oxide and other quarantine fumigants (hydrogen cyanide, ethylene dibromide, carbon disulfide, and ethylene dichloride) that pose concerns about health and safety. As effective as methyl bromide for quarantine treatments, but may need emergency-use permits such as USDA formerly obtained to control specific pests on specified commodities. Also include methyl bromide recovery systems being researched for quarantine applications, since use of the recycled chemical is not banned after 2001. Preliminary research indicates feasibility of designing fumigation chambers to achieve 95-percent recovery.
But full development of these systems may extend beyond 2001 and poses liability concerns involving yet-to-be-established operational and performance tolerances.

Irradiation. Uses low-level gamma radiation to sterilize or kill pests in quarantine and nonquarantine applications. Can be used on most foods and grains and can be equal in effectiveness to methyl bromide. Requires considerable investment in facilities and equipment, entails additional costs to dispose of spent cobalt, and poses capacity limitation concerns. USDA concerned about some commodities’ sensitivity to treatment. Still requires USDA’s approval for quarantine uses, and public’s acceptance is uncertain.

Controlled/modified atmosphere. Uses decreased amounts of oxygen and/or increased amounts of carbon dioxide or nitrogen to suffocate pests. May require sealed facilities. Has most potential for treating nonperishable commodities. Use in combination with other treatments being evaluated for improving efficacy levels. Requirements for sealing facilities and long treatment times can pose cost considerations. Controlled atmospheres and low temperatures used more cost-effectively than methyl bromide by the Department of Defense to successfully ship perishables, according to EPA.

Thermotherapy. Can be used to control a broad spectrum of pests infesting commodities and structures and is comparable in effectiveness to methyl bromide. Treatments include vapor heat, dry heat, hot water, quick freeze, and cold. Length of required treatment, treatment facility’s size, and commodities’ sensitivities to temperature pose limitations. Experimentation begun with various techniques. Combination treatments likely to be required for some combinations of pests.

Combination treatments. Chemical and/or nonchemical combinations potentially usable to control pests on many commodities and in quarantine treatments. Combinations not yet identified for all commodities or pests.
Chemical and nonchemical pest control combinations indicate the best potential for controlling pests now managed by methyl bromide, according to EPA.

Listed below are recent studies and reports that provide more detailed information on these and other potential alternatives for methyl bromide’s many agricultural uses, the status of their availability as viable substitutes, and research priorities for meeting users’ short-, mid-, and long-term needs.

Alternatives to Methyl Bromide: Research Needs for California, California Department of Food and Agriculture (Sacramento: Sept. 1995).
Status of Methyl Bromide Alternatives Research Activities, Crop Protection Coalition (July 1995).
Alternatives to Methyl Bromide: Ten Case Studies—Soil, Commodity, and Structural Use, EPA, EPA430-R-95-009 (Washington, D.C.: July 1995).
Out of the Frying Pan, Avoiding the Fire: Ending the Use of Methyl Bromide—An Analysis of Methyl Bromide Use in California and the Alternatives, Ozone Action, Inc. (Washington, D.C.: 1995).
1994 Report of the Methyl Bromide Technical Options Committee for the 1995 Assessment of the Montreal Protocol on Substances That Deplete the Ozone Layer, UNEP (Nairobi, Kenya: Nov. 1994).
Annual International Research Conference on Methyl Bromide Alternatives and Emissions Reductions, sponsored by Methyl Bromide Alternatives Outreach (Orlando, Fla.: Nov. 1994).
Alternatives to Methyl Bromide, ICF Incorporated for EPA (Washington, D.C.: Sept. 1993).
Alternatives to Methyl Bromide: Assessment of Research Needs and Priorities, USDA (Arlington, Va.: June/July 1993).
Methyl Bromide Substitutes and Alternatives: A Research Agenda for the 1990s, USDA (Arlington, Va.: Jan. 1993).

The Ranking Minority Member of the House Committee on Commerce asked that we review the concerns of the U.S. Department of Agriculture and the agricultural community about phasing out the U.S. production and importation of methyl bromide.
Specifically, we agreed to develop information on (1) the scientific evidence that human uses of methyl bromide contribute to the depletion of the stratospheric ozone layer, (2) the availability of economical and effective alternatives to methyl bromide, (3) the impact of the ban on U.S. trade in agricultural commodities, and (4) EPA’s authority under the Clean Air Act, as amended, to grant exemptions to the ban for essential uses. We conducted our work from November 1994 through November 1995 in accordance with generally accepted government auditing standards. To review the scientific evidence, we consulted the reports of the United Nations Environment Programme (UNEP) on its 1991, 1992 (update of 1991), and 1994 scientific assessments of ozone depletion. We discussed the results of these studies with the Associate Director of Environment, Office of Science and Technology Policy in the Executive Office of the President and with scientists at the National Aeronautics and Space Administration who participated in the 1994 assessment. We also discussed the results with officials of USDA and EPA, including EPA’s Methyl Bromide Program Director. We further discussed the scientific evidence with the Methyl Bromide Working Group, which was formed by methyl bromide producers and distributors to address scientific issues related to the phaseout, and with the Crop Protection Coalition, which represents methyl bromide users. Finally, we discussed the phaseout with a representative of the Natural Resources Defense Council, which is coordinating methyl bromide issues for various environmental groups, including the Friends of the Earth and the Environmental Defense Fund. In addition, we reviewed scientific studies, reports, and other information either prepared by EPA or submitted by others during EPA’s promulgation of the methyl bromide phaseout rule. Furthermore, we attended the “1995 Methyl Bromide State of the Science Workshop” held in June 1995. 
At the conference, which was sponsored by the Methyl Bromide Global Coalition in cooperation with the National Aeronautics and Space Administration, various papers were presented on the latest research developments. At EPA, we discussed concerns about alternatives to methyl bromide with officials of the Stratospheric Protection Division and Office of Pesticide Programs. At USDA, we interviewed officials of the Agricultural Research Service, Economic Research Service, and Animal and Plant Health Inspection Service, including the Chair of USDA’s Ad Hoc Committee for Alternatives to Methyl Bromide. We further discussed substitutes for and alternatives to methyl bromide with the Methyl Bromide Working Group, the Crop Protection Coalition, the California Strawberry Commission, and several strawberry growers in California. In addition, we reviewed studies, reports, and other information on the availability and suitability of substitutes and alternatives provided by these officials. We also reviewed the assessment reports of UNEP’s Technology and Economics Assessment Panel, Methyl Bromide Technical Options Committee, and Economics Committee and attended the “Annual International Research Conference on Methyl Bromide Alternatives and Emissions Reductions,” which was held in November 1994. Furthermore, we reviewed the applicable EPA supporting documents and the information submitted to the agency during the promulgation of the phaseout rule. We discussed the trade implications of the phaseout with officials of USDA’s Economic Research Service, Agricultural Research Service, and Animal and Plant Health Inspection Service; EPA’s Methyl Bromide Program; the Crop Protection Coalition; and the Methyl Bromide Working Group. In addition, we reviewed studies, reports, and other documents prepared by these organizations on the phaseout’s effects on trade in agricultural commodities.
We also reviewed the 1994 assessment reports of UNEP’s Technology and Economics Assessment Panel, Methyl Bromide Technical Options Committee, and Economics Committee. Finally, we obtained information from the Animal and Plant Health Inspection Service on U.S. imports and exports of commodities treated with methyl bromide. To determine whether the Clean Air Act provides EPA with the authority to grant essential use exemptions to the phaseout rule, our Office of General Counsel reviewed the Clean Air Act and its legislative history. Richard P. Johnson, Attorney Advisor

GAO provided information on the phaseout of methyl bromide in the United States, focusing on the: (1) scientific evidence that emissions of methyl bromide are depleting the ozone layer; (2) availability of economical and effective alternatives to the pesticide; (3) effects of banning the pesticide on U.S. trade in agricultural commodities; and (4) Environmental Protection Agency's (EPA) authority under the Clean Air Act to exempt essential uses of methyl bromide from the phaseout.
GAO found that: (1) world scientists participating in the United Nation's Environment Programme believe that emissions of methyl bromide contribute significantly to ozone depletion; (2) although several chemical and nonchemical pest-control alternatives to methyl bromide are available, none are as economical and effective as methyl bromide; (3) if other countries continue to use methyl bromide after it is phased out in the United States, they will have an unfair advantage in international markets for the various agricultural commodities produced with the substance; and (4) the Clean Air Act does not authorize EPA to grant exemptions on producing and importing methyl bromide except for use in medical devices and for export to developing countries that have signed the Montreal Protocol. |
The FAR states that a BPA is a simplified method of filling anticipated repetitive needs for supplies or services that functions as a “charge account,” with terms and conditions agreed upon when the BPA is established. A BPA is not a contract; therefore, the government is not obligated to purchase a minimum quantity or dollar amount, and the contractor is not obligated to perform until it accepts an order under the BPA. BPAs do not obligate funds; funds are obligated when an order subsequently is placed. Agencies may establish BPAs under GSA’s Schedule program contracts. Subpart 8.4 of the FAR provides procedures for using GSA schedule contracts, including establishing and ordering from BPAs. Even before the FAR was issued in 1984 as the governmentwide procurement regulation, “blanket purchase arrangements,” a vehicle similar to BPAs, had been permitted with schedule contractors as early as the 1950s, provided they were not inconsistent with the terms of the schedule contract. Schedule BPAs use the pre-established terms and conditions of the GSA contract (such as prices and delivery terms) as a starting point, but ordering agencies may add terms and conditions, such as discounted pricing, as long as they do not conflict with those of the GSA contract. Each schedule BPA must address the frequency with which orders will be placed; invoicing procedures; discounts; delivery locations and times; and requirements, such as the amount or quantity the agency expects to purchase under the BPA and the work the vendor will perform. The potential volume of orders under a BPA, as indicated by the estimated amount or quantity, provides an opportunity to seek discounts from the GSA schedule contract prices. From the first issuance of the FAR until 1994, agencies establishing schedule BPAs were required to follow the simplified acquisition procedures of Part 13, which emphasized “adequate” or “maximum practicable competition” at the time orders were placed.
From 1994 until 1997, the FAR and subsequent GAO bid protest decisions indicated that the policies and procedures of Part 13 did not apply to schedule BPAs and that agencies were to follow the procedures of Subpart 8.4 for placing orders on schedule BPAs, but not for their establishment. Beginning in 1997, the FAR applied the ordering procedures in Subpart 8.4 to the establishment of schedule BPAs, including such steps as considering information about the supply or service offered under schedule contracts or reviewing the catalogs of schedule contractors. It also encouraged agencies to seek discounts when establishing schedule BPAs. A 2004 amendment to the FAR clarified the BPA ordering procedures under Subpart 8.4 and explicitly required agencies to seek discounts. The FAR currently requires federal agencies to seek price reductions from vendors’ schedule prices and to follow certain procedures when establishing schedule BPAs. The procedures to be followed depend on whether the BPA will be used to purchase a product or a service performed for a fixed price, or a service performed at an hourly rate, which requires a statement of work. Procedures for establishing schedule BPAs and seeking discounts are depicted in figure 1. Agencies may award a schedule BPA to a single vendor or to multiple vendors to fulfill the same requirement. The decision is to be based on a strategy that is expected to maximize the effectiveness of the BPA(s). The FAR states that, in determining how many BPAs to establish, contracting officers are to consider: the scope and complexity of the requirement(s); the need to periodically compare multiple technical approaches or prices; the administrative costs of BPAs; and the technical qualifications of the schedule contractors. After the BPA is established, requirements vary for considering more than one vendor when placing orders, as shown in table 1.
DOD is required to adhere to more stringent competition requirements than are at present applicable to civilian agencies. Section 803 of the National Defense Authorization Act for Fiscal Year 2002 directed DOD to amend its regulations to require that any purchase of services exceeding $100,000 under a multiple award contract be made on a competitive basis, subject to limited exceptions. DOD’s implementation of this provision extended the competition requirement to orders under multiple award BPAs. Hence, for such orders exceeding $100,000, DOD contracting officers are required to either (1) notify as many schedule contractors as practicable of the purchase to reasonably ensure that offers would be received from at least three contractors and receive three offers (or determine in writing that no additional contractors could be identified that can fulfill the requirement) or (2) notify all contractors offering the required services under the applicable schedule and afford all responding contractors a fair opportunity to submit an offer and have that offer fairly considered. Congress recently took action to apply multiple award competition requirements that are similar to those in the 2002 statute to all executive agencies. The implementing regulations have not yet been promulgated. We estimate that the federal government obligated between $3.7 billion and $7.9 billion by placing orders under schedule BPAs during fiscal year 2008. Civilian agencies reported spending almost $3.2 billion under schedule BPAs, with the five civilian agencies in our review obligating almost $2.3 billion of this amount, or almost 72 percent of total civilian agency obligations. Although orders under schedule BPAs (for goods and services) comprised only about 2.3 percent of civilian agencies’ reported obligations during fiscal year 2008, usage of schedule BPAs by civilian agencies has grown substantially over time, by 382 percent from fiscal year 2004 to 2008 ($659 million to $3.2 billion). 
We were unable to develop similar trend information for DOD because the necessary data were unavailable, but for fiscal year 2008, we estimate that DOD obligated between $0.5 billion and $4.7 billion under schedule BPAs. Unlike civilian agencies, DOD does not use the available fields in FPDS-NG to distinguish its schedule BPAs from its traditional BPAs and indefinite delivery contracts. Therefore, we could not readily determine DOD’s overall usage of schedule BPAs or what DOD is buying under these BPAs. We attempted to use data from DOD’s own procurement database (the DD350 database) and information from defense agency officials to identify schedule BPAs, but found additional inaccuracies. For example, we identified possible schedule BPAs for several Army organizations with obligations totaling roughly $319.8 million in fiscal year 2007. However, after further review and consultation with Army officials, we found that only about 16 percent of this amount had actually been obligated under schedule BPAs. A DOD acquisition official informed us that the department is taking actions to implement new reporting procedures in FPDS-NG. Civilian agencies’ use of schedule BPAs to purchase services has grown far faster in recent years than their services contracting overall. From fiscal years 2004 to 2008, civilian agency schedule BPA obligations for services increased by 475 percent, compared with a slight decline in their overall services contracting. In addition, civilian agency schedule BPA purchases of services increased far more than their purchases of goods during the same time period. Figure 2 illustrates the trend in civilian agency obligations under schedule BPAs for products and services. We could not perform a similar analysis for DOD because of the data issue discussed earlier. The majority of schedule BPAs in our sample—74 percent of the 336 DOD and civilian agency BPAs we reviewed—were established to acquire services as opposed to goods.
The most frequently cited broad categories of services in the BPAs we reviewed were management support services, other professional services, and program management/support services. The estimated purchase amounts when the BPAs were established ranged from $10,000 to $734 million, with the average estimated dollar amount just over $64 million. Specific examples of services acquired under the schedule BPAs in our review include: a BPA established by DHS for a range of acquisition support services, including drafting performance work statements and quality assurance surveillance plans; a BPA established by the Federal Emergency Management Agency (FEMA) to provide program management support for implementing the Pre-Disaster Mitigation Program; several BPAs established by the Navy to obtain analytical support for budget formulation and execution and other activities; and a BPA established by the Food Safety and Inspection Service to obtain court reporting services. When the agencies established the BPAs in our sample to acquire goods, the most frequently cited categories were for data processing software; software and system configuration; and printing, duplicating, and bookbinding. For example, the Social Security Administration established a BPA to buy color copiers, and the Navy established a BPA to obtain software. Other examples of goods purchased through schedule BPAs in our sample include: special purpose boats purchased by the Coast Guard for various law enforcement purposes; body armor purchased by the Air Force; laboratory equipment and supplies purchased by the Department of Health and Human Services; and fire engines purchased by the Forest Service. In addition to saying they use schedule BPAs to fulfill recurring needs, many of the contracting officials we spoke with cited BPAs’ flexibility and the speed with which they can be used as reasons they chose to use them as opposed to other contract vehicles, such as indefinite delivery contracts.
Several contracting officials noted that schedule BPAs do not require the government to commit to any minimum dollar obligation or amount, as would an indefinite delivery/indefinite quantity contract. For example, a contract specialist at DHS’s Immigration and Customs Enforcement explained that his office does not receive funding, and therefore cannot obligate funds, until the budget is passed; in recent years, this has occurred during the second quarter of the fiscal year. Using a schedule BPA allows his office to be ready whenever it receives funds. Also, a contracting officer at FEMA said that she can establish a schedule BPA and have it ready for use when the agency has to respond to natural disasters and to conduct recovery operations without having to guarantee a minimum amount. However, some contracting officials noted that the lack of a binding contract can be a potential negative, since a vendor can decline to accept an order. One contracting official at the Marine Corps said that he prefers to have multiple BPA holders to ensure that vendors are available to meet the demand for goods and services. Contracting officers also indicated that the speed with which they can both establish BPAs and place orders under them is an advantage. For example, a contracting officer at the Marine Corps noted that schedule BPAs do not take a long time to negotiate because the solicitation process is streamlined and contracting officers are not required to advertise the solicitation on FedBizOpps, the Web site where government business opportunities greater than $25,000 are posted. As a result, he said it usually takes him a month or less to establish a schedule BPA, whereas it frequently takes him 3 to 4 months to award an indefinite delivery contract. Some contracting officers also told us that the ability to place BPA orders without competition is an advantage in terms of time saved. 
For example, a contracting officer at the Centers for Disease Control noted that a schedule BPA that has a broad scope of work makes it unnecessary to conduct a time-consuming competition each time he wants to place an order. A contracting officer at the Food Safety and Inspection Service stated that she can place orders under a single award BPA without further competition in less time than would be needed to meet the competition requirements for ordering directly from a GSA schedule contract. Some agencies also use schedule BPAs to help meet their small business goals. A contracting officer at the Social Security Administration told us that he uses schedule BPAs in part because there are many companies on the GSA schedule that meet the requirements of the Small Business Administration’s 8(a) business development program. We also reviewed a number of schedule BPAs, established by the Air Force to provide a wide range of advisory and assistance services, that involved teams of vendors often led by small businesses serving as prime contractors. Agencies in our sample competed BPAs when establishing them—meaning that, for purposes of this report, contracting officers considered more than one vendor—64 percent of the time. For a small number of BPAs in our sample (12 percent) contracting officers documented their rationale for not competing. We found no evidence that the remainder, 24 percent of the BPAs in our sample, were competed. For instance, at the National Institutes of Health (NIH), we found no evidence that 18 of the BPAs included in our sample were competed when established. Competition is the cornerstone of the acquisition system, and the benefits of competition are well established. It saves the taxpayer money, improves contractor performance, curbs fraud, and promotes accountability for results. 
When orders are placed under GSA schedule contracts, the FAR allows contracting officers to limit the number of vendors they consider, which includes considering only one vendor. However, the FAR does not explicitly apply this provision to the establishment of BPAs. The FAR specifically lists some examples of circumstances in which limited competition may be justified, including instances when (1) the work is unique or specialized in nature and only one source is capable of responding; (2) the new work is a logical follow-on to a previous requirement; or (3) an urgent and compelling need exists. In assessing agencies’ rationale for awarding BPAs directly to vendors without competition, we found justifications for doing so that were based on each of these circumstances. For example: Agencies purchased software from vendors who were the sole authorized vendors holding a GSA schedule contract. The Social Security Administration awarded a BPA for program management, technical management, and administrative support because it was a logical follow-on to previous work. The Coast Guard awarded a BPA to bridge the gap between the expiration of one contract and the competitive award of the next contract. In addition, we found four instances in which schedule BPAs were issued directly to one vendor because the vendor was designated as a small business or as an Alaska Native Corporation-owned business. However, we also found examples of justifications for awarding BPAs directly to one vendor that are not specifically mentioned in the FAR, some of which may not conform with sound procurement policy. A Navy contracting officer stated that it was not necessary to compete a BPA for engineering and technical services because GSA had already determined the vendor’s schedule pricing to be fair and reasonable. 
In two instances at the Justice Department, the contracting officer in one case stated that the vendor had performed well on a previous BPA, and in the other, that the vendor provided a deep discount. We discussed the lack of clarity regarding the applicability of FAR provisions regarding limiting competition when establishing BPAs with officials from the Office of Federal Procurement Policy. They agreed that action is needed to clarify the relevant provisions of the FAR and noted that discussions are ongoing regarding implementation of the provisions of section 863 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, regarding competition requirements under multiple award contracts. The FAR allows a contracting officer to decide whether to award a BPA to a single vendor or to multiple vendors for the same requirement. In determining how many BPAs to establish, the contracting officer is to consider such factors as the scope and complexity of the requirement and the administrative costs of the BPA. Over half of the BPAs in our sample (60 percent or 200) were single-award BPAs, and of these, we found no evidence of competition when the BPA was established for 19 percent or 37 of them. One of the single award BPAs, established in 2004, for which we found no evidence of competition had an estimated amount of nearly $60 million. Further, once a single award BPA is established, all orders may be issued directly with the vendor without additional competition. We found this to be the case for the vast majority of orders under the single award BPAs in our sample; only 10 percent had been competed. Indeed, a number of contracting officers we spoke with cited this feature of single award BPAs as an advantage. The dollar value of some of the non-competed orders was fairly significant; 45 of the orders not competed under single award BPAs were greater than $1 million. 
For instance, DHS issued one of these orders for $37.6 million for professional information analysis and intelligence support, and the Coast Guard issued a $13.1 million order for network integration, software, and system integration support services. Agencies established a number of single award BPAs of fairly long duration, resulting in an extended period of time under which orders could be placed without additional competition. The FAR currently suggests that schedule BPAs should not exceed five years in length, but permits BPAs of longer duration. We found 28 instances in which agencies established single award BPAs with durations of at least 6 years, with a few single award BPAs in place for longer than 10 years, and one for over 20 years. Furthermore, of these 28 instances, we found evidence that competition occurred when establishing the BPAs in only ten cases and that competition occurred when placing orders in only four of the cases. The FAR requires agencies to follow specific procedures to compete orders under multiple award BPAs that exceed the micropurchase threshold ($3,000). Specifically, agencies are required to forward requirements, or statements of work and evaluation criteria, to an “appropriate number” of BPA holders, with the determination of what constitutes an appropriate number left to the discretion of the contracting officer. Contracting officers competed 49 percent of the orders above the micropurchase threshold with more than one vendor under the multiple award BPAs we reviewed. For 32 percent of the orders, contracting officers placed the order directly with one vendor and did not compete it with other vendors, the appropriate number effectively being one. For example, the Department of Agriculture did not compete an order worth $1.2 million for fire engines. For the remaining 19 percent of these orders, we found no evidence that contracting officers competed the order with more than one vendor. 
We found no evidence, for example, to suggest that DHS competed a $2.1 million order under one of its multiple award BPAs for information technology. Figure 3 shows the percentage of schedule BPAs in our sample that were established with a single vendor or with multiple vendors and the dollar value of orders competed and not competed for each type. The defense supplement to the FAR contains additional competition requirements for DOD, specifically that DOD compete orders under schedule BPAs exceeding $100,000 or justify the award if an order is not competed. Of the 37 orders subject to this requirement included in our sample, DOD competed or properly justified as sole source 28 of them. For the remaining 9 orders, there was no evidence of competition. Recent legislation directs that acquisition regulations be amended to require executive agencies to place on a competitive basis any order exceeding $100,000 that is made under a multiple award contract, but the implementing regulations are still pending and the extent to which this requirement will apply to orders under schedule BPAs is not certain. Agencies frequently did not seek discounts when establishing schedule BPAs and rarely tried to obtain better pricing when placing orders. We found no evidence that agencies requested a discount for 47 percent of the BPAs we reviewed, even though GSA notes that agencies’ ability to negotiate discounts from schedule prices by leveraging their buying power through larger volume purchasing is one of the advantages of using schedule BPAs. By not requesting discounts when establishing schedule BPAs, agencies are missing opportunities to save money. Agencies frequently received discounts from GSA schedule prices if they requested them when establishing BPAs. For the 179 BPAs in our sample for which agencies requested discounts, discounts were received for 75 percent of them. 
For example, the Department of Justice requested and subsequently received an 18 percent discount from the vendor’s GSA schedule pricing for a BPA used to procure information technology services. This discount saved the government roughly $20 million from fiscal year 2006, when the BPA was established, through July 2009 when the last order was placed, based on obligation data in FPDS-NG. In another instance, two BPAs awarded to the same vendor highlight the importance of requesting discounts. A contracting officer requested a discount when establishing a Navy BPA for analytical support services, and the vendor provided a 5 percent discount from its GSA schedule prices. In contrast, when establishing a different BPA for similar services from the same vendor, the contracting officer did not seek or receive a discount. If he had done so and received the same 5 percent discount applied to orders placed during the life of the BPA, the Navy would have saved almost $87,000. In another instance, a contracting officer at the Marine Corps did not seek, and subsequently did not receive, a discount for a schedule BPA whose estimated value was $205 million. Some contracting officers did not appear to understand the current requirement to seek discounts. In some cases, their rationale for not seeking discounts was based upon the statement in the FAR that GSA has already determined prices in the underlying schedule contract to be “fair and reasonable.” However, this FAR statement addresses the fact that ordering activities are not required to conduct additional price analyses when ordering supplies and services not requiring a statement of work under the GSA schedule contracts. It does not negate the requirement to seek discounts when establishing schedule BPAs, which is clearly stated in FAR Subpart 8.4. 
One contracting officer said that using competition when establishing the BPA is the more significant determining factor for pricing, and thus he did not focus specifically on requesting discounts. Contracting officers who did request a discount usually included such language in the solicitation when establishing the BPA. In some cases, the contracting officer even made the offer of a discount a condition for awarding the BPAs—in effect, demanding a discount. For example, the request for quotation for two Navy BPAs stated, “Quoted prices, inclusive of fees must be discounted below GSA schedule prices.” The Department of Agriculture included the following statement in the request for quotation for one BPA: “Provide a proposed discount off your normal GSA schedule rates for the entire BPA period of performance.” The Air Force stated the following: “the contractor is expected to offer their (sic) best prices at or below the schedule price list.” In other cases, the request for a discount was more tentative. For example, the solicitation for a FEMA BPA stated, “the Government requests that you consider offering a discount percentage beyond the GSA Schedule pricing…” In a few instances, the contracting officer requested discounts via email or during negotiations. For example, a DHS contracting officer requested discounts during negotiations to establish a schedule BPA to provide technical support services to the Office of Immigration Statistics. The discounts agencies received when establishing BPAs varied widely. Vendors sometimes offered a single, flat rate discount for all items offered under the BPA, but we found it was more common for vendors to offer a range of discounts, with some goods or services more heavily discounted than others. Vendors’ flat rate discounts usually fell between 1 and 10 percent. For instance, the Air Force obtained a 10 percent discount when establishing an estimated $99 million BPA to obtain advisory and assistance services. 
Some discounts were larger. DHS, for example, received a 76 percent flat rate discount on a $22 million BPA established to purchase software and services. When vendors provided ranges of discounts, the minimum discount was most often between zero and 10 percent, while maximum discounts were more dispersed, with a majority ranging up to 30 percent. Some BPAs included discounts that varied by volume, while others included discounts that varied according to the product or service offered. For example, under a Department of Agriculture schedule BPA for software and associated maintenance, the vendor provided discounts ranging from 5 percent on a single order up to $250,000 to 20 percent on a single order over $1 million. The Social Security Administration received discounts ranging from 15 percent off labor rates to 91 percent off software under one of its schedule BPAs used to purchase software, maintenance, consulting services, and training. Figure 4 demonstrates the wide range of discounts received by each of the agencies in our sample.

In addition to requesting a discount at the time the schedule BPA is established, agencies can request additional discounts when they issue orders, although the FAR does not require them to do so. The agencies in our review infrequently requested discounts when placing orders. Of the 352 orders we reviewed, agencies clearly requested discounts for 51 of them. Contracting officers indicated that their rationale for not seeking additional discounts when placing orders was the fact that pricing was already established at the time the BPAs were awarded. As with discounts at the time a BPA is established, we found that agencies were more likely to receive discounts when they specifically requested them than when they did not. In some cases, agencies had negotiated discounts when establishing the schedule BPAs and were also able to obtain further discounts for orders.
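Savings from such discounts reduce to simple percentage arithmetic: savings equal obligations multiplied by the discount rate. The short sketch below illustrates the calculation, using hypothetical dollar figures rather than amounts from the BPAs in our sample:

```python
# Illustrative sketch of the discount arithmetic discussed in this section.
# The dollar amounts below are hypothetical, not drawn from the reviewed BPAs.
def discount_savings(obligations: float, discount_rate: float) -> float:
    """Dollars saved when a negotiated discount rate is applied to obligations."""
    return obligations * discount_rate

# A 5 percent discount applied to $1.7 million in orders:
print(f"${discount_savings(1_700_000, 0.05):,.0f}")    # prints $85,000

# A 2 percent discount on $200 million in obligations:
print(f"${discount_savings(200_000_000, 0.02):,.0f}")  # prints $4,000,000
```

Even a small percentage discount yields substantial savings when obligations run into the hundreds of millions of dollars, which is why the FAR requires ordering activities to seek discounts when establishing schedule BPAs.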
In the Department of Justice example noted above, in which the vendor provided an 18 percent discount for the BPA, the contracting officer received an additional 10 percent discount for a $6.3 million order, saving $630,000. A vendor also provided an additional 45 percent discount for a $2.6 million order under a BPA for which the Social Security Administration had already received discounts when the BPA was established, resulting in a dollar savings of over $1 million. As an illustrative example of the potential for savings, had the contracting officer in another case we reviewed negotiated even a 2 percent discount for a BPA with an estimated amount of $205 million, the government would have saved nearly $4 million based on obligations under this BPA since its establishment in 2005.

Contracting officers conducted annual reviews that addressed all of the required FAR elements for only 19 of the 320 BPAs in our sample that required an annual review. A number of contracting officers stated that they were unfamiliar with the FAR's specific annual review requirements for schedule BPAs. Contracting officers also cited heavy workloads and a lack of acquisition personnel as additional reasons for not conducting annual reviews. Some contracting officers said that they did not know the requirement existed at all. For example, when we asked if an annual review had been conducted, one contracting official at FEMA asked to which FAR requirements we were referring.

In 63 additional instances, agencies did not complete all of the required elements of the annual review. In some cases, these reviews occurred while the contracting officers were conducting other activities associated with the BPAs. One contracting officer at the Department of Agriculture conducted a best value assessment when exercising an option year and verified that the GSA schedule contract was still in effect but did not document whether the original estimated BPA amount had been exceeded.
In a case from the Department of Justice, a contracting officer verified that the GSA schedule contract was still in effect when issuing modifications to the BPA but did not address any of the other required elements of the annual review. In other cases, contracting officers cited parts of the FAR outside of Subpart 8.4 when conducting their annual reviews and in doing so often did not complete required elements. When conducting annual reviews following sections of the FAR other than Part 8, contracting officers often did not verify that the underlying schedule contract was still in effect or check to see if obligations under the BPA had exceeded the estimated amount. Some also failed to conduct a best value assessment that would inform decisions about whether the BPA should be continued. In seven cases, contracting officers at the Department of Agriculture and the Coast Guard documented annual reviews using FAR Section 13.303, which covers traditional BPAs, but did not make sure that the GSA schedule contract was still valid and did not determine whether the estimated amounts of the BPAs had been exceeded. In four other cases, contracting officers cited FAR Part 17, which covers the exercise of options, when conducting an annual review. In another case, a contracting officer cited FAR Section 16.702, which covers basic agreements.

We found only two contracting activities that regularly conducted some sort of annual review. Contracting officers at DHS's Citizenship and Immigration Services conducted annual reviews for 10 of the agency's schedule BPAs we reviewed, although not all contained each of the required elements. The head of the contracting office attributed the consistency to an extremely low staff turnover rate as well as a mandatory back-up system that ensures that staff members' workloads are always covered.
In addition, all but one of the BPA files we reviewed at NIH contained some form of annual review, although, again, not all elements were always covered. An official responsible for NIH's BPA program told us that contracting officials generally conducted the annual reviews to identify and terminate BPAs that were not being used.

The required annual reviews present contracting officers with an opportunity to assess whether the BPA still represents the best value and to identify additional opportunities for discounts by determining whether the quantities or amounts estimated when the BPA was established have been exceeded and whether additional price reductions can be obtained. In some of the instances in which contracting officers conducted annual reviews, they determined in a variety of ways that the schedule BPAs still represented the best value to the government. For example, an annual review for a Health and Human Services BPA noted that the BPA still filled an existing need and provided continuity of service. A contracting officer managing a Marine Corps BPA checked that prices were still reasonable, while other annual reviews assessed whether market conditions had changed since the BPA was established. A contracting officer at the Social Security Administration used an annual review to obtain discounts. While conducting the review, he determined that the original estimated amount had been exceeded and successfully obtained discounts from the vendor. Contracting officers also informed us of instances in which conducting annual reviews helped them to better manage schedule BPAs. One contracting officer stated that when conducting annual reviews, she has occasionally found that prices for the schedule BPA have escalated, which led her to cease using those BPAs. By not conducting annual reviews, contracting officers missed opportunities for additional savings.
For some of the BPAs in our sample, the BPA amount originally estimated had been exceeded, but because annual reviews were not conducted, agencies missed opportunities to obtain discounts. For instance, orders under a BPA established by the Marine Corps exceeded the BPA's estimated amount within the third year of a 10-year period of performance. Had the contracting officer conducted an annual review, he might have been able to use the volume of purchases as leverage to negotiate better prices with the vendor.

As part of the annual review process, contracting officers are required to verify that the underlying GSA schedule contract—under which the BPA is established—is still in effect. BPAs established under the schedule contract using the procedures of Subpart 8.4 are considered to be issued using full and open competition. Thus, orders properly placed under a valid schedule contract, whether directly or via a BPA, meet the requirements for competition under the Competition in Contracting Act (CICA) of 1984. In the absence of a valid schedule contract, any order placed using a schedule BPA does not meet those competition requirements, unless the procedures used to obtain the order independently satisfy the CICA requirements. Accordingly, if the underlying schedule contract has expired, subsequent orders using the schedule BPA may not be valid. Among the BPAs and orders we reviewed, we found one instance in which CICA was potentially violated. The underlying schedule contract for a Defense Logistics Agency BPA, under which the Navy placed an order in our sample, had expired. Although the Navy considered more than one schedule vendor when placing the order, this situation still involves a potential CICA violation because the underlying schedule contract had expired by the time the order was placed and it is not clear that statutory requirements for full and open competition were otherwise met.
Schedule BPAs can provide federal agencies with a flexible and streamlined contracting mechanism for meeting repetitive procurement needs. However, especially in light of the significant increase in obligations under schedule BPAs, these potential benefits must be balanced with ensuring that this mechanism is used appropriately and serves the best interests of the government and the taxpayer. Based on the failure of contracting officers across the agencies in our review to leverage competition, seek better pricing through discounts, and monitor the use of schedule BPAs by conducting annual reviews, it is apparent that those interests are not being met in many cases. This is particularly true for procedural requirements that call for the consideration of multiple vendors when establishing and ordering under schedule BPAs. The high use of single award BPAs, under which no further competition is required when placing orders of any amount, reduces the potential to harness the benefits of competition, including additional savings for the taxpayer. And the FAR's lack of clarity about the circumstances under which agencies can limit the number of vendors considered when establishing schedule BPAs, including establishing them with only one vendor, can lead to situations, such as those we found, where justifications appear inconsistent with sound procurement policy. Further, the fact that so many contracting officers are either unaware of the requirement for annual reviews or simply are not conducting them means that opportunities are being missed to ensure that competition requirements are met and to seek better pricing from vendors. Finally, while some contracting officers clearly sought discounts from schedule prices, sometimes leading to millions in savings, many others did not.
We are making the following three recommendations to the Administrator of the Office of Federal Procurement Policy.

To ensure that federal agencies take greater advantage of the opportunities that competition provides under schedule BPAs:

take steps to amend the FAR to clarify when establishing a schedule BPA using the FAR's limited source justifications, including establishing a BPA with only one vendor, is or is not appropriate; and

consider including in the pending proposed FAR rule that implements the provisions of section 863 of the National Defense Authorization Act of 2009 an amendment to FAR Subpart 8.4 specifying that the requirement to place on a competitive basis any order above the simplified acquisition threshold (generally $100,000) under multiple award contracts also applies to orders under single and multiple award BPAs.

To improve compliance with the FAR requirement to conduct annual reviews of schedule BPAs, increasing opportunities for additional savings and avoiding violations of competition rules, take steps to require federal agencies to put procedures in place to ensure that annual reviews are conducted.

Further, to assist federal agencies in requesting and obtaining discounts when establishing schedule BPAs, we recommend that the GSA Administrator include in the guidance on GSA's Web site specific language that agencies can use in their requests for quotation to clearly request discounted pricing when establishing schedule BPAs.

We requested comments on a draft of this report from the Office of Federal Procurement Policy; the departments of Agriculture, Defense, Health and Human Services, Homeland Security, and Justice; GSA; and the Social Security Administration. In oral comments, the Office of Federal Procurement Policy concurred with our recommendations.
In written comments, included in appendix II, GSA concurred with our recommendation, noting that it will include in the guidance on its Web site specific language that agencies can use in their requests for quotation to clearly request discounted pricing when establishing schedule BPAs. The departments of Health and Human Services and Homeland Security generally agreed with our report and provided written comments, included in appendixes III and IV, respectively. The Department of Health and Human Services stated that it plans to take steps to reinforce compliance with BPA requirements. Health and Human Services also commented that while we found no evidence of competition for NIH's BPAs included in our sample, NIH's policy is to ensure that prices are competitive before awarding BPAs. Nonetheless, our review of the contract files for the 18 BPAs selected showed no evidence of competition. The Department of Homeland Security discussed several actions it plans to take to improve management and use of BPAs.

The Social Security Administration's written comments, contained in appendix V, provided new information regarding an example we had identified as a potential CICA violation in our draft report. An annual review had not been conducted for the BPA, and the underlying GSA schedule contract had been canceled one year into a 7-year period of performance. The agency had continued to place orders, totaling $3.4 million, under this BPA. In its comments, the Social Security Administration stated that there was no CICA violation because of changes made to the vendor's underlying schedule contract. Several of the vendor's schedule contracts had been consolidated into a single schedule contract, which was assigned a new contract number by GSA.
The agency stated that the contracting officer had failed to reference the correct schedule contract number when placing orders under the BPA, but that this action did not violate CICA because the BPA was competed. We independently verified this new information, which was not contained in the BPA file, and therefore removed the example from our report. The Administration further noted that it has issued a reminder to its contracting officers to review BPAs annually to ensure, in part, that the underlying GSA schedule contracts are still in effect. In oral comments, the Department of Agriculture generally agreed with our report and did not provide additional comments. The departments of Defense and Justice did not provide comments.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies of this report to interested congressional committees; the Secretaries of Agriculture, Defense, Health and Human Services, and Homeland Security; the Attorney General; the Administrators of the General Services Administration and the Office of Federal Procurement Policy; and the Commissioner of the Social Security Administration. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

The overall focus of this review was agencies' use of blanket purchase agreements (BPA) established under the General Services Administration's (GSA) schedules program.
Our objectives were to determine: (1) the extent to which agencies use schedule BPAs, what they buy with them, and why agencies use them; (2) whether agencies are competing BPAs and the orders under them; (3) whether agencies are taking advantage of opportunities for savings by seeking discounts when using these BPAs; and (4) whether agencies are conducting the required annual reviews.

To conduct our work for each objective, we used an electronic data collection instrument to review 336 schedule BPAs and the largest associated order under each during fiscal year 2007, the most recent data available at the time we began our work. Our scope included five civilian agencies and three defense agency locations. We reviewed 263 BPAs from the following civilian agencies: the Departments of Agriculture, Health and Human Services, Homeland Security (DHS), and Justice, and the Social Security Administration; and 73 BPAs from the following Department of Defense (DOD) components: Air Force, Marine Corps, and Navy. The five civilian agencies in our review represented roughly 80 percent of civilian agency obligations using orders under schedule BPAs during fiscal year 2007, based on data provided by GSA from the Federal Procurement Data System–Next Generation (FPDS-NG) on the dollar value of orders at the time the orders were placed. We selected a random sample of 30 schedule BPAs per agency from the Departments of Agriculture, Health and Human Services, and Justice, and the Social Security Administration, taken from the universe of all BPAs that the agencies ordered under during fiscal year 2007. Our findings are projectable to each of those agencies. Because DHS had obligated the largest dollar amount to orders under schedule BPAs, we selected all 155 BPAs under which orders were placed during fiscal year 2007. Our findings reflect the full universe of DHS's schedule BPAs used in fiscal year 2007.
In some instances, agency officials could not locate or provide the files associated with a given BPA. For example, Department of Agriculture officials could not locate the file for one of the BPAs in our sample, so we reviewed only 29 of Agriculture's schedule BPAs. Likewise, because DHS officials could not locate and provide files for 11 of the BPAs, we reviewed 144 BPA files. Table 2 shows the number of BPAs selected and reviewed at each civilian agency.

We attempted to identify the agencies at DOD that represented roughly 80 percent of DOD obligations to orders under schedule BPAs during fiscal year 2007, based on the dollar value of orders at the time the orders were placed, but were unable to do so because DOD was not using the fields in FPDS-NG that distinguish between BPAs and indefinite-delivery/indefinite-quantity contracts. We attempted to use the DD350 data (DOD's former procurement database) to identify DOD obligations to orders under possible schedule BPAs during fiscal year 2007 but found inconsistencies in the coding. Based on FPDS-NG data on all DOD BPAs—schedule and traditional—the Army, Defense Logistics Agency, Marine Corps, and Navy represented about 80 percent of defense obligations under all BPAs, based on the dollar value of orders at the time the orders were placed. Because of DOD's size and geographic dispersion, we selected the contracting activity/location with the most dollars obligated to orders under possible schedule BPAs in fiscal year 2007 within the selected services and agencies, based on the preliminary data. We then selected a random sample of possible schedule BPAs from the selected contracting activities/locations. Our findings are projectable only to the DOD locations selected. We sought to determine whether the DOD BPAs in our sample were schedule BPAs by reviewing the documentation available in DOD's Electronic Document Access System (EDA). For the Marine Corps, all of the BPAs we selected were schedule BPAs.
For the Navy contracting activity, only one of the original 30 BPAs we selected was not a schedule BPA (and therefore outside the scope of this review). We selected another BPA as a replacement.

For the Defense Logistics Agency (DLA), the location that had the greatest obligations under BPAs was the Defense Supply Center-Philadelphia, Systems & Procedures Division. Because none of that location's BPAs were listed in EDA, we asked officials at the Systems & Procedures Division to tell us whether the 30 BPAs in our sample were schedule BPAs; they stated that none of them were. We then contacted officials at another DLA location, the Defense Supply Center, Pacific Region, who stated that none of their BPAs were schedule BPAs. As a result, we dropped DLA from our sample and replaced it with the Air Force. We provided the Air Force contracting activity (the Air Force District of Washington) with a list of 25 possible schedule BPAs–the total number that had orders placed under them during fiscal year 2007–and asked officials to identify which ones were in fact schedule BPAs. An official at the Air Force District of Washington indicated that all 25 were schedule BPAs; however, when we reviewed the BPA files, we discovered that 6 of them were not schedule BPAs and dropped them from our sample. In addition, the Air Force contracting activity could not locate one BPA file, and the 754th Electronic Systems Group, Maxwell Air Force Base-Gunter Annex, failed to provide information for four of the BPAs under which the Air Force District of Washington placed orders.

With regard to the Army, based on FPDS-NG data, we identified the Army's Communications and Electronics Command (CECOM) in Ft. Monmouth, New Jersey, as having the greatest amount obligated to orders under schedule BPAs. By reviewing the BPAs available in EDA, we discovered that only 2 of the 17 BPAs identified were schedule BPAs.
We next looked at the Army’s Tank-Automotive and Armaments Command (TACOM) in Warren, Michigan. Because many of the BPAs used by TACOM were not available in EDA, we asked TACOM officials to identify which of the BPAs were schedule BPAs. They identified only 7 out of 63 BPAs as schedule BPAs. We next looked to the Army Contracting Command in Kuwait; an Army contracting official told us that all 76 of their BPAs were not schedule BPAs. Finally, we contacted the Army’s Contracting Center of Excellence in Washington, D.C. to ask officials there to identify the schedule BPAs from a list of 50 candidates. An associate director from the Center of Excellence told us that the center was unable to identify the schedule BPAs. We did not replace the Army with another defense agency. Table 3 shows the number of reviewed at the selected defense agencies. For all agencies, both civilian and defense, we selected the BPAs based on the agency and location where the orders were placed. For example, the Navy location selected for our review, the Naval Air Systems Command, had placed orders under four schedule BPAs established by the Naval Inventory Control Point – Mechanicsburg. We included these four BPAs in our sample for the Naval Air Systems Command. In another instance, the Naval Air Systems Command ordered under a schedule BPA established by the Defense Information Systems Agency; again, the BPA was included in our sample. We reviewed 352 orders under the BPAs in our sample. We selected the order placed during fiscal year 2007 that obligated the largest dollar value at the time of award. In some instances, more than one order was selected under a single BPA, resulting in a greater number of orders than BPAs selected. For example, at DHS, both Citizenship and Immigration Services and Immigration and Customs Enforcement placed orders under the same BPA; we selected the highest dollar value order placed under the BPA from each component for review. 
In addition, in some cases where an agency could not provide the file for the BPA, the agency was able to provide the file for the order. We used an electronic data collection instrument, verifying the information on-site, to conduct our review of the BPA and order files and to facilitate our analysis. We supplemented our file reviews with follow-up questions when documentation in the file was not available, insufficient, or unclear. In some instances, we received additional documentation from agency officials, which we analyzed and incorporated in our final results when appropriate. When agency officials did not provide documentation that supported their response, we reported the response to our question as "not documented" or "no evidence." In some cases, we interviewed the contracting officer or contract specialist to obtain clarification.

To assess the extent to which agencies use schedule BPAs, what they buy with them, and why agencies use them, we used data from FPDS-NG, data from our file review, and information provided by contracting officials. More specifically, we analyzed data from FPDS-NG on civilian agency procurements for fiscal years 2004 to 2008 to determine the 5-year trend in BPA use among civilian agencies and to compare the use of schedule BPAs to obtain services with the overall growth in contracting for services. To do so, we converted the data into fiscal year 2008 constant dollars using the Bureau of Economic Analysis price index for services in the federal consumption expenditures category. To determine why agency officials chose to establish and use schedule BPAs rather than other contracting vehicles, we interviewed contracting officials across the agencies included in our review about their use of schedule BPAs. To determine what products and services agencies intended to purchase using the schedule BPAs in our sample, we analyzed data from our file review.
To estimate DOD’s usage of schedule BPAs for fiscal year 2008, we used DOD’s contract coding system to identify the BPAs in FPDS-NG under which DOD agencies placed orders during fiscal year 2008. From that universe, we selected a random sample of BPAs. We used the EDA, DOD’s online contract retrieval system, to review the BPAs. In the event that a BPA was not available in EDA, we replaced it with the next BPA on our list until we had 100 BPAs. In the three instances in which the documentation in EDA was insufficient to make a determination as to whether the BPA was a schedule BPA, we contacted the agency for clarification. In two of these instances, the contacting officer did not respond, and we replaced the BPAs with the next on our list. Next, we obtained data from FPDS-NG on the amount obligated using orders under these BPAs during fiscal year 2008. We found that 25 of the BPAs in our sample of 100 were schedule BPAs; orders under the 25 schedule BPAs obligated $106,011,561 of the $143,711,789 obligated to orders under the 100 BPAs in our sample. Based on this information, we estimate that 852 of the 5,178 DOD BPAs in FPDS-NG are schedule BPAs, with the 95 percent confidence interval between 589 and 1200 BPAs. We estimate their value as $3.3 billion, about 65 percent of the $5.1 billion value of all BPAs, with the 95 percent confidence interval between $0.5 and $4.7 billion. To determine whether agencies are competing BPAs and the orders under them and whether agencies are taking advantage of opportunities for savings by seeking discounts when using these BPAs, we analyzed the data we obtained during reviews of the BPA and order files concerning competition and discounts, following up when necessary with additional questions and interviews of contracting officials. We reviewed BPA files to determine whether the requirement was competed when the BPA was established and when an order was placed by determining whether more than one vendor had been contacted. 
We also identified the ordering procedures in the Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS) and the level of competition required under them. We discussed with contracting officials what factors were considered when deciding to establish single versus multiple award BPAs. To determine whether contracting officials sought discounts and whether the contractor provided discounts either when the BPA was established or when orders were placed, we reviewed files for the BPA and the order. We then followed up with contracting officers or contract specialists as needed.

To determine whether the agencies in our sample are conducting the required annual reviews, we examined the files for the schedule BPAs in our sample for documentary evidence of each element of the annual review as listed in FAR 8.405-3(d). Where there was no documentation of annual reviews in the contract file, we asked agency officials to provide us with the appropriate documentation. We conducted interviews with agency contracting officials to determine how they interpreted the relevant FAR provision and to clarify information in the BPA files. We reviewed the contract files to determine whether the GSA schedule contracts had expired. If there was no GSA schedule data in the BPA file or the file suggested that the schedule contract had expired, we searched the GSA Web site (GSA e-Library) to determine whether the GSA schedule contracts were still in effect. For those schedule contracts that were no longer listed on GSA e-Library, we contacted GSA to obtain documentation of either the date the schedule contract expired or the current expiration date. We reviewed the BPA files to determine if the contracting officer or contract specialist checked to see if estimated amounts had been exceeded.
We visited or contacted the following offices for our review:

Department of Agriculture:
Agricultural Research Service, Beltsville, Maryland
Food and Nutrition Service, Alexandria, Virginia
Food Safety and Inspection Service, Beltsville, Maryland
U.S. Forest Service: Arlington, Virginia; Northwest Oregon Contracting Area, Sandy, Oregon
National Finance Center, New Orleans, Louisiana
Office of Procurement and Property Management: Washington, D.C.; Fort Collins, Colorado

General Services Administration, Washington, D.C.

Department of Health and Human Services:
Health Resources and Services Administration, Rockville, Maryland
National Institutes of Health, Rockville, Maryland
Office of the Assistant Secretary for Administration and Management, Washington, D.C.

Department of Homeland Security:
Citizenship and Immigration Services, Williston, Vermont
Customs and Border Protection, Washington, D.C.
Other DHS locations: Atlanta, Georgia; Chicago, Illinois; Austin, Texas; Mt. Weather, Virginia; New Orleans, Louisiana; Washington, D.C.; Dallas, Texas; Denver, Colorado; Grand Prairie, Texas; Philadelphia, Pennsylvania; Washington, D.C.
Office of Procurement Operations, Washington, D.C.
U.S. Coast Guard: Baltimore, Maryland; Washington, D.C.

Department of Justice:
Bureau of Alcohol, Tobacco, Firearms and Explosives, Washington, D.C.
Federal Bureau of Prisons, Washington, D.C.
Drug Enforcement Administration, Arlington, Virginia
Justice Management Division, Washington, D.C.
Office of the Federal Detention Trustee, Arlington, Virginia

Office of Federal Procurement Policy, Washington, D.C.

Social Security Administration, Baltimore, Maryland

Department of Defense:
Defense Procurement and Acquisition Policy, Arlington, Virginia
Department of the Air Force: Air Force District of Washington, Washington, D.C.
Department of the Army: Office of the Deputy Assistant Secretary of the Army, Arlington, Virginia; Army Contracting Agency, Contracting Center of Excellence; Tank-Automotive and Armaments Command, Warren, Michigan
U.S. Marine Corps, Quantico, Virginia
Department of the Navy: Naval Air Warfare Center – Aircraft Division, Patuxent River, Maryland; Naval Inventory Control Point, Mechanicsburg, Pennsylvania; Space and Naval Warfare Systems Command Systems Center, San Diego, California
Defense Information Systems Agency, Defense Information Technology Contracting Organization, Scott Air Force Base, Illinois
Defense Supply Center Philadelphia, Pacific Region, Pearl Harbor, Hawaii
Defense Supply Center Philadelphia, Philadelphia, Pennsylvania
Enterprise Support Base Contracting Office, Fort Belvoir, Virginia

As we describe in our methodology, we performed extensive tests to assess the reliability of the automated information we used to select our collection of BPAs. For example, we confirmed that the information contained in the automated records reflected the information contained in the contract files. We based our estimate of DOD's use of schedule BPAs on information we verified using automated images of the contract records. Accordingly, we believe that the data we used to support our findings are reliable for our intended purposes.

We conducted this performance audit from June 2008 to August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Michele Mackin, Assistant Director; Kathryn Edelman; Bridget Grimes; Paula J. Haurilesko; Art James, Jr.; Brandon Jones; Julia Kennon; Arthur Lord; Susan Neill; Kenneth Patton; Caitlin A. Tobin; and Alyssa Weir made key contributions to this report.
The Federal Acquisition Regulation (FAR) allows agencies to establish blanket purchase agreements (BPA) under the General Services Administration's (GSA) Schedules Program, under which contracts are awarded to multiple vendors for commercial goods and services and made available for agency use. BPAs are agreements between agencies and vendors with terms in place for future use; funds are obligated when orders are placed. When establishing BPAs under schedule contracts, agencies must follow procedures regarding the number of vendors considered, request discounts, and conduct annual reviews in accordance with requirements. This report assesses selected agencies' use of schedule BPAs and evaluates whether they considered more than one vendor when establishing BPAs and placing orders under them, took opportunities for savings, and conducted annual reviews. To conduct this work, GAO reviewed a sample of 336 schedule BPAs and 352 fiscal year 2007 orders and met with agency officials. In fiscal year 2008, civilian agencies obligated $3.2 billion under schedule BPAs--up 383 percent from fiscal year 2004. GAO estimates that DOD's obligations ranged from $0.5 billion to $4.7 billion, placing total fiscal year 2008 obligations between $3.7 billion and $7.9 billion. GAO was unable to determine DOD's obligations more precisely because DOD does not use fields in the federal procurement data system to distinguish schedule BPAs from other BPAs. DOD has begun to take actions to address this issue. Civilian agencies' use of BPAs for services grew significantly faster--475 percent--than their overall services contracting between fiscal years 2004 and 2008. Contracting officers use BPAs for flexibility and speed, noting, for example, advantages in disaster response preparation and when funding for a fiscal year is unknown. Of the BPAs GAO reviewed, 64 percent had been competed--meaning, for purposes of this report, that more than one vendor was considered--when established.
For 12 percent of BPAs that had not been competed, contracting officers provided a variety of justifications, some of which appear inconsistent with sound procurement policy. The FAR is not clear about justification requirements for BPAs awarded with limited competition, including to one vendor. Also, the majority of BPAs had been awarded to a single vendor, which resulted in a lack of competition when placing orders because the FAR does not currently require competition of orders under single award BPAs. Multiple award BPAs--awarded to more than one vendor for the same requirement--provide an opportunity to benefit from further competition when placing orders, but many contracting officers placed orders directly with one vendor without further competition. Congress recently enhanced competition requirements for multiple award contracts, but the application of this requirement to schedule BPAs has not yet been established. Some of the BPAs GAO reviewed had lengthy durations, exceeding 5 years. GAO found no evidence that agencies sought discounts when 47 percent of the BPAs reviewed were established. In the other cases, some contracting officers explicitly requested, or even demanded, discounts, while others merely encouraged them. Agencies frequently received discounts when they requested them. For instance, the Justice Department was able to save $20 million under a BPA where the contracting officer requested and received discounts. However, at times, such opportunities were missed when discounts were not requested, even when the estimated amount of the BPA was in the hundreds of millions of dollars. Contracting officials rarely conducted the required annual reviews. The reviews for only 19 of the 320 BPAs that required them addressed all of the FAR elements. By not conducting annual reviews, agencies miss opportunities for savings and can run the risk of violating competition requirements. 
One contracting officer was unaware that the underlying GSA schedule contract had expired, and orders continued to be placed under the BPA--a potential violation of the Competition in Contracting Act.
The Employee Retirement Income Security Act of 1974 (ERISA) created PBGC as a self-financing, nonprofit, wholly owned government corporation. PBGC protects participants in private pension plans from losing guaranteed benefits due to the termination of underfunded plans. PBGC’s primary responsibilities are to collect premiums from the sponsors of defined benefit plans to insure against default and to assume administration of underfunded plans that terminate. In the event of a plan default, PBGC assumes control of plan assets (including amounts due and payable from the plan sponsor); calculates benefit amounts due to plan participants, commonly communicated in “benefit determination letters”; and pays recipients as benefits are due. Pension plans under PBGC’s administration for which final benefit determination letters have not yet been issued are generally considered estimated plans. PBGC pays benefits in estimated amounts until final determinations are made, routinely taking several years to complete all benefit determinations for plans that terminate. When all letters are issued and participant appeal periods have expired, plans are then closed and moved to ongoing administration, where they generally require limited maintenance to reflect participants’ marital changes, address changes, deaths, and other events. Figure 1 provides an overview of the steps involved in processing a terminated pension plan. Over the years, PBGC’s workloads have grown significantly. In fiscal year 1975, the first year after the passage of ERISA, PBGC administered three pension plans with a total of 400 participants. By fiscal year 2007, PBGC administered almost 3,800 pension plans with over 1.3 million participants. Figures 2 and 3 show the number of pension plans and participants administered by PBGC since fiscal year 2000—the last time GAO issued a report on this subject.
Between fiscal years 2002 and 2005, PBGC experienced a large number of claims that contributed to its workload growth. In September 2000, we identified a variety of challenges facing PBGC’s contracting activities. Faced with a significant influx of large pension plan failures beginning in the mid-1980s, PBGC chose to contract for services rather than seek additional federal employees during a period of government downsizing. Over time, PBGC continued contracting for services to address backlogs, but was focused on obtaining necessary services quickly. As a result, we found that PBGC did not adequately link decisions to contract for services to longer-term strategic planning considerations. We recommended that PBGC develop a strategic approach to contracting by conducting a comprehensive review of PBGC’s future human capital needs and using this review to better link contracting decisions to PBGC’s long-term strategic planning process. In response to our recommendation, PBGC commissioned a study by the National Academy of Public Administration (NAPA). NAPA provided a six-step model for PBGC to follow for its strategic human capital planning. Upon completion of this study, PBGC convened a workforce planning team that initiated some of the steps NAPA suggested. Our work also identified weaknesses in PBGC’s contract planning and execution processes, which may have led to overuse of labor-hour contracts rather than fixed-price contracts and also to limited competition. PBGC’s contractor oversight activities also exhibited weaknesses, including a lack of data essential for monitoring performance, quality assurance review processes and policies, and procedural guidance. Finally, we identified a potential lack of independence on the part of the office that was responsible for auditing and reviewing PBGC contracting activities. Many of these findings have been echoed more recently by PBGC’s Inspector General. 
PBGC’s Inspector General issued a report on contracting trends in July 2007. The Inspector General noted that the three major areas cited most often as needing improvements were questioned costs (unsupported and/or unauthorized costs that contractors billed to PBGC); lack of documented contracting policies, procedures, and directives; and inadequate contractor oversight. The leading causes for questioned costs were that contractors: (1) did not maintain adequate documentation to justify the costs they billed PBGC and (2) used subcontracted employees who did not have the qualifications required under the contract. Inspector General reports also noted that PBGC has no comprehensive contracting directives or policy guidance available to aid contracting officials and contractors in carrying out their respective duties. The reports also identified significant contracting problems and control vulnerabilities, often stemming from a lack of adequate monitoring of contractor performance. Appendix II includes a list of related GAO and PBGC Inspector General reports. In response to federal agencies’ increasing reliance on contractors to perform their missions and the systemic weaknesses identified in key areas of contracting by GAO, inspectors general, and other accountability organizations, in 2005, we published a Framework for Assessing the Acquisition Function at Federal Agencies. The framework enables high-level, qualitative assessments of the strengths and weaknesses of the contracting function. The framework consists of four interrelated cornerstones that our work has shown are essential to an efficient, effective, and accountable contracting process. Organizational alignment and leadership. Organizational alignment assures the appropriate placement of the contracting function in the agency, with stakeholders having clearly defined roles and responsibilities.
Key elements and critical success factors include aligning contracting with the agency’s mission and needs and organizing the contracting function to operate strategically. Committed leadership enables officials to make strategic decisions that achieve agency-wide contracting outcomes more effectively and efficiently. In fact, the Services Acquisition Reform Act of 2003 requires that certain civilian executive agencies designate a Chief Acquisition Officer whose primary duty is the management of acquisition. An executive in this position would address acquisition workforce needs and strategies as part of strategic planning and performance results processes. Policies and processes. Implementing strategic decisions to achieve desired agency-wide outcomes requires clear and transparent policies and processes that are implemented consistently. Policies establish expectations about the management of the contracting function. Processes are the means by which management functions will be performed and implemented. Effective policies and processes govern the planning, award, administration, and oversight of contracting efforts. Human capital. Successfully acquiring goods and services and executing and monitoring contracts requires valuing and investing in the contracting workforce. Agencies must think strategically about attracting, developing, and retaining talent, and creating a results-oriented culture within the contracting workforce. Knowledge and information management. Effective knowledge and information management provides credible, reliable, and timely data to make contracting decisions.
Stakeholders in the contracting process—Procurement Department and program staff who decide which goods or services to buy; project managers who receive the goods and services; managers who maintain supplier relationships; contract administrators who oversee compliance with the contract; and the finance department that pays for the goods and services—need meaningful data to perform their respective roles and responsibilities. Contracting plays a central role in helping PBGC achieve its mission and address unpredictable workloads. PBGC’s contracts cover a wide range of services, including the administration of terminated plans, payment of benefits, customer communication, legal assistance, document management, and information technology. Its contract spending has increased steadily along with overall budget and workload, and use of contracted staff has outpaced its hiring of federal employees. PBGC has relied on contractors to supplement its workforce since the mid-1980s as its workloads have grown due to a significant number of pension plan terminations. PBGC acknowledges that it has difficulty anticipating its workloads due to unpredictable economic conditions and relies on contractors to expand or reduce its workforce as necessary. To address the increase in the overall number of pension plans and participants, PBGC’s budget increased steadily over the last 8 years, from $165 million in fiscal year 2000 to $398 million in fiscal year 2007. Similarly, PBGC’s contract spending increased from $122 million in fiscal year 2000 to $297 million in fiscal year 2007, as shown in figure 4. Contracting represented 71 percent of PBGC’s appropriated budget during this period. Across the federal government, contract spending has more than doubled since fiscal year 2000, going from about $208.8 billion to $430 billion. The Departments of Defense and Homeland Security account for the majority of the increase, while other agencies vary in their budget changes.
In previous work on contract spending at various agencies, we found that overall, about a quarter of agencies’ discretionary spending was through contracts. Therefore, PBGC’s contract spending as a percentage of its discretionary budget is relatively high, although most federal agencies have increased their contract spending in recent years. The number of PBGC’s contract employees has grown significantly more than the number of its federal employees. As figure 5 shows, PBGC had 791 contract employees in fiscal year 2000. By fiscal year 2007, there were over 1,500 contract employees. By contrast, PBGC’s federal employees increased modestly, from 763 in fiscal year 2000 to 811 in fiscal year 2007. The largest increases in contract employees correspond to the failure of several large plans, resulting in workload increases that occurred from fiscal year 2002 through fiscal year 2005. The number of contract employees peaked in fiscal year 2006 at 1,768 and has since fallen to 1,502 in fiscal year 2007, as PBGC is completing the work associated with terminating these large plans. PBGC contracts for a wide range of services, including:

operating its 10 field benefit administration offices throughout the country (field benefit administration office contract employees perform the majority of benefits estimation and plan administration processing);

managing PBGC’s $68 billion of assets;

paying benefits to participants;

developing, overseeing, and managing new systems and information technology projects in support of program operations; and

handling customer inquiries through its customer call center.

PBGC also has certain functions, such as audit and actuarial functions, where contract employees work side by side with federal employees and supplement their efforts. PBGC also contracts for specialized services that may not be needed on a routine basis.
For example, while PBGC has in-house attorneys who handle the majority of PBGC’s litigation and negotiations, occasionally there are cases where it is necessary for PBGC to retain outside counsel. According to PBGC officials, hiring lawyers with special expertise on a full-time basis, rather than as needed, would be cost prohibitive. When PBGC needs outside legal services, it solicits proposals from outside law firms for the needed services. According to PBGC officials, this arrangement makes it possible for PBGC to secure necessary legal services quickly and efficiently. To carry out its operations, PBGC has relied on contractors to address its unpredictable workload, which is driven largely by pension plan terminations. In 2001, PBGC convened a workforce planning team to analyze its future service demands. The team recognized that fluctuations in the economy, or business cycle, could result in significant fluctuations in PBGC workloads. For example, PBGC typically sees an increase in terminations 1 to 2 years after a decline in the economy. The team noted that PBGC must remain flexible to rapidly expand or reduce its processing capacity to meet changing workloads, but cautioned that it was important to keep the highest level of expertise in-house. Figure 6 shows the number of contract employees and federal employees at PBGC, by office. Although PBGC is currently experiencing a decline in new terminated plan workloads, the case processing cycle takes an average of 3 years or more. In determining the need for contract employees, PBGC considers both the incoming workload and the workload currently in process. A PBGC official noted that in order to balance the current benefit determination workload and the anticipated decline in the incoming workload, PBGC is allowing field benefit administration office staff increases only when there is a clear demonstration of need. See figure 7 for recent trends in pending benefit determinations.
According to a PBGC official, PBGC does not foresee large plan terminations and bankruptcies coming in 2008 or 2009, but there are a few plans it is monitoring that could have a significant impact on PBGC. Thus, PBGC is in the delicate position of beginning to decrease contract resources while still remaining prepared for possible workload increases in a time of uncertain financial conditions. In 2007, PBGC began to realign its Procurement Department, update contracting policies and processes, upgrade the skills of Procurement Department staff, and better track contracting data. While these efforts provide an improved foundation for the contracting function, they are early steps and a strategic approach to contracting has not yet been developed. PBGC’s heavy reliance on contract support requires an adequate acquisition infrastructure to ensure an efficient, effective, and accountable contracting function. An agency’s acquisition function stands on four interrelated cornerstones—organizational alignment and leadership, policies and processes, human capital, and knowledge and information management. In 2007, PBGC hired a new procurement director who has taken steps to improve the acquisition infrastructure in the four cornerstone areas; however, the acquisition function’s infrastructure is still inadequate in some areas. The appropriate placement of the acquisition function within an agency can facilitate efficient and effective management of acquisition activities. In our work on best practices, we learned that leading companies elevated or expanded the role of the company’s acquisition organization—typically assigning it greater responsibility and authority for strategic planning, management, and oversight of the company’s services spending. These changes transformed the role of the purchasing unit from one focused on mission support to one that was strategically important to the company’s bottom line.
Further, recent legislation recognized the importance of placing the acquisition function at an appropriate level and mandates that most executive departments appoint a chief acquisition officer. PBGC’s Procurement Department does not play a strategic role within the corporation and does not have an active role in the strategic decision-making process, becoming involved only once a requisition is submitted to the department. For example, the procurement director is not part of the following three teams that focus on initiatives that support the corporation’s strategic plan. The Operations Integration Board, whose members include the entire Executive Management Committee, provides a forum for the senior leadership to commission and review corporate-wide programs, projects, and internal policies. Significant projects that cross organizational lines and require multiorganizational resources are presented to the board for approval. The Budget and Planning Integration Team is responsible for approving corporate-wide resource allocations and aligning resources to PBGC’s strategic objectives. This team also makes recommendations on large funding requests that are subject to approval by the Operations Integration Board. The Capital Planning for Information Technology Team reviews information technology investments to assure the alignment of information technology capital investments with the corporate strategic plan and to monitor and control the execution of those investments. The team provides recommendations to the Operations Integration Board and the Budget and Planning Integration Team as the recommendations affect operations or budget. As a result, the Procurement Department has no input into the decision to acquire goods or services through contracting; it is responsible for implementing the policy decisions made by the executive management team but has no voice in making these decisions.
The Procurement Department’s involvement on such boards could improve strategic planning by enabling PBGC to identify and manage relationships among the parties involved in the acquisition process; analyze aggregate agency needs and devise strategic acquisition plans; and take into consideration the effects of external factors, such as the appropriations process, on the timing and execution of major contracts. Further, while PBGC has recently published its strategic plan and begun developing a strategic human capital plan, the agency has not actively involved the Procurement Department in the process. The Procurement Department’s lack of involvement in strategic decision making is an indicator that it may be unable to identify, analyze, prioritize, and coordinate agency-wide acquisition needs. Policies and processes govern the way an agency performs the acquisition function. The acquisition function does not end with the award of contracts, but continues through contract implementation and closeout. Acquisition policies and processes should clearly define the roles and responsibilities of everyone involved in the acquisition process and must be communicated clearly to them. In addition, implementing strategic acquisition decisions to achieve agency-wide outcomes requires clear, transparent, and consistent policies and processes that govern the planning, award, administration, and oversight of acquisitions. PBGC updated its contracting policies and procedures in 2008. In the past year, with the support of executive management, the Procurement Department updated and issued two procurement directives that spell out the roles and responsibilities of all individuals involved in the acquisition process and outline procurement and obligation procedures. The Procurement Department issued policies requiring high-level approval for certain types of transactions, such as issuing labor-hour-type contracts and making modifications to contracts.
The Procurement Department has recently completed a comprehensive procurement manual that outlines standard operating procedures and provides examples to illustrate guidance. The General Counsel’s Office has provided guidance on when the legal review of acquisitions is required, and works closely with the Procurement Department in performing legal reviews and providing legal advice. However, it is still too early to tell what the effect of the new policies will be. The success of any set of policies will depend on adequate communication across the agency; internal controls to ensure they are implemented; and clear, strong guidance from leadership on the importance of adhering to the new policies. A strategic human capital management approach enables an agency to recruit, develop, and retain the right number of personnel with the right skills to accomplish its mission effectively. Senior managers should devote adequate resources to recruiting, hiring, developing, rewarding, and retaining talented personnel. This is true for all functions within an agency and is crucial for specialized functions, such as acquisition. Succession planning also is needed to ensure that the workforce is composed of the right number of personnel with the necessary skills and qualifications to perform the acquisition function into the future. PBGC has begun to focus on developing the knowledge and skills of its Procurement Department staff. Having the right people with the right skills is key to making a successful transformation toward an effective acquisition environment. Over the last decade, the emergence of several procurement trends, including a government-wide rise in services contracting, has created a need for an acquisition workforce with a much greater knowledge of market conditions, industry trends, and the technical details of the commodities and services they procure.
The Procurement Department has developed new training and certification requirements and invested in upgrading the skills of the acquisition workforce by providing training to help contract specialists obtain certification. According to PBGC, while only 2 of 12 staff in the contract specialist series were certified in contracting as of February 2007, by January 2008, 9 of the 12 staff had been certified. The Procurement Department also is working to enhance training and certification requirements for contracting officer’s technical representatives (COTR) working throughout PBGC. While PBGC’s contract spending has more than doubled since 2001, the number of staff in the Procurement Department has risen by only two full-time equivalent (FTE) employees from 2001 to 2007. PBGC Procurement Department officials are concerned that their staff of nine certified contract specialists is not adequate to support the mission. In May 2007, the Procurement Department studied four comparable agencies to determine appropriate staffing levels for PBGC’s Procurement Department. This study compared PBGC’s contracting staff size, number of annual transactions, and value of annual transactions to those of the other four agencies. Although PBGC had 88 percent more transactions on average and 40 percent more contract dollars on average than the other four agencies, it had less than half the average number of contracting office staff. This study focused only on the Procurement Department and did not attempt to determine appropriate staffing levels for other acquisition professionals not assigned to the Procurement Department, such as program managers, financial managers, and individuals involved in contract monitoring. We did not conduct an independent assessment of this study to validate the study’s results.
To make strategic, mission-focused acquisition decisions, organizations need knowledge and information management processes and systems that produce credible, reliable, and timely data about the goods and services acquired and the methods used to acquire them. Such data can be used to identify opportunities to reduce costs, improve service levels, measure compliance and performance, and manage service providers. PBGC’s Procurement Department uses a variety of reports to oversee contract spending. Some reports come from the contract writing system while others are maintained manually. According to a PBGC official, the reports track how goods and services are acquired but do not provide detailed data on goods and services, suppliers, or spending patterns. As a result, PBGC may not have the strategic information needed to support effective acquisition management decisions. In addition, PBGC’s procurement software is not integrated with its financial system; such integration would allow contracting professionals to obtain real-time information on the availability of funds, the status of obligations and expenditures, and payments for the receipts of goods and services. PBGC’s Procurement Department recently invested in new procurement software to better track acquisition data. In addition to generating reports on workload and procurement lead times, the new software links to the Federal Procurement Data System to report on PBGC’s contract actions as required by the Office of Management and Budget (OMB). This system is able to produce some aggregate data, like dollars expended, but lacks detailed information on goods and services purchased. PBGC has not yet taken all the steps needed to develop a strategic approach to acquisition. In 2000, we recommended that PBGC do so by conducting a comprehensive review of PBGC’s future human capital needs and using this review to better link contracting decisions to PBGC’s long-term strategic planning process.
However, PBGC’s strategic plans do not provide sufficient detail to determine what role acquisition plays in achieving its goals. In our work on best practices, we learned that a strategic plan should incorporate an understanding of how acquisition will be used to help an agency achieve its mission and goals. This would enable PBGC to better coordinate current acquisition initiatives or serve as a road map for identifying or prioritizing future efforts. PBGC recently issued its strategic plan, but it is not comprehensive. Although the plan states that one of PBGC’s strategic priorities is to align resources to meet changing workload demands and mentions flexible staffing as an indicator of efficient operations, it does not specify how this will be accomplished or what role contract staff will play. PBGC recently has hired a human resources specialist to coordinate and complete the planning process initiated in 2002 and update its human capital succession plan. PBGC also has drafted a strategic human capital plan that acknowledges the need for contract support, but does not provide detailed plans for how the contract support will be obtained. As stated earlier, NAPA provided a six-step model for PBGC to follow for its strategic human capital planning. In response, PBGC convened a workforce planning team that implemented some of NAPA’s suggested steps. Although the workforce planning team acknowledged the importance of contract staff for meeting PBGC’s unpredictable workloads, the team’s analysis of PBGC’s future workforce focused almost entirely on PBGC’s federal employees and not its contractor workforce. The team recognized PBGC’s need for improvement in the area of contracting, such as better defining where to use contractors versus federal employees, structuring the work to ensure that federal staff retain core competencies, and developing stronger COTR and contract monitoring competencies. 
While the team’s 2002 report included an analysis of the current competencies of PBGC’s federal workforce and PBGC’s future needs, it did not similarly analyze the contractor workforce, which, at the time, made up almost half of PBGC’s total workforce and now makes up almost two-thirds. The report did not address how the contractor workforce should change to meet future needs or how contractors should be utilized. Although the report included a discussion of recruitment and hiring strategies, it did not include an analysis of strategies for adding to or subtracting from the contractor workforce in case of increased or decreased workloads. Further, PBGC does not use its strategic and annual performance plans to document how the acquisition function supports the agency’s missions and goals. It is not clear how acquisition serves PBGC’s mission, because metrics are not linked to PBGC’s overall performance plan. While the procurement director has developed some metrics to measure the Procurement Department’s workload, PBGC’s strategic plan has only one broad metric related to call center customer or participant satisfaction, with no specific metric that specifies the level of customer service to be reached. Additionally, the strategic plan has only one broad metric related to performance-based contracting, but no specific metrics that relate to acquisition efficiency, effectiveness, and results, such as measures to track the number of contracts awarded that include incentives for performance. Performance measurements can be used to gain insight into the Procurement Department’s current performance level and performance over time, and to set realistic goals for improvements to the acquisition process. Finally, PBGC has taken some limited steps toward making more strategic contracting decisions in certain specific areas.
These steps generally were taken in reaction to concerns raised about existing contracts in an internal report and reports by us and the Office of the Inspector General. For example, the Inspector General and PBGC each studied the contracts for the field benefit administration offices and concluded that there were opportunities to increase efficiency and decrease costs. As a result, PBGC is recompeting the contracts in an effort to consolidate the number of field benefit administration offices. PBGC has made improvements to contractor oversight by implementing new contract monitoring activities, improving oversight activities for some of its major contracts, and developing comprehensive procedures to direct contracting activities. However, most of PBGC’s current contracts lack performance incentives and methods to hold contractors accountable. PBGC recently began awarding more performance-based contracts as a means to achieve better contract outcomes, but there are common challenges that arise—from deciding which contracts are appropriate for a performance-based approach to deciding which outcomes to measure and emphasize. PBGC procurement officials acknowledge the benefits and challenges of performance-based contracting and recognize that they must provide additional oversight of contracts and adopt a different approach to contract monitoring that focuses on outcomes rather than processes. PBGC has improved upon existing contract monitoring activities and implemented new activities to strengthen contract oversight. In our 2000 report, we recommended that PBGC develop the capacity to centrally monitor field benefit administration office contractor performance, including product quality and timeliness. In response, PBGC shifted the responsibility for contract oversight of Benefits Administration and Payment Department (BAPD) contracts to its Management Coordination Unit (MCU), to consolidate its monitoring of field benefit administration office performance.
The MCU uses several different methods to monitor contracts. The MCU reviews the 10 field benefit administration office contractors annually to assess the accuracy of benefit determination letters and the security procedures in place, and trains analysts in contract oversight to conduct field benefit administration office reviews. Following each review, the office receives a report that highlights findings and requires a corrective action plan to address deficiencies. The MCU conducts quarterly COTR visits to the field benefit administration offices. During these visits, COTRs conduct interviews with key office staff and review the office’s workplans and records. The MCU developed the COTR site visit program and a corresponding standard protocol to be used in conducting site visits. PBGC officials told us that, due to COTR work activities, not all site visits are being conducted as anticipated; only one to two are being conducted per office each year. The MCU conducts monthly compliance and data integrity reviews of field benefit administration case processing activities. Results of these reviews are compiled into a scorecard that is reported quarterly to the office and the COTR. The scorecard measures BAPD processing goals for timeliness and quality. Noncompliant items are communicated to the offices monthly for resolution. Field benefit administration office contractors receive feedback on the timeliness and accuracy of benefit payments based on the MCU’s monitoring efforts. To improve contract oversight, the Procurement Department also has implemented refresher training requirements for COTRs. PBGC provides guidance and training to COTRs regarding their duties and to ensure their compliance with Procurement Department policy, federal law, regulations and guidance, including the Federal Acquisition Regulation. PBGC is planning to comply with a November 2007 OMB Office of Federal Procurement Policy (OFPP) memorandum on training directed mainly at COTRs.
The memorandum establishes a structured training program for COTRs and calls for standardization of competencies and training across civilian agencies. The mandate requires a minimum of 40 hours of training to be certified as a COTR. According to the requirements, new COTRs must be certified within 6 months of appointment and existing COTRs within a year. The Procurement Department will ensure that all COTRs have evidence of their certification, as required by the memorandum. PBGC recently improved its procedural guidance. In 2000, we found that PBGC lacked such guidance on contract oversight and a central location for its procurement policies and guidance. Our report noted that, in the absence of specific procedures, staff spent significant time seeking guidance on issues and may have received conflicting directions, which contributed to inconsistent administration practices. Procurement Department officials recently completed a comprehensive procedural guidance manual for staff responsible for awarding contracts and monitoring contractor performance. According to Procurement Department officials, the new manual should eliminate the ad-hoc directives, e-mail, and stand-alone memorandums previously used to address concerns. The Procurement Department’s new procedures manual provides uniform procedures for the internal operation of acquiring supplies and services within PBGC. The document represents a central repository for guidance and policies. The manual has been prepared in an electronic format and includes relevant Internet links wherever external references are made, such as to OMB Circulars. Performance-based contracting offers the government the potential for achieving better contract outcomes by requiring that all aspects of an acquisition be structured around the purpose of the work as opposed to the manner in which the work is to be performed.
Contracts should include descriptions of the outcomes the agency is looking for, rather than descriptions of how services should be performed; measurable performance standards; quality assurance plans that describe how the contractor’s performance will be evaluated; and positive and negative incentives, when appropriate. However, our work has shown that the transition to and use of performance-based contracts has proven a challenge for government agencies deeply rooted in traditional methods of contracting. PBGC agreed with our 2000 recommendation that it utilize contracts and payment arrangements consistent with best practices in performance-based contracting. In 2001, OFPP directed government agencies to award contracts using performance-based techniques for at least 20 percent of service contracting dollars greater than $25,000 by fiscal year 2002. In 2003, OFPP recommended that executive agencies apply performance-based techniques to at least 40 percent of service contracting dollars greater than $25,000 by 2005. PBGC began altering its acquisition strategy to be in line with the government-wide move toward performance-based contracting in 2003. However, PBGC remains short of OFPP’s performance-based contracting goals; of the less than $150 million in service contract dollars it awarded in fiscal year 2008, only a small portion was performance based. According to PBGC officials, only six of its contracts currently are performance based, representing a yearly cost of approximately $30 million for communication, administrative, and critical function services. PBGC is in the process of awarding an additional $20 million in performance-based contracts for the administration of the field benefit administration offices. Prior to the solicitation, these offices were contracted using individual labor-hour contracts. According to PBGC officials, most of PBGC’s current contracts lack the methods to hold contractors accountable for their performance.
One PBGC official said existing contracts include neither the incentives needed to encourage contractors to achieve desired results nor performance measures and targets. Instead, PBGC staff work directly with the contractor to communicate necessary targets. In the event that deliverables do not match contract descriptions or there is a problem with contractor performance, PBGC will work with the contractor to correct that problem. By not incorporating performance targets and other measures, PBGC depends on contractors who have limited incentive to provide optimal service. PBGC has the option not to renew a contract with a poorly performing contractor, but officials acknowledge the disadvantage it faces by not providing performance incentives to help hold contractors accountable. PBGC also has attempted to motivate contractors by considering assessments of the contractor’s customer service as a part of future contract renewal and is planning to incorporate customer service measures into its contracts through the use of the American Customer Satisfaction Index. However, the index is not completely effective as an incentive mechanism because it does not report results for individual contractors and instead reports on contracts collectively. While it is important that PBGC incorporate strong performance incentives into its contracts, the transition to and use of performance-based contracts has proven a challenge for agencies deeply rooted in traditional methods of contracting. In a 2002 report, we highlighted challenges faced by agencies during their transition to performance-based contracting. These challenges included the lack of understanding of performance-based contracting, lack of specific agency guidance, and inadequate oversight of contracts with performance-based methods.
In addition, while PBGC has increased the amount of training provided to COTRs and others, the transition to performance-based contracting will require additional training, specific to the new contracting method. In 1998 guidance, OFPP called attention to the problems agencies face in converting from a traditional contract’s statement of work to a performance-based work statement. Agencies reported to OFPP that performance work statements required an increased initial investment of time and resources. However, according to the OFPP guidance, the savings expected by performance-based contracting will offset such costs and correct problems commonly associated with service contracts—cost overruns, schedule delays, and technical challenges. Our prior work, and the work of others, also explains that both agencies and contractors typically find it difficult to move away from traditional contracting methods to a method of linking payments to performance, based on specific requirements that describe results and measurable standards of performance. Our prior work concluded that additional government guidance on performance-based contracting was needed to ensure its proper and effective use. PBGC officials recognize that PBGC may face challenges similar to those faced by other government agencies during implementation of performance-based contracting. With three-quarters of its operational budget currently being spent on contracting, it is clear that acquisition plays a central role in achieving PBGC’s strategic goals. While PBGC has made efforts to improve its acquisition infrastructure, it has not developed a strategic approach to its contracting process as envisioned in our 2000 report. In its role as a support function, rather than a business partner, PBGC’s Procurement Department is not involved in helping PBGC make strategic decisions about contracting early in the process or in developing long-term strategic approaches. 
PBGC developed its most recent strategic plan and strategic human capital plan, the latter still in draft, without a thorough examination of the role contracting plays at PBGC. By assessing the existing organizational alignment of the Procurement Department against a framework of best practices, PBGC may find that its Procurement Department is unable to effectively identify, analyze, prioritize, and coordinate agency-wide acquisition needs. Further, PBGC’s workload depends on future economic conditions that are difficult to predict. Without a strategic acquisition approach, PBGC risks being unprepared for future workload changes and cannot be assured that it has the optimal mix of contractor staff and federal employees. Since our last report, PBGC also has made meaningful improvements to its contract oversight. To continue these improvements, PBGC is expanding its use of performance-based contracting, which provides additional tools to hold contractors accountable for performance and to encourage the achievement of desired outcomes. However, this contracting method requires a new approach to contract oversight and has demonstrated the need for comprehensive training and organizational culture changes. PBGC will likely face challenges similar to those faced by other agencies that have moved toward performance-based contracting. PBGC needs to be aware of the common pitfalls other agencies have faced and take steps now to avoid the same challenges. To improve PBGC’s performance in an environment of heavy contractor use, we recommend that the Director of PBGC revise its strategic plan and, in drafting the corporation’s human capital strategic plan, reflect the importance of contracting and PBGC’s use of contractors, project its vision of future contractor use, and better link staffing and contracting decisions at the corporate level.
In drafting the plan, the Director of PBGC should do the following:
- Include the Procurement Department in agency-wide strategic planning.
- Ensure that the Procurement Director sits on PBGC’s three strategic teams—the Operations Integration Board, the Budget and Planning Integration Team, and the Capital Planning for Information Technology Team.
- Broaden the Procurement Department’s May 2007 staffing study to include as part of PBGC’s agency-wide acquisition workforce those positions outside of the Procurement Department that have a significant impact on procurement outcomes (i.e., requirements staff, program managers, financial managers, and individuals involved in contract monitoring). The study should determine appropriate staffing levels for these positions as the May 2007 study did for Procurement Department staff.
- Include in PBGC’s human capital plan detailed plans for how contract support will be obtained.
- Assess PBGC’s contract information to determine if additional information is needed to support strategic acquisition management decisions. This could include more complete information on goods and services purchased, as well as suppliers and spending patterns. In addition, contract spending information should be integrated into PBGC’s financial system, to allow acquisition staff to obtain real-time information on the availability of funds, status of obligations and expenditures, and payments for the receipt of goods and services.
- Develop metrics for PBGC’s annual performance plan that document how the acquisition function supports PBGC’s missions and goals. These could include metrics related to acquisition efficiency and customer satisfaction.
To improve PBGC’s contract management as it implements a performance-based approach to contracting, we recommend that the Director of PBGC provide comprehensive training on performance-based contracting for PBGC’s Procurement Department staff, managers, and acquisition-related workforce; develop practices to help ensure accountability for the Procurement Department staff carrying out contract monitoring responsibilities; and ensure that future contracts measure performance in terms of outcomes, provide incentives for the accomplishment of desired outcomes, and ensure payment of award fees only for excellent performance. We obtained written comments on a draft of this report from PBGC, which are reproduced in appendix II. In addition, we provided a copy of the draft report to the Department of Labor for its comments, but Labor did not provide comments. In response to our draft report, PBGC’s Director stated PBGC’s commitment to managing its contracting activities to obtain the best value for the 44 million beneficiaries of its insurance program. PBGC agreed with most of our recommendations and mentioned various ways it planned to address them. For example, PBGC stated it understood that other government agencies have faced challenges in implementing performance-based contracting and plans to take steps to avoid common pitfalls. PBGC also stated that it will be conducting a comprehensive review of necessary staffing levels across the agency related to procurement functions and future contracting needs, consistent with our recommendation. While PBGC agreed that contracting should be part of its strategic planning process, it disagreed with our recommendation to reflect the importance of contracting and incorporate its vision for future contractor use into its strategic planning documents. In its comments, PBGC maintained that its recently issued strategic plan reflects the importance of contracting and its vision for future use.
However, we continue to believe that PBGC’s recently issued strategic plan is not sufficiently comprehensive. PBGC’s strategic plan only briefly mentions performance-based contracting, flexible staffing, and metrics for specific contracts, and therefore does not fully reflect the importance of contracting in achieving its mission. For example, among eight “strategic priorities,” contracting is not mentioned. While the plan does state that PBGC will implement performance-based contracting for vendors in an effort to provide good customer service to stakeholders, it does not provide measurable goals for converting certain contracts or any time frames for implementation. Where the plan mentions using the American Customer Satisfaction Index as an indicator, it does not provide any detail on how it will use the index, what its performance goals are, or how it will measure success. In addition, the plan lacks certain key attributes of successful performance measures, such as measurable targets with numerical goals, and it does not include the activities that its acquisition function is expected to perform to support the intent of PBGC’s acquisition program. PBGC also disagreed with our recommendation that its Director of Procurement should sit on certain specific corporate committees. We believe that PBGC’s Procurement Director should be included on each of the corporation’s three strategic teams. In its comments, PBGC stated that its Chief Management Officer represents contracting on these teams, and that there are greater gains to be realized by emphasizing executive-level awareness of procurement issues in decision making than by requiring the Procurement Director to sit on the three committees.
While we appreciate PBGC’s position that executives should be aware of procurement issues in their strategic decision making, because PBGC relies to such a great extent on contracting, it is critical that its Procurement Director be more involved in the corporation’s strategic planning efforts. In addition to the Procurement Department, the Chief Management Officer currently oversees several additional functions, such as the Budget Department and the Human Resources Department, each vitally important to PBGC, and each with its own challenges. It is essential that an individual well-versed in procurement operations be more integrated into PBGC’s planning for the future. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Labor and to the Director of PBGC and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To assess the role contracting plays in the Pension Benefit Guaranty Corporation’s (PBGC) efforts to accomplish its mission, we collected and analyzed data on PBGC contracting activities, as well as on participants, plans, employees, and budget trends. We also collected data to identify trends regarding how PBGC has relied on contractors to conduct its work. To do this, we reviewed contracting data from fiscal years 2000 through 2007. We determined that PBGC’s data were sufficiently reliable for the purposes of this report. 
To assess the steps PBGC has taken to improve its acquisition infrastructure and develop a strategic approach, we compared PBGC’s acquisition infrastructure to standards outlined in GAO’s acquisition framework. Use of the framework enabled us to conduct a high-level, qualitative assessment of the strengths and weaknesses of PBGC’s contracting function. Specifically, we evaluated PBGC’s acquisition infrastructure in four key areas—organizational alignment and leadership, policies and procedures, human capital, and information management. We also reviewed prior GAO work on best practices in strategic approaches to contracting and compared PBGC’s current operations to best practices. To identify the strategies that PBGC uses to monitor contracts, we reviewed applicable laws, regulations, policies and guidance regarding contract management at PBGC. Specifically, we reviewed OFPP guidance related to performance-based contracting to understand PBGC’s adherence to federal policy on the subject. We also conducted a review of six contract files to assess file fitness and completeness, along with monitoring and oversight improvements. To assess the steps PBGC has taken to improve its contract oversight processes to ensure accountability, we reviewed our findings from our 2000 report and followed up on improvements PBGC has made since then to its contract monitoring procedures. For each objective, we interviewed PBGC senior executives, managers, and programming and contracting staff at headquarters, as well as selected contractors. We also interviewed officials from PBGC’s Office of Inspector General and reviewed relevant Inspector General reports. We conducted this performance audit from May 2007 to August 2008, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following team members made key contributions to this report: Blake Ainsworth, Assistant Director; Lara Laufer and Monika Gomez, Analysts-in-Charge; Jeffrey Bernstein; Susannah Compton; Jena Sinkfield; Najeema Washington; and Craig Winslow. Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 6, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Framework for Assessing the Acquisition Function at Federal Agencies. GAO-05-218G. Washington, D.C.: September 2005. Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenge. GAO-05-772T. Washington, D.C.: June 9, 2005. Contract Management: Opportunities to Improve Surveillance on Department of Defense Service Contracts. GAO-05-274. Washington, D.C.: March 17, 2005. Federal Procurement: Spending and Workforce Trends. GAO-03-443. Washington, D.C.: April 30, 2003. Contract Management: Guidance Needed for Using Performance-Based Service Contracting. GAO-02-1049. Washington, D.C.: September 23, 2002. Best Practices: Taking a Strategic Approach Could Improve DOD’s Acquisition of Services. GAO-02-230. Washington, D.C.: January 18, 2002. Pension Benefit Guaranty Corporation: Appearance of Improper Influence in Certain Contract Awards. T-OSI-00-17. Washington, D.C.: September 21, 2000. Pension Benefit Guaranty Corporation: Contract Management Needs Improvement. T-HEHS-00-199. Washington, D.C.: September 21, 2000. Pension Benefit Guaranty Corporation: Contracting Management Needs Improvement. GAO/HEHS-00-130.
Washington, D.C.: September 18, 2000. High-Risk Series: An Overview. GAO/HR-95-1. Washington, D.C.: February 1, 1995. High-Risk Series: Pension Benefit Guaranty Corporation. GAO/HR-93-5. Washington, D.C.: December 1, 1992.

Related PBGC Inspector General Reports: Trend Analysis Report: PBGC Procurement Issues From 2000-2007, 2007-6/CA-0036. Washington, D.C.: July 26, 2007. Evaluation of the Field Benefits Administration Concept, 2004-9/23178. Washington, D.C.: April 30, 2004.

The Pension Benefit Guaranty Corporation (PBGC) insures the pensions of more than 44 million workers in over 30,000 employer-sponsored defined benefit pension plans. In response to growing workloads, PBGC has come to rely heavily on contractors to conduct its work. GAO was asked to report on (1) the role that contracting plays in PBGC's efforts to accomplish its mission, (2) the steps PBGC has taken to improve its acquisition infrastructure and develop a strategic approach to guide its contracting activities, and (3) the steps PBGC has taken to improve its contract oversight processes to ensure accountability. To address these issues, we interviewed PBGC officials and selected contractors; reviewed data on PBGC's contracting activities; identified changes PBGC is making to contracting procedures; and identified strategies PBGC uses to monitor contracts. Contracting plays a central role in helping PBGC achieve its mission and address unpredictable workloads. Since the mid-1980s, PBGC has had contracts covering a wide range of services, including the administration of terminated plans, payment of benefits, customer communication, legal assistance, document management, and information technology. PBGC's workforce currently consists of about 800 federal employees, and PBGC utilizes the services of about 1,500 contract employees.
From fiscal year 2000 through 2007, PBGC's contract spending increased steadily along with its overall budget and workload, and its use of contract employees has outpaced its hiring of federal employees. As its workloads have grown due to a significant number of large pension plan terminations, PBGC has relied on contractors to supplement its workforce, acknowledging that it has difficulty anticipating its workloads due to unpredictable economic conditions. PBGC is taking steps to improve its acquisition infrastructure, but the Procurement Department is not yet part of PBGC's strategic decision-making process. In 2007, PBGC began to take steps to realign its Procurement Department, update contracting policies and processes, upgrade the skills of Procurement Department staff, and better track contracting data. PBGC's efforts begin to provide an improved foundation for the contracting function; however, these efforts are early steps and more remains to be done. PBGC has not fully integrated its contracting function at the corporate level; the Procurement Department is not included in corporate-level strategic planning and does not have a presence on PBGC's relevant strategic teams. PBGC has made improvements to contractor oversight and has begun to implement performance-based contracting that offers the potential for better contract outcomes, but also creates new challenges for contract oversight and monitoring efforts. PBGC has implemented new contract monitoring activities, improved oversight activities for some of its major contracts, and developed comprehensive procedures to direct contracting activities. For its field benefit administration office contracts, PBGC developed performance measures and scorecards, providing feedback about contractor performance in terms of timeliness and accuracy of benefit payments. Despite these improvements, most of PBGC's current contracts still lack performance incentives and methods to hold contractors accountable. 
PBGC recently began awarding more performance-based contracts as a means to achieve better outcomes. Although performance-based contracting is recognized as a viable way to get better results from contractors, GAO and others have identified common challenges agencies face when implementing this approach--from deciding which contracts are appropriate for a performance-based approach to deciding which outcomes to measure and emphasize. PBGC procurement officials recognize the benefits and challenges of performance-based contracting and that they must provide additional oversight of contracts and a different approach to contract monitoring that focuses on outcomes rather than processes.
Before advanced computerized techniques for aggregating, analyzing, and disseminating data came into widespread use, personal information contained in paper-based public records at courthouses or other government offices was relatively difficult to obtain, usually requiring a personal visit to inspect the records. Nonpublic information, such as personal information contained in product registrations, insurance applications, and other business records, was also generally inaccessible. In recent years, however, advances in technology have spawned information reseller businesses that systematically collect extensive amounts of personal information from a wide variety of sources and make it available electronically over the Internet and by other means to customers in both government and the private sector. This automation of the collection and aggregation of multiple-source data, combined with the ease and speed of its retrieval, has dramatically reduced the time and effort needed to obtain information of this type. Among the primary customers of information resellers are financial institutions (including insurance companies), retailers, law offices, telecommunications and technology companies, and marketing firms. We use the term “information resellers” to refer to businesses that vary in many ways but have in common the fact that they collect and aggregate personal information from multiple sources and make it available to their customers. These businesses do not all focus exclusively on aggregating and reselling personal information. For example, Dun & Bradstreet primarily provides information on commercial enterprises for the purpose of contributing to decision making regarding those enterprises. In doing so, it may supply personal information about individuals associated with those commercial enterprises.
To a certain extent, the activities of information resellers may also overlap with the functions of consumer reporting agencies, also known as credit bureaus—entities that collect and sell information about individuals’ creditworthiness, among other things. As is discussed further below, to the extent that information resellers perform the functions of consumer reporting agencies, they are subject to legislation specifically addressing that industry, particularly the Fair Credit Reporting Act. Information resellers obtain personal information from many different sources. Generally, three types of information are collected: public records, publicly available information, and nonpublic information. Public records are a primary source of information about consumers, available to anyone, and can be obtained from governmental entities. What constitutes public records is dependent upon state and federal laws, but generally these include birth and death records, property records, tax lien records, motor vehicle registrations, voter registrations, licensing records, and court records (including criminal records, bankruptcy filings, civil case files, and legal judgments). Publicly available information is information not found in public records but nevertheless publicly available through other sources. These sources include telephone directories, business directories, print publications such as classified ads or magazines, Internet sites, and other sources accessible by the general public. Nonpublic information is derived from proprietary or nonpublic sources, such as credit header data, product warranty registrations, and other application information provided to private businesses directly by consumers. 
Private sector businesses rely on information resellers for information to support a variety of activities, such as conducting pre-employment background checks on prospective employees; verifying individuals’ identities by reviewing records of their personal information; marketing commercial products to consumers matching specified criteria; and preventing financial fraud by examining insurance, asset, and other financial record information. Typically, while information resellers may collect and maintain personal information in a variety of databases, they provide their customers with a single, consolidated online source for a broad array of personal information. Figure 1 illustrates how information is collected from multiple sources and ultimately accessed by customers, including government agencies, through contractual agreements. In addition to providing consolidated access to personal information through Internet-based Web sites, information resellers offer a variety of products tailored to the specific needs of various lines of business. For example, an insurance company could obtain different products covering police and accident reports, insurance carrier information, vehicle owner verification or claims history, or online public records. Typically, services offered to law enforcement officers include more information—including sensitive information, such as full Social Security numbers and driver’s license numbers—than is offered to other customers. There is no single federal law that governs all use or disclosure of personal information. Instead, U.S. law includes a number of separate statutes that provide privacy protections for information used for specific purposes or maintained by specific types of entities. The major requirements for the protection of personal privacy by federal agencies come from two laws, the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. 
The Federal Information Security Management Act of 2002 (FISMA) also addresses the protection of personal information in the context of securing federal agency information and information systems. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The act describes a “record” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public by a notice in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended “routine” uses of data, and procedures that individuals can use to review and correct personal information. The act’s requirements also apply to government contractors when agencies contract for the development and maintenance of a system of records to accomplish an agency function. The act limits its applicability to cases in which systems of records are maintained specifically on behalf of a government agency. Several provisions of the act require agencies to define and limit themselves to specific predefined purposes. For example, the act requires that to the greatest extent practicable, personal information should be collected directly from the subject individual when it may affect an individual’s rights or benefits under a federal program. 
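The “system of records” test described above turns on two conditions: agency control of the records and retrieval by name or another personal identifier. As a purely illustrative sketch (the function and the example identifier names are hypothetical, not drawn from the statute):

```python
def is_system_of_records(under_agency_control: bool,
                         retrieval_keys: set) -> bool:
    """Illustrative sketch of the Privacy Act's 'system of records'
    definition: a group of records under agency control from which
    information is retrieved by the individual's name or by an
    individual identifier."""
    # Example personal identifiers; the statute covers any identifying
    # number, symbol, or other identifier assigned to the individual.
    personal_identifiers = {"name", "ssn", "employee_number"}
    return under_agency_control and bool(retrieval_keys & personal_identifiers)
```

Under this reading, a database containing personal information but indexed only by, say, case topic would not meet the definition, because information is not retrieved by a personal identifier.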
The act also requires that an agency inform individuals whom it asks to supply information of (1) the authority for soliciting the information and whether disclosure of such information is mandatory or voluntary; (2) the principal purposes for which the information is intended to be used; (3) the routine uses that may be made of the information; and (4) the effects on the individual, if any, of not providing the information. According to OMB, this requirement is based on the assumption that individuals should be provided with sufficient information about the request to make a decision about whether to respond. In handling collected information, the Privacy Act also requires agencies to, among other things, allow individuals to (1) review their records (meaning any information pertaining to them that is contained in the system of records), (2) request a copy of their record or information from the system of records, and (3) request corrections in their information. Such provisions can provide a strong incentive for agencies to correct any identified errors. Agencies are allowed to claim exemptions from some of the provisions of the Privacy Act if the records are used for certain purposes. For example, records compiled for criminal law enforcement purposes can be exempt from a number of provisions, including (1) the requirement to notify individuals of the purposes and uses of the information at the time of collection and (2) the requirement to ensure the accuracy, relevance, timeliness, and completeness of records. A broader category of investigative records compiled for criminal or civil law enforcement purposes can also be exempted from a somewhat smaller number of Privacy Act provisions, including the requirement to provide individuals with access to their records and to inform the public of the categories of sources of records. 
In general, the exemptions for law enforcement purposes are intended to prevent the disclosure of information collected as part of an ongoing investigation that could impair the investigation or allow those under investigation to change their behavior or take other actions to escape prosecution. The E-Government Act of 2002 requires agencies to conduct privacy impact assessments (PIAs), which are analyses of how personal information is handled: (i) to ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (ii) to determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (iii) to examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. Agencies must conduct PIAs (1) before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form or (2) before initiating any new data collections involving personal information that will be collected, maintained, or disseminated using information technology if the same questions are asked of 10 or more people. OMB guidance also requires agencies to conduct PIAs when a system change creates new privacy risks, for example, changing the way in which personal information is being used. The requirement does not apply to all systems. For example, no assessment is required when the information collected relates to internal government operations, the information has been previously assessed under an evaluation similar to a PIA, or when privacy issues are unchanged. FISMA also addresses the protection of personal information. FISMA defines federal requirements for securing information and information systems that support federal agency operations and assets; it requires agencies to develop agencywide information security programs that extend to contractors and other providers of federal data and systems. 
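The PIA triggers and exceptions described above reduce to a small set of decision rules. The sketch below is illustrative only; the data model and field names are hypothetical and do not come from OMB guidance:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """Hypothetical description of a planned system or data collection."""
    kind: str                        # "it_system" or "data_collection"
    identifiable_info: bool = False  # handles information in identifiable form
    respondents: int = 0             # people asked the same questions
    internal_ops_only: bool = False  # relates only to internal govt operations
    previously_assessed: bool = False  # already evaluated under a PIA-like review
    new_privacy_risks: bool = False  # a change has created new privacy risks

def pia_required(a: Activity) -> bool:
    """Sketch of the E-Government Act / OMB PIA triggers described in the text."""
    # Exceptions: internal operations, or a prior assessment with unchanged risks.
    if a.internal_ops_only:
        return False
    if a.previously_assessed and not a.new_privacy_risks:
        return False
    if not a.identifiable_info:
        return False
    # Trigger (1): developing or procuring IT that collects, maintains,
    # or disseminates personally identifiable information.
    if a.kind == "it_system":
        return True
    # Trigger (2): a new collection asking the same questions of 10 or more people.
    if a.kind == "data_collection" and a.respondents >= 10:
        return True
    # OMB guidance also requires a PIA when a change creates new privacy risks.
    return a.new_privacy_risks
```

A survey of 5 people would fall outside trigger (2), while procuring a new system that maintains identifiable information would require an assessment regardless of respondent count.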
Under FISMA, information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction, including controls necessary to preserve authorized restrictions on access and disclosure to protect personal privacy, among other things. OMB is tasked with providing guidance to agencies on how to implement the provisions of the Privacy Act and the E-Government Act and has done so, beginning with guidance on the Privacy Act, issued in 1975. The guidance provides explanations for the various provisions of the law as well as detailed instructions for how to comply. OMB’s guidance on implementing the privacy provisions of the E-Government Act of 2002 identifies circumstances under which agencies must conduct PIAs and explains how to conduct them. OMB has also issued guidance on implementing the provisions of FISMA. Although federal laws do not specifically regulate the information reseller industry as a whole, they provide safeguards for personal information under certain specific circumstances, such as when financial or health information is involved, or for such activities as pre-employment background checks. Specifically, the Fair Credit Reporting Act, the Gramm-Leach-Bliley Act, the Driver’s Privacy Protection Act, and the Health Insurance Portability and Accountability Act all restrict the ways in which businesses, including information resellers, may use and disclose consumers’ personal information (see app. II for more details about these laws). The Gramm-Leach-Bliley Act, for example, limits financial institutions’ disclosure of nonpublic personal information to nonaffiliated third parties and requires companies to give consumers privacy notices that explain the institutions’ information sharing practices. Consumers then have the right to limit some, but not all, sharing of their nonpublic personal information. 
As shown in table 1, these laws either restrict the circumstances under which entities such as information resellers are allowed to disclose personal information or restrict the parties with whom they are allowed to share information. Information resellers are also affected by various state laws. For example, California state law requires businesses to notify consumers about security breaches that could directly affect them. Legal requirements, such as the California law, led ChoicePoint, a large information reseller, to notify its customers in mid-February 2005 of a security breach in which unauthorized persons gained access to personal information from its databases. Since the ChoicePoint notification, bills requiring some form of notification upon a security breach have been introduced in at least 35 states and enacted in at least 22. The Fair Information Practices are a set of internationally recognized privacy protection principles. First proposed in 1973 by a U.S. government advisory committee, the Fair Information Practices were intended to address what the committee termed a poor level of protection afforded to privacy under contemporary law. A revised version of the Fair Information Practices, developed by the Organization for Economic Cooperation and Development (OECD) in 1980, has been widely adopted. The OECD principles are shown in table 2. The Fair Information Practices are, with some variation, the basis of privacy laws and related policies in many countries, including the United States, Germany, Sweden, Australia, New Zealand, and the European Union. They are also reflected in a variety of federal agency policy statements, beginning with an endorsement of the OECD principles by the Department of Commerce in 1981, and including policy statements of the DHS, Justice, Housing and Urban Development, and Health and Human Services. 
In 2004, the Chief Information Officers Council issued a coordinating draft of its Security and Privacy Profile for the Federal Enterprise Architecture that links privacy protection with a set of acceptable privacy principles corresponding to the OECD’s version of the Fair Information Practices. The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Striking that balance varies among countries and among types of information (e.g., medical versus employment information). The Fair Information Practices also underlie the provisions of the Privacy Act of 1974. For example, the system of records notice required under the Privacy Act embodies the purpose specification, openness, and individual participation principles in that it provides a public accounting through the Federal Register of the purpose and uses for personal information, and procedures by which individuals may access and correct, if necessary, information about themselves. Further, the E-Government Act’s requirement to conduct PIAs likewise reflects the Fair Information Practices. Under the act, agencies are to make these assessments publicly available, if practicable, through agency Web sites or by publication in the Federal Register, or other means. To the extent that such assessments are made publicly available, they also provide notice to the public about the purpose of planned information collections and the planned uses of the information being collected. A number of congressional hearings were held and bills introduced in 2005 in the wake of widely publicized data security breaches at major information resellers such as ChoicePoint and LexisNexis as well as other firms. 
In March 2005, the House Subcommittee on Commerce, Trade, and Consumer Protection of the House Energy and Commerce Committee held a hearing entitled “Protecting Consumers’ Data: Policy Issues Raised by ChoicePoint,” which focused on potential remedies for security and privacy concerns regarding information resellers. Similar hearings were held by the House Energy and Commerce Committee and by the U.S. Senate Committee on Commerce, Science, and Transportation in spring 2005. The heightened interest in this subject led a number of Members of Congress to propose a variety of bills aimed at regulating companies that handle personal information, including information resellers. Several of these bills require companies such as information resellers to notify the public of security breaches, while a few also allow consumers to “freeze” their credit (i.e., prevent new credit accounts from being opened without special forms of authentication), or see and correct personal information contained in reseller data collections. Other proposed legislation includes (1) the Data Accountability and Trust Act, requiring security policies and procedures to protect computerized data containing personal information and nationwide notice in the event of a security breach, and (2) the Personal Data Privacy and Security Act of 2005, requiring data brokers to disclose personal electronic records pertaining to an individual and inform individuals on procedures for correcting inaccuracies. Primarily through governmentwide contracts, Justice, DHS, State, and SSA reported using personal information obtained from resellers for a variety of purposes, including law enforcement, counterterrorism, fraud detection/prevention, and debt collection. Most uses by Justice were for law enforcement and counterterrorism, such as investigations of fugitives and obtaining information on witnesses and assets held by individuals of interest. 
DHS also used reseller information primarily for law enforcement and counterterrorism, such as screening vehicles entering the United States. State and SSA reported acquiring personal information from information resellers for fraud detection and investigation, identity verification, and benefit eligibility determination. The four agencies reported approximately $30 million in contractual arrangements with information resellers in fiscal year 2005. Justice accounted for most of the funding (about 63 percent). Approximately 91 percent of agency uses of reseller data were in the categories of law enforcement (69 percent) or counterterrorism (22 percent). Figure 2 details contract values categorized by their reported use. (Details on uses by each agency are given in the individual agency discussions.) According to Justice contract documentation, access to up-to-date and comprehensive public record information is a critical ongoing mission requirement, and the department relies on a wide variety of information resellers—including ChoicePoint, Dun & Bradstreet, LexisNexis, and West—to meet that need. Departmental use of information resellers was primarily for purposes related to law enforcement (75 percent) and counterterrorism (18 percent), including support for criminal investigations, location of witnesses and fugitives, information on assets held by individuals under investigation, and detection of fraud in prescription drug transactions. In fiscal year 2005, Justice and its components reported approximately $19 million in acquisitions from information resellers involving personal information. The department acquired these services primarily through use of GSA’s Federal Supply Schedule offerings, including a blanket purchase agreement with ChoicePoint valued at approximately $15 million. 
Several component agencies, such as the Federal Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), and the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) placed orders with information resellers based on the schedules. In addition, for fiscal year 2005, Justice established separate departmentwide contracts with LexisNexis and West valued at $4.5 million and $5.2 million, respectively. Tasked to protect and defend the United States against terrorist and foreign intelligence threats and to enforce criminal laws, the FBI is Justice’s largest user of information resellers, with about $11 million in contracts in fiscal year 2005. The majority of FBI’s use involves two major programs, the Public Source Information Program and the Foreign Terrorist Tracking Task Force (FTTTF). In support of the investigative and intelligence missions of the FBI, the Public Source Information Program provides all offices of the FBI with access via the Internet to public record, legal, and news media information available from various online commercial databases. These databases are used to assist with investigations by identifying the location of individuals and identifying alias names, Social Security numbers, relatives, dates of birth, telephone numbers, vehicles, business affiliations, other associations, and assets. Public Source Information Program officials reported that use of these commercial databases often results in new information regarding the subject of the investigation. Officials noted that commercial databases are used in preliminary investigations, and that subsequently, investigative personnel must verify the results of each search. 
The FBI’s FTTTF also contracts with several information resellers (1) to assist in fulfilling its mission of assisting federal law enforcement and intelligence agencies in locating foreign terrorists and their supporters who are in or have visited the United States and (2) to provide information to other law enforcement and intelligence community agencies that can lead to their surveillance, prosecution, or removal. As we previously reported, FTTTF makes use of personal information from several commercial sources to analyze intelligence and detect terrorist activities in support of ongoing investigations by law enforcement agencies and the intelligence community. Information resellers provide FTTTF with names, addresses, telephone numbers, and other biographical and demographical information as well as legal briefs, vehicle and boat registrations, and business ownership records. Other Justice components reported using personal information from information resellers to support the conduct of investigations and other law enforcement-related activities. For example, the U.S. Marshals Service uses an information reseller to, among other things, locate fugitives by identifying a fugitive’s relatives and their addresses. Through interviews with relatives, a U.S. Marshal may be able to ascertain the location of a fugitive and subsequently apprehend the individual. DEA, the second largest Justice user of information resellers in fiscal year 2005, obtains reseller data to detect fraud in prescription drug transactions. Through these data, DEA agents can detect irregular prescription patterns for specific drugs and trace this information to the pharmacy and prescribing doctor. DEA also uses an information reseller to locate individuals in asset forfeiture cases. 
Reseller data allows DEA to identify all possible addresses for an individual in order to meet the agency’s obligation to make a reasonable effort to notify individuals of seized property and inform them of their rights to contest the seizures. Other uses reported by Justice components are not related to law enforcement. For example, uses by the U.S. Trustees, Antitrust, Civil, Tax, and Criminal Divisions include ascertaining the financial status of individuals for debt collection purposes or bankruptcy proceedings or for the location of individuals for court proceedings. The Executive Office for U.S. Attorneys uses information resellers to ascertain the financial status of those indebted to the United States in order to assess the debtor’s ability to repay the debt. According to officials, information reseller databases may reveal assets that a debtor is attempting to conceal. Further, the U.S. Attorneys use information resellers to locate victims of federal crime in order to notify these individuals of relevant court proceedings pursuant to the Justice for All Act. Table 3 details in aggregate the vendors, fiscal year 2005 contract values, and reported uses for contracts with information resellers by major Justice components. In fiscal year 2005, DHS and its components reported that they used information reseller data primarily for law enforcement purposes, such as for developing leads on subjects in criminal investigations and detecting fraud in immigration benefit applications (part of enforcing the immigration laws). Counterterrorism uses involved screening programs at the northern and southern borders as well as at the nation’s airports. DHS reported planning to spend about $9 million acquiring personal information from resellers in fiscal year 2005. 
DHS acquired these services primarily for law enforcement (63 percent) and counterterrorism (35 percent) purposes through FEDLINK—a governmentwide contract vehicle provided by the Library of Congress—and GSA’s Federal Supply Schedule contracts as well as direct purchases by its components. DHS’s primary vehicle for acquiring data from information resellers was the FEDLINK contract vehicle, which DHS used to acquire reseller services from ChoicePoint ($4.1 million), Dun & Bradstreet ($640,000), LexisNexis ($2 million), and West ($1 million). U.S. Immigration and Customs Enforcement (ICE) is DHS’s largest user of personal information from resellers, with acquisitions worth over $4.3 million. The largest investigative component of DHS, ICE has as its mission to prevent acts of terrorism by targeting the people, money, and materials that support terrorist and criminal activities. ICE uses information resellers to collect personal information for criminal investigative purposes and to perform background security checks. Data commonly obtained include address and vehicle information; according to officials, this information is either used to verify data already collected or is itself verified by investigators through other means. For example, ICE’s Federal Protective Service has about 50 users who access an information reseller database to assist in properly identifying and locating potential criminal suspects. Investigators may verify an address obtained from the database by confirming billing information with a utility company or by conducting “drive-by” surveillance. The Federal Protective Service views information obtained from resellers as “raw” or “unverified” data, which may or may not be of use to investigators. Other DHS components likewise reported using personal information from resellers to support investigations and other law enforcement-related activities. For example, U.S. 
Customs and Border Protection (CBP)—tasked with managing, controlling, and protecting the nation’s borders at and between the official ports of entry—uses information resellers for law enforcement, intelligence gathering, and prosecution support. Using these databases, investigators conduct queries on people, businesses, property, and corresponding links via a secure Internet connection. According to officials, information obtained is corroborated with other previously obtained data, open-source information, and investigative leads. CBP also uses a specially developed information reseller product to assist law enforcement officials in vehicle identification at northern and southern land borders. CBP uses electronic readers to capture license plate data on vehicles entering or exiting U.S. borders, converts the data to an electronic format, and transmits the data to an information reseller, which returns U.S. motor vehicle registration information to CBP. The license plate data, merged with the associated motor vehicle registration data provided by the reseller, are then checked against government databases in order to help assess risk related to vehicles (i.e., a vehicle whose license plate is associated with a law enforcement record might be referred for secondary examination). The Federal Emergency Management Agency (FEMA), charged with building and supporting the nation’s emergency management system, uses an information reseller to detect fraud in disaster assistance applications. FEMA uses this service to verify information that individuals present in their applications for disaster assistance via the Internet. At the time of application, an individual is required to pass an identity check that determines whether the presented identity exists, followed by an identity validation quiz to better ensure that the applicant corresponds to the identity presented. The information reseller is used to verify the applicant’s name, address, and Social Security number. 
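The CBP license-plate process described above is, in outline, a capture, lookup, merge, and check pipeline. The following sketch is purely illustrative: the in-memory “databases,” function names, and record fields are all stand-ins, since the actual CBP and reseller systems are not public:

```python
from typing import Optional

# Hypothetical stand-in for the reseller's motor vehicle registration data.
RESELLER_REGISTRATIONS = {
    "ABC1234": {"owner": "J. Doe", "make": "Ford", "state": "TX"},
}

# Hypothetical stand-in for government databases with law enforcement records.
LAW_ENFORCEMENT_PLATES = {"XYZ9999"}

def query_reseller(plate: str) -> Optional[dict]:
    """Simulates transmitting captured plate data to the reseller, which
    returns U.S. motor vehicle registration information (or nothing)."""
    return RESELLER_REGISTRATIONS.get(plate)

def screen_vehicle(plate: str) -> dict:
    """Merges the captured plate with registration data, then checks
    government records to help assess risk; a hit suggests referral
    for secondary examination."""
    return {
        "plate": plate,
        "registration": query_reseller(plate),
        "refer_secondary": plate in LAW_ENFORCEMENT_PLATES,
    }
```

The design point the text describes is that the reseller supplies only the registration lookup; the risk assessment against government databases happens on the government side after the merge.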
DHS is also using information resellers in its counterterrorism efforts. For example, the Transportation Security Administration (TSA), tasked with protecting the nation’s transportation systems, used data obtained from information resellers as part of a test associated with the development of its domestic passenger prescreening program, called “Secure Flight.” TSA’s plans for Secure Flight involve the submission of passenger information by an aircraft operator to TSA whenever a reservation is made for a flight in which the origin and destination are domestic airports. In the prescreening of airline passengers, this information would be compared with federal watch lists of individuals known or suspected of activities related to terrorism. TSA conducted a test designed to help determine the extent to which information resellers could be used to authenticate passenger identity information provided by air carriers. It plans to use the test results to determine whether commercial data can be used to improve the effectiveness of watch-list matching by identifying passengers who would not have been identified from passenger name records and government data alone. The test results also may be used to identify items of personally identifying information that should be required of passengers to improve aviation security. Table 4 provides detailed information about DHS uses of information resellers in fiscal year 2005, as reported by officials of the department’s components. In an effort to ensure the accuracy of Social Security benefit payments, SSA and its components reported approximately $1.3 million in contracts with information resellers in fiscal year 2005 for a variety of purposes: fraud prevention (66 percent), such as skip tracing and confirming suspected fraud related to workers compensation payments; obtaining information on criminal suspects for follow-up investigations (18 percent); and collecting debts (16 percent). 
SSA and its components acquired these services through the use of the GSA and FEDLINK governmentwide contracts and their own contracts. In fiscal year 2005, SSA contracted with ChoicePoint, LexisNexis, SourceCorp, and Equifax. The Office of the Inspector General (OIG), the largest user of information reseller data at SSA, supports the agency’s efforts to prevent fraud, waste, and abuse. The OIG uses several information resellers to assist investigative agents in detecting benefit abuse by Social Security claimants and to assist agents in locating claimants. For example, OIG agents access reseller data to verify the identity of subjects undergoing criminal investigations. Regional office agents may also use reseller data in investigating persons suspected of claiming disability fraudulently and draw upon assistance from OIG headquarters staff and state investigators from the state Attorney General’s office in these investigations. For example, the Northeastern Program Service Center, located in the New York branch of SSA, obtains New York State Workers Compensation Board data from SourceCorp, the only company legally permitted to maintain the physical and electronic records for New York State Workers Compensation. Through the use of this information, SSA can identify persons collecting workers compensation benefits but not reporting those benefits, as required, to the SSA. Table 5 details in aggregate the vendors, fiscal year 2005 contract values, and uses of contracts with information resellers reported by major SSA components. The Department of State and its components reported approximately $569,000 in contracts in fiscal year 2005 with information resellers, primarily for assistance in fraud-related activities through criminal investigations (51 percent), fraud detection (26 percent), and other uses (23 percent) such as background screening. State acquired information reseller services through the GSA schedule and a Justice blanket-purchase agreement. 
In fiscal year 2005, the majority of State contracts were with ChoicePoint; the agency also had contracts with LexisNexis, Equifax, and Metronet. State’s components reported use of these contracts mainly for passport-related activities. For example, several components of State accessed personal information to validate information submitted on immigrant and nonimmigrant visa petitions, such as marital or familial relationships, birth and identity information, and address validation. A major use of reseller data at State is by investigators acquiring information on suspects in passport and visa fraud cases. According to State, information reseller data are increasingly important to its operations, because the number of passport and visa fraud cases has increased, and successful investigations of passport and visa fraud are critical to combating terrorism. In addition to these uses, State acquires personal information through Equifax to support the financial background screening of its job applicants. Table 6 details the vendors, fiscal year 2005 contract values, and uses of contracts with information resellers reported by major State components. In fiscal year 2005, the four agencies acquired personal information primarily through governmentwide contracts, including GSA’s Federal Supply Schedule (52 percent) contracts and the Library of Congress’s FEDLINK contracts (28 percent). Components within these agencies also initiated separate contracts with resellers. The Department of Justice was the largest user, accounting for approximately $19 million of the $30 million total for all four agencies. Figure 3 shows the values of reseller data acquisition by agency for fiscal year 2005. In fiscal year 2005, the most common vehicles used among all four agencies to acquire personal information from information resellers were the governmentwide contracts made available through GSA’s Federal Supply Schedule. 
The GSA schedule provides agencies with simplified, streamlined contracting vehicles, allowing them to obtain access to information resellers’ services either by issuing task or purchase orders or by establishing blanket purchase agreements based on the schedule contracts. The majority of Justice’s acquisition of information reseller services was obtained through the GSA schedule, including a blanket purchase agreement with ChoicePoint that was also made available to non-Justice agencies (for example, the Departments of State and Health and Human Services). In addition, DHS components such as the U.S. Secret Service, as well as the SSA’s Office of Inspector General, made use of GSA schedule contracts with information resellers. The Federal Supply Schedule allows agencies to take advantage of prenegotiated contracts with a variety of vendors, including information resellers. GSA does not assess fees for the use of these contracts; rather, it funds the operation of the schedules in part by obtaining administrative fees from vendors on a quarterly basis. According to GSA officials, use of the schedule contracts allows agencies to obtain the best price and reduce their procurement lead time. Since these contracts have been prenegotiated, agencies do not need to issue their own solicitation. Instead, agencies may simply place a task order directly with the vendor, citing the schedule number. GSA’s role in administering these contracts is primarily to negotiate baseline contract requirements and pricing; it does not monitor which agencies are using its schedule contracts. GSA officials noted that the requirements contained in the schedule contracts are baseline, and agencies may add more stringent requirements to their individual task orders. Another contract vehicle commonly used to obtain personal information from information resellers was the Library of Congress’s FEDLINK service (28 percent). This vehicle was used by both DHS and SSA.
FEDLINK, an intragovernmental revolving fund, is a cooperative procurement, accounting, and training program designed to provide access to online databases, periodical subscriptions, books, and other library and information support services from commercial suppliers, including information resellers. At DHS, use of the FEDLINK service was the primary vehicle for contracting with information resellers. DHS also used GSA schedule buys, and some smaller purchases were made directly between DHS components and information resellers. The majority of SSA’s fiscal year 2005 acquisitions from information resellers were through FEDLINK, with some use of the GSA schedule contracts. FEDLINK allows agencies to take advantage of prenegotiated contracts at volume discounts with a variety of vendors, including information resellers. As with the GSA schedule contracts, the requirements of the FEDLINK contracts serve as a baseline, and agencies may add more stringent requirements if they so choose. FEDLINK offers two different options for using its contracts: direct express and transfer pay. The direct express option is similar to the GSA schedule process, in which the agency issues a purchase order directly to the vendor and cites the underlying FEDLINK contract. Under direct express, the ordering agency is responsible for managing the delivery of products and services and paying invoices, and the vendor pays an administrative fee to the Library. Under the transfer pay option, ordering agencies must sign an interagency agreement and pay an administrative fee to the Library. In turn, the ordering agencies receive additional administrative services. DHS used both the direct express and transfer pay options in fiscal year 2005, while SSA used transfer pay exclusively. Although the information resellers that do business with the federal agencies we reviewed have practices in place to protect privacy, these measures were not fully consistent with the Fair Information Practices. 
Most significantly, the first four principles, relating to collection limitation, data quality, purpose specification, and use limitation, are largely at odds with the nature of the information reseller business. These principles center on limiting the collection and use of personal information and require data accuracy based on that limited purpose and limited use of the information. However, the information reseller industry presupposes that the collection and use of personal information are not limited to specific purposes, but instead that information can be collected and made available to multiple customers for multiple purposes. Resellers make it their business to collect large amounts of personal information and to combine that information in new ways so that it serves purposes other than those for which it was originally collected. Further, they are limited in their ability to ensure the accuracy, currency, or relevance of their holdings, because these qualities may vary based on customers’ varying uses. Information reseller policies and procedures were consistent with aspects of the remaining four Fair Information Practices. Large resellers reported implementing a variety of security safeguards, such as stringent customer credentialing, to improve protection of personal information. Resellers also generally provided public notice of key aspects of their privacy policies and practices (relevant to the openness principle) and reported taking actions to ensure internal compliance with their own privacy policies (relevant to the accountability principle). However, resellers generally limited the extent to which individuals could gain access to personal information held about themselves, and because they obtain their information from other sources, most resellers also had limited provisions for correcting or deleting inaccurate information contained in their databases (relevant to the individual participation principle).
Instead, they directed individuals wishing to make corrections to contact the original sources of the data. Table 7 provides an overview of information resellers’ application of the Fair Information Practices. According to the collection limitation principle of the Fair Information Practices, the collection of personal information should be limited, information should be obtained by lawful and fair means, and, where appropriate, it should be collected with the knowledge and consent of the individual. The collection limitation principle also suggests that organizations could limit collection to the minimum amount of data necessary to process a transaction. In practice, resellers are limited in the personal information that they can obtain by laws that apply to specific kinds of information (for example, the Fair Credit Reporting Act and the Gramm-Leach-Bliley Act, which restrict the collection, use, and disclosure of certain consumer and financial data). One reseller reported that it also restricts collection of Social Security number information from public records, as well as collection of identifying information on children from public sources, such as telephone directories. Beyond specific legal restrictions, information resellers generally attempt to aggregate large amounts of personal information so as to provide useful information to a broad range of customers. For example, resellers collect personal information from a wide variety of sources, including state motor vehicle records; local government records on births, real property, and voter registrations; and various court records. Information resellers may also obtain information from telephone directories, Internet sites, and consumer applications for products or services. The widely varying sources and types of information demonstrate the broad nature of the collection of personal information. 
The amount and scope of information collected vary from company to company, and resellers use this information to offer a range of products tailored to different markets and uses. Regarding the principle that information should be obtained by lawful and fair means, resellers stated that they take steps to ensure that their collection of information is legal. For example, resellers told us that they obtain assurances from their data suppliers that information is legally collected from reputable sources. Further, they design their products and services to ensure they are in conformance with laws such as the Gramm-Leach-Bliley Act and the Fair Credit Reporting Act. Regarding the principle that, where appropriate, information should be collected with the knowledge and consent of the individual, resellers do not make provisions to notify the individuals involved when they obtain personal data from their many sources, including public records. Concomitantly, individuals are not afforded an opportunity to express or withhold their consent when the information is collected. Resellers said they believe it may not be appropriate or practical for them to provide notice or obtain consent from individuals because they do not collect information directly from them. One reseller noted that in many instances the company does not have a direct relationship with the data subject and is therefore not in a position to interact with the consumer for purposes such as providing notice. Further, this reseller stated its belief that requiring resellers to notify and obtain consent from each individual about whom they obtain information would result in consumers being overwhelmed with notices and negate the value of notice. Under certain conditions, some information resellers offer consumers an “opt-out” option—that is, individuals may request that information about themselves be suppressed from selected databases.
However, resellers generally offer this option only with respect to certain types of information and only under limited circumstances. For example, one reseller allows consumers to opt out of its marketing products but not other products, such as background screening and fraud detection products. The privacy policy for another information reseller states that it will allow certain individuals to opt out of its nonpublic information databases containing sensitive information under specific conditions: if the individual is a state, local, or federal law enforcement officer or public official whose position exposes him or her to a threat of imminent harm; if the individual is a victim of identity theft; or if the individual is at risk of physical harm. In order to exercise this option, consumers generally must provide satisfactory documentation to support the basis for their request. In any event, the reseller retains the right to determine (1) whether to grant or deny any request, (2) to which databases the request for removal will apply, and (3) the duration of the removal. Two resellers stated their belief that under certain circumstances it may not be appropriate to provide consumers with opportunities for opting out, such as for information products designed to detect fraud or locate criminals. These resellers stated that if individuals were permitted to opt out of fraud prevention databases, some of those opting out could be criminals, which would undermine the effectiveness and utility of these databases. According to the data quality principle, personal information should be relevant to the purpose for which it is collected, and should be accurate, complete, and current as needed for that purpose. 
Information resellers reported taking steps to ensure that they generally receive accurate data from their sources and that they do not introduce errors in the process of transcribing and aggregating information; however, they generally provide their customers with exactly the same data they obtain and do not claim or guarantee that the information is accurate for a specific purpose. Some resellers’ privacy policies state that they expect their data to contain some errors. Further, resellers varied in their policies on correcting data that was already inaccurate when they obtained it. One reseller stated that it would delete information in its databases that was found to be inaccurate. Another stated that even if an individual presents persuasive evidence that certain information is in error, the reseller generally does not make changes if the information comes directly from an official public source (unless instructed to do so by that source). Because they are not the original source of the personal information, information resellers generally direct individuals to the original sources to correct any errors. Several resellers stated that they would correct any identified errors introduced through their own processing and aggregation of data. While not providing specific assurance of the accuracy of the data they provide, information resellers reported that they take steps to ensure that their suppliers have data quality controls in place. For example, officials from one information reseller said they use a screening process to help determine whether they should use a particular supplier. As part of this process, the reseller assesses whether the supplier has internal controls in place that are in line with the reseller’s policies. Information resellers also reported that they conduct annual audits of their suppliers aimed at assessing the integrity and quality of the information they receive.
If these audits show that a supplier has failed to provide accurate, complete, and timely information, the reseller may discontinue using that supplier. Resellers also noted that data accuracy is contingent upon intended use. That is, data that may be perfectly adequate for one purpose may not be precise enough or appropriate for another purpose. While end users, such as federal agencies, may address data quality for their specific purposes, resellers—who maintain personal information for multiple purposes—are less able to achieve accuracy because they support multiple uses. Thus, resellers generally disclaim data accuracy and leave it to their customers to ensure that the data are accurate for their intended uses. One reseller stated that its customers understand the accuracy limitations of the data they obtain and take the potential for data inaccuracy into account when using the data. According to the purpose specification principle, the purpose for the collection of personal information should be disclosed before collection and upon any change to that purpose, and its use should be limited to that purpose and compatible purposes. While information resellers specify purpose in a general way by describing the types of businesses that use their data, they generally do not designate specific intended uses for each of their data collections. Resellers generally obtain information that has already been collected for a specific purpose and make that information available to their customers, who in turn have a broader variety of purposes for using it. For example, personal information originally submitted by a customer to register a product warranty could be obtained by a reseller and subsequently made available to another business or government agency, which might use it for an unrelated purpose, such as identity verification, background checking, or marketing.
In a general sense, information resellers specify their purpose by indicating (on company Web sites, for example) the business categories of the customers for whom they collect information. For example, reseller privacy policies generally state that resellers make personal information available for legitimate uses by business and government organizations. Examples of business categories may be provided, but resellers do not specify which types of information are to be used in which business categories. It is difficult for resellers to provide greater specificity because they make their data available to many customers for a wide range of legitimate purposes. As a result, the public is made aware only of the broad range of potential uses to which their personal information may be applied, rather than a specific use, as envisioned in the Fair Information Practices. Under the use limitation principle, personal information should not be disclosed or used for other than the originally specified purpose without consent of the individual or legal authority. However, because information reseller purposes are specified very broadly, it is difficult for resellers to ensure that use of the information in their databases is limited. As previously discussed, information reseller data may have many different uses, depending on the types of customers involved. Resellers do take steps to ensure that their customers’ use of personal information is limited to legally sanctioned purposes. Information resellers pass this responsibility to their customers through licensing agreements and contract terms and agreements. According to two large information resellers, customers are generally contractually required to use data from resellers appropriately and must agree to comply with applicable laws, such as the Gramm-Leach-Bliley Act, the Fair Credit Reporting Act, and the Driver’s Privacy Protection Act. 
For example, one information reseller uses a service agreement that includes provisions governing permissible use of information sought by the customer, the confidentiality of information provided, legal requirements under federal and state laws, and other customer obligations. The reseller reported that the company monitors its customers’ compliance by conducting periodic audits and taking appropriate actions in response to any audit findings. In a standardized agreement form used by another reseller, federal agencies must certify that they will use information obtained from the reseller only as permissible under the Gramm-Leach-Bliley Act and the Driver’s Privacy Protection Act. The service agreement identifies permissible purposes for information whose use is restricted by these laws and requires agencies to agree that they will use the information only in the performance or the furtherance of appropriate government activities. In conformance with the Gramm-Leach-Bliley Act permissible uses, the information reseller requires agencies to certify that they will use personal information “only as requested or authorized by the consumer.” The information resellers used by the federal agencies we reviewed generally also reported taking steps to ensure that access to certain sensitive types of personally identifiable information is limited to certain customers and uses. For example, two resellers reported that they provide full Social Security numbers and driver’s license numbers only to specific types of customers, including law enforcement agencies and insurance companies, and for purposes such as employment or tenant screening. While actions such as these are useful in protecting privacy and are consistent with the use limitation principle in that they narrow the range of potential uses for this type of information, they are not equivalent to limiting use only to a specific predefined purpose. 
Without limiting use to predefined purposes, resellers cannot provide individuals with assurance that their information will only be accessed and used for the purpose originally specified when the information was collected. According to the security safeguards principle, personal information should be protected with reasonable safeguards against risks such as loss or unauthorized access, destruction, use, modification, or disclosure. While we did not evaluate the effectiveness of resellers’ information security programs, resellers we spoke with said they employ various safeguards to protect consumers’ personal information. They implemented these safeguards in part for business reasons but also because federal laws require such protections. Resellers describe these safeguards in various policy statements, such as online and data privacy policies or privacy statements posted on Internet sites. Resellers also generally had information security plans describing, among other things, access controls for information and systems, document management practices, incident reporting, and premises security. Given recent incidents, large information resellers reported having taken steps to improve their safeguards against unauthorized access. In a well-publicized incident, in February 2005, ChoicePoint disclosed that unauthorized individuals had gained access to personal information by posing as a firm of private investigators. In the following month, LexisNexis disclosed that unauthorized individuals had gained access to personal information through the misappropriation of user IDs and passwords from legitimate customers. These disclosures were required by state law, as previously discussed. In January 2006, ChoicePoint reached a settlement with the Federal Trade Commission over charges that the company did not have reasonable procedures to verify the identity of prospective new users.
The company agreed to implement new procedures to ensure that it provides consumer reports only to legitimate businesses for lawful purposes. In the meantime, both information resellers reported that they had taken steps to improve their procedures for authorizing customers to have access to sensitive information, such as Social Security numbers. For example, one reseller established a credentialing task force with the goal of centralizing its customer credentialing process. In order for customers of this reseller to obtain products and services containing sensitive personal information, they must now undergo a credentialing process involving a site visit by the information reseller to verify the accuracy of information reported about the business. Applicants are then scored against a credentialing checklist to determine whether they will be granted access to sensitive information. In addition, both resellers reported efforts to strengthen user ID and password protections and restrict access to sensitive personal information (including full driver’s license numbers and Social Security numbers) to a limited number of customers, such as law enforcement agencies (others would be able to view masked information). Although we did not test the effectiveness of these measures, if implemented correctly, they could help provide assurance that sensitive information is protected appropriately. In addition to enhancing safeguards on customer access authorizations, resellers have instituted a variety of other security controls. For example, three large information resellers have implemented physical safeguards at their data centers, such as continuous monitoring of employees entering and exiting facilities, monitoring of activity on customer accounts, and strong authentication of users entering and exiting secure areas within the data centers.
Officials at one reseller told us that security profiles were established for each employee that restrict access to various sections of the center based upon employee job functions. Computer rooms were further protected with a combined system of biometric hand readers and security codes. Security cameras were placed throughout the facility for continuous recording of activity and review by security staff. Information resellers also had contingency plans in place to continue or resume operations in the event of an emergency. Information resellers reported that on an annual basis, or more frequently if needed, they conduct security risk assessments as well as internal and external security audits. These assessments address such topics as vulnerabilities to internal or external security threats, reporting and responding to security incidents, controls for network and physical facilities, and business continuity management. The assessments also addressed strategies for mitigating potential or identified risks. If properly implemented, security measures such as those reported by information resellers could contribute to effective implementation of the security safeguards principle. According to the openness principle, the public should be informed about an organization’s privacy policies and practices, and individuals should have ready means of learning about the organization’s use of personal information. To address openness, information resellers took steps to inform the public about key aspects of their privacy policies. They used means such as company Web sites and brochures to inform the public of specific policies and practices regarding the collection and use of personal information. Reseller Web sites also generally provided information about the types of information products the resellers offered—including product samples—as well as general descriptions about the types of customers served. 
Several Web sites also provided advice to consumers on protecting personal information and discussed what to do if individuals suspect they are victims of identity theft. Providing public notice of privacy policies informs individuals of what steps an organization takes to protect the privacy of the personal information it collects and helps to ensure the organization’s accountability for its stated policies. According to the individual participation principle, individuals should have the right to know about the collection of personal information, to access that information, to request correction, and to challenge the denial of those rights. Information resellers generally allow individuals access to their personal information. However, this access is limited, as is the opportunity to make corrections. Resellers may provide an individual with a report containing certain types of information, such as compilations of public records; however, the report may not include all information maintained by the resellers about that individual. For example, one information reseller stated that it offers a free report, under certain circumstances, on an individual’s claims history, employment history, or tenant history. Resellers may offer basic reports to individuals at no cost, but they generally charge for reports on additional information. A free consumer report, such as an employment history report, typically excludes information such as driver’s license data, family information, and credit header data that a reseller may possess in other databases. Although individuals can access information about themselves, if they find inaccuracies, they generally cannot have these corrected by the resellers. Information resellers direct individuals to take their cases to the original data sources—such as courthouses or other local government agencies—and attempt to have the inaccuracy corrected there.
Several resellers stated that they would correct any identified errors introduced through their own processing and aggregation of data. As discussed above, resellers, as a matter of policy, do not make corrections to data obtained from other sources, even if the consumer provides evidence that the data are wrong. According to resellers, making corrections to their own databases is extremely difficult, for several reasons. First, the services these resellers provide concentrate on providing references to a particular individual from many sources, rather than distilling only the most accurate or current reference. For example, a reseller might have many instances in its databases of a particular individual’s current address. Although most might be the same, there could be errors as well. Resellers generally would report the information as they have it rather than attempting to determine which entry is correct. This information is important to customers such as law enforcement agencies. Further, resellers stated that making corrections to their databases could be ineffective because the data are continually refreshed with updated data from the source, and thus any correction is likely to be changed back to its original state the next time the data are updated. In addition, as discussed in the collection limitation section, resellers stated their belief that it would not be appropriate to allow the public to access and correct information held for certain purposes, such as fraud detection and locating criminals, since providing such rights could undermine the effectiveness of these uses (e.g., by allowing criminals to access and change their information). However, as a result of these practices, individuals cannot know the full extent of personal information maintained by resellers or ensure its accuracy. 
According to the accountability principle, individuals controlling the collection or use of personal information should be accountable for taking steps to ensure the implementation of the Fair Information Practices. Although information resellers’ overall application of the Fair Information Practices varied, each reseller we spoke with reported actions to ensure compliance with its own privacy policies. For example, resellers reported designating chief privacy officers to monitor compliance with internal privacy policies and applicable laws (e.g., the Gramm-Leach-Bliley Act and the Driver’s Privacy Protection Act). Information resellers reported that these officials had a range of responsibilities aimed at ensuring accountability for privacy policies, such as establishing consumer access and customer credentialing procedures, monitoring compliance with federal and state laws, and evaluating new sources of data (e.g., cell phone records). Auditing of an organization’s practices is one way of ensuring accountability for adhering to privacy policies and procedures. Although there are no industrywide standards requiring resellers to conduct periodic audits of their compliance with privacy policies, one information reseller reported using a third party to conduct privacy audits on an annual basis. Using a third party to audit compliance with privacy policies further helps to ensure that an information reseller is accountable for the implementation of its privacy practices. Establishing accountability is critical to the protection of privacy. Actions taken by data resellers should help ensure that their privacy policies are appropriately implemented. Agency practices for handling personal information acquired from information resellers did not always fully reflect the Fair Information Practices. 
Further, agencies generally lacked policies that specifically address their use of personal information from commercial sources, although DHS Privacy Office officials reported that they were drafting such a policy. As shown in table 8, four of the Fair Information Practices—the collection limitation, data quality, use limitation, and security safeguards principles—were generally reflected in agency practices. For example, several agency components (specifically, law enforcement agencies such as the FBI and the U.S. Secret Service) reported that in practice, they generally corroborate information obtained from resellers when it is used as part of an investigation. This practice is consistent with the data quality principle that data should be accurate, current, and complete. Agency policies and practices with regard to the other four principles, however, were uneven. Specifically, agencies did not always have policies or practices in place to address the purpose specification, openness, and individual participation principles with respect to reseller data. The inconsistencies in application of these principles as well as the lack of specific agency policies can be attributed in part to ambiguities in OMB guidance regarding the applicability of the Privacy Act to information obtained from resellers. Further, privacy impact assessments, which often are not conducted, are a valuable tool that could address important aspects of the Fair Information Practices. Finally, components within each of the four agencies did not consistently hold staff accountable by monitoring usage of personal information from information resellers and ensuring that it was appropriate; thus, their application of the accountability principle was uneven. The collection limitation principle establishes, among other things, that organizations should obtain only the minimum amount of personal data necessary to process a transaction. 
This principle also underlies the Privacy Act requirement that agencies maintain in their records “only such information about an individual as is relevant and necessary to accomplish a purpose of the agency.” For law enforcement and counterterrorism purposes, which accounted for 90 percent of usage in fiscal year 2005, agencies generally limited their personal data collection in that they reported obtaining information only on specific individuals under investigation or associates of those individuals. Having initiated investigations on specific individuals, however, agencies generally reported that they obtained as much personal information as possible about the individuals being investigated, because law enforcement investigations require pursuing as many investigative leads as possible. The data quality principle states, among other things, that personal information should be relevant to the purpose for which it is collected and should be accurate. This principle is mirrored in the Privacy Act’s requirement that agencies maintain all records used to make determinations about an individual with such accuracy, relevance, timeliness, and completeness as is reasonably necessary to ensure fairness. Agencies reported taking steps to mitigate the risk of inaccurate information reseller data by corroborating information obtained from resellers. Agency officials described the practice of corroborating information as a standard element of conducting investigations. Officials from several law enforcement component agencies, including ATF and DEA, said corroboration was necessary to build legally sound cases from investigations. For example, U.S. Secret Service officials reported that they instruct agents that information obtained from resellers should be independently corroborated and that none of it should be used as probable cause for obtaining warrants.
Further, FBI officials from FTTTF noted that obtaining data from information resellers helps to improve the overall quality and accuracy of the data in investigative files. Officials stated that the variety of private companies providing personal information enhances the value, quality, and diversity of the information used by the FBI, noting that a decision to put an individual under arrest is based on “probable cause,” which is determined by a preponderance of evidence rather than any single source of information, such as information in a reseller’s database. Likewise, for uses not related to law enforcement, such as debt collection and fraud detection and prevention, agency components reported procedures for mitigating potential problems with the accuracy of data provided by resellers by obtaining additional information from other sources when necessary. For example, the Executive Office for U.S. Attorneys uses information resellers to obtain information on assets possessed by an individual indebted to the United States. According to officials, should information contained in the information reseller databases conflict with information provided by an individual, further investigation takes place before any action to collect debts is taken. Likewise, officials from the U.S. Citizenship and Immigration Services (USCIS) component of DHS and the Office of Consular Affairs within the Department of State reported similar practices. While these practices do not eliminate inaccuracies in data coming into the agency, they help ensure the quality of the information that is the basis for agency actions. The use limitation principle provides that personal information should not be disclosed or used for other than a specified purpose without consent of the individual or legal authority.
This principle underlies the Privacy Act requirement that prevents agencies from disclosing records on individuals except with consent of the individual, unless disclosure of the record would be, for example, to another agency for civil or criminal law enforcement activity or for a purpose that is compatible with the purpose for which the information was collected. Although agencies rely on resellers’ multipurpose collection of information as a source, agency officials said their use of reseller information was limited to distinct purposes, which were generally related to law enforcement or counterterrorism. For example, the Department of Justice reported uses specific to the conduct of criminal investigations on individuals, terrorism investigations, and the location of assets and witnesses. Other Justice and DHS components, such as the Federal Protective Service, U.S. Secret Service, FBI, and ATF, also reported that they used information reseller data for investigations. For uses not related to law enforcement, such as those reported by State and SSA, use of reseller information was also described as supporting a specific purpose (e.g., fraud detection or debt collection). The use limitation principle also precludes agencies from sharing personal information they collect for purposes unrelated to the original intended use of the information. Officials of certain law enforcement components of these agencies reported that in certain cases they share information with other law enforcement agencies, a use consistent with the purposes originally specified by the agency. For example, the FBI’s FTTTF supports ongoing investigations in other law enforcement agencies and the intelligence community by sharing information obtained from resellers (among other information) in response to requests about foreign terrorists from FBI agents or officials from partner agencies. 
The security safeguards principle requires that personal information be reasonably protected against unauthorized access, use, or disclosure. This principle also underlies the Privacy Act requirement that agencies establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of records on individuals. This principle is further mirrored in the FISMA requirement to protect information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction, including through controls for confidentiality. While we did not assess the effectiveness of information security or the implementation of FISMA at any of these agencies, we found that all four had measures in place intended to safeguard the security of personal information obtained from resellers. For example, all four agencies cited the use of passwords to prevent unauthorized access to information reseller databases. Further, agency components such as ATF, DEA, CBP, and USCIS reported that they limit access to sensitive personal information (e.g., full Social Security number, driver’s license number) to those with a specific need for this information. Several agency components also reported that resellers were promptly notified to deactivate accounts for employees separated from government service to protect against unauthorized use. As another security measure, several components, including DEA and the FBI, reported that resellers notified them when accounts were accessed from Internet addresses at unexpected locations, such as outside the United States. Another measure to prevent unauthorized disclosure reported by law enforcement agencies, such as the FBI, ICE, and the Secret Service, is the use of “cloaked logging,” which prevents vendor personnel from monitoring the queries being made by law enforcement agents.
Officials in FBI’s FTTTF reported that, in order to maintain the integrity of investigations, resellers are contractually prohibited from tracking or monitoring the exact persons or other entities being searched by FTTTF personnel. Law enforcement officials stated that the ability to mask searches from vendors is important so that those outside law enforcement have no knowledge of who is being investigated and so that subjects of an investigation are not “tipped off.” Agency adherence to the collection limitation, data quality, use limitation, and security safeguards principles was based on general business procedures—including law enforcement investigative practices—that reflect security and civil liberties protections, rather than written policies specifically regarding the collection, accuracy, use, and security of personal information obtained from resellers. Implementation of these practices provides individuals with assurances that only a limited amount of their personal information is being collected, that it is used only for specific purposes, and that measures are in place to corroborate the accuracy of the information and safeguard it from improper disclosure. These controls help prevent potential harm to individuals and invasion of their privacy by limiting the exposure of their information and reducing the likelihood of inaccurate data being used to make decisions that could affect their welfare. The purpose specification, openness, and individual participation principles stipulate, among other things, that individuals should be made aware of the purpose and intended uses of the personal information being collected about them and have the ability to access and correct such information, if necessary.
The Privacy Act reflects these principles in part by requiring agencies to publish in the Federal Register, “upon establishment or revision, a notice of the existence and character of a system of records.” This notice is to include, among other things, the categories of records in the system as well as the categories of sources of records. In a number of cases, agencies did not adhere to the purpose specification or openness principles in regard to their use of reseller information in that they did not notify the public that they were using such information and did not specify the purpose for their data collections. Agency officials said that they generally did not prepare system-of-records notices that would address these principles because they were not required to do so by the Privacy Act. The act’s vehicle for public notification—the system-of-records notice—becomes binding on an agency only when the agency collects, maintains, and retrieves personal data in the way defined by the act or when a contractor does the same thing explicitly on behalf of the government. Agencies generally did not issue system-of-records notices specifically for their use of information resellers largely because information reseller databases were not considered “systems of records operated by or on behalf of a government agency” and thus were not considered subject to the provisions of the Privacy Act. OMB guidance on implementing the Privacy Act does not specifically refer to the use of reseller data or how it should be treated. According to OMB and other agency officials, information resellers operate their databases for multiple customers, and federal agency use of these databases does not amount to the operation of a system of records on behalf of the government.
Further, agency officials stated that merely querying information reseller databases did not amount to agency “maintenance” of the personal information being queried and thus also did not trigger the provisions of the Privacy Act. In many cases, agency officials considered their use of resellers to be of this type—essentially “ad hoc” querying or “pinging” of reseller databases for personal information about specific individuals, which they believed they were not doing in connection with a formal system of records. In other cases, however, agencies maintained information reseller data in systems for which system-of-records notices had been previously published. For example, law enforcement agency officials stated that, to the extent they retain the results of reseller data queries, this collection and use is covered by the system-of-records notices for their case file systems. However, in preparing such notices, agencies generally did not specify that they were obtaining information from resellers. Among system-of-records notices that were identified by agency officials as applying to the use of reseller data, only one—TSA’s system-of-records notice for the test phase of its Secure Flight program—specifically identified the use of information reseller data. Other programs that involve use of information reseller data include the fraud prevention and detection programs reported by SSA and State as well as law enforcement programs within ATF, the U.S. Marshals, and USCIS. For these programs, associated system-of-records notices identified by officials did not specify the use of information reseller data. In several of these cases, agency sources for personal information were described only in vague terms, such as “private organizations,” “other public sources,” or “public source material,” when information was being obtained from information resellers.
In one case, a notice indicated incorrectly that personal information was collected only from the individuals concerned. Specifically, USCIS prepared a system of records notice covering the Computer Linked Application Information Management System, which did not identify information resellers as a source. Instead, the notice stated only that “information contained in the system of records is obtained from individuals covered by the system.” The inconsistency with which agencies specify resellers as a source of information in system-of-records notices is in part due to ambiguity in OMB guidance, which states that “for systems of records which contain information obtained from sources other than the individual to whom the records pertain, the notice should list the types of sources used.” Although the guidance is unclear what would constitute adequate disclosure of “types of sources,” OMB and DHS Privacy Office officials agreed that to the extent that reseller data are subject to the Privacy Act, agencies should specifically identify information resellers as a source and that merely citing public records information does not sufficiently describe the source. The individual participation principle gives individuals the right to access and correct information that is maintained about them. However, under the Privacy Act, agencies can claim exemptions from the requirement to provide individual access and the ability to make corrections if the systems are for law enforcement purposes. In most cases where officials identified system-of-record notices associated with reseller data collection for law enforcement purposes, agencies claimed this exemption. Like the ability to mask database searches from vendors, this provision is important so that the subjects of law enforcement investigations are not tipped off. 
Aside from the law enforcement exemptions to the Privacy Act, adherence to the purpose specification and openness principles is critical to preserving a measure of individual control over the use of personal information. Without clear guidance from OMB or specific policies in place, agencies have not consistently reflected these principles in their collection and use of reseller information. As a result, without being notified of the existence of an agency’s information collection activities, individuals have no ability to know that their personal information could be obtained from commercial sources and potentially used as a basis, or partial basis, for taking action that could have consequences for their welfare. The privacy impact assessment (PIA) is an important tool for agencies to address privacy early in the process of developing new information systems, and to the extent that PIAs are made publicly available, they provide explanations to the public about such things as the information that will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected. In doing so, they serve to address the openness and purpose specification principles. However, only three agency components reported developing PIAs for their systems or programs that make use of information reseller data. As with system-of-records notices, agencies often did not conduct PIAs because officials did not believe they were required. Current OMB guidance on conducting PIAs is not always clear about when they should be conducted.
According to guidance from OMB, a PIA is required by the E-Government Act when agencies “systematically incorporate into existing information systems databases of information in identifiable form purchased or obtained from commercial or public sources.” However, the same guidance also instructs agencies that “merely querying a database on an ad-hoc basis does not trigger the PIA requirement.” Reported uses of reseller data were generally not described as a “systematic” incorporation of data into existing information systems; rather, most involved querying a database and in some cases retaining the results of these queries. OMB officials stated that agencies would need to make their own judgments on whether retaining the results of searches of information reseller databases constituted a “systematic incorporation” of information. DHS has recently developed guidance requiring PIAs to be conducted whenever reseller data are involved. The DHS Privacy Office guidance on conducting PIAs points out, for example, that a program decision to obtain information from a reseller would constitute a new source of information, requiring that a PIA be conducted. However, although the DHS guidance clearly states that PIAs are required when personally identifiable information is obtained from a commercial source, it also states that “merely querying such a source on an ad hoc basis using existing technology does not trigger the PIA requirement.” Like OMB’s guidance, the DHS guidance is not clear, because agency personnel are left to make individual determinations as to whether queries are “on an ad hoc basis.” In one case, a DHS component prepared a PIA for a system that collects reseller data but had not identified in the assessment that resellers were being used. DHS’s USCIS uses copies of court records obtained from an information reseller to support evidentiary requirements for official adjudication proceedings concerning fraud. 
Although this use was reported to be covered by the PIA for the office’s Fraud Tracking System, the PIA identifies only “public records” as the source of its information and does not mention that the public records are obtained from information resellers. In contrast, the draft DHS guidance on PIAs instructs DHS component agencies to “list the individual, entity, or entities providing the specific information identified above. For example, is the information collected directly from the individual as part of an application for a benefit, or is it collected from another source such as a commercial data aggregator.” At the time of our review, this draft guidance had not yet been disseminated to DHS components. Lacking such guidance, DHS components did not have policies in place regarding the conduct of PIAs with respect to reseller data, nor did other agencies we reviewed. Until PIAs are conducted more thoroughly and consistently, the public is likely to remain incompletely informed about agency purposes and uses for obtaining reseller information. According to the accountability principle (individuals controlling the collection or use of personal information should be accountable for taking steps to ensure the implementation of the Fair Information Practices), agencies should take steps to ensure that employee uses of personal information from information resellers are appropriate. While agencies described activities to oversee the use of information resellers, such activities were largely based on trust of the user to use the information appropriately. For example, in describing controls placed on the use of commercial data, officials from component agencies identified measures such as instructing users that reseller data are for official use only and requiring users to sign statements of responsibility attesting to a need to access the information reseller databases and that their use will be limited to official business. 
Additionally, agency officials reported that in accessing reseller databases, users are required to select from a list of vendor-defined “permissible purposes” (e.g., law enforcement, transactions authorized by the consumer) before conducting a search. While these practices appear consistent with the accountability principle, they are focused on individual user responsibility rather than management oversight. For example, agencies did not have practices in place to obtain reports from resellers that would allow them to monitor usage of reseller databases at a detailed level. Although agencies generally receive usage reports from the information resellers, these reports are designed primarily for monitoring costs. Further, these reports generally contained only high-level statistics on the number of searches and databases accessed, not the contents of what was actually searched, thus limiting their utility in monitoring usage. For example, one information reseller reported that it does not provide reports to agencies on the “permissible purpose” that a user selects before conducting a search. Not all component agencies lacked robust user monitoring. Specifically, according to FBI officials from the FTTTF, their network records and monitors searches by user account, including who is searched against which public source database. The system also tracks the date and time of the query as well as what the analyst does with the data. FBI officials stated that the vendor reports and the network monitoring together provide the FBI with the ability to detect unusual usage of the public source providers. To the extent that federal agencies do not implement methods such as user monitoring or auditing of usage records, they provide limited accountability for their usage of information reseller data and have limited assurance that the information is being used appropriately.
Services provided by information resellers serve as important tools that can enhance federal agency functions, such as law enforcement and fraud prevention and detection. Resellers have practices in place to protect privacy, but these practices are not fully consistent with the Fair Information Practices. Among other things, resellers collect large amounts of information about individuals without their knowledge or consent, do not ensure that the data they make available are accurate for a given purpose, and generally do not make corrections to the data when errors are identified by individuals. Information resellers believe that application of the relevant principles of the Fair Information Practices is inappropriate or impractical in these situations. Given that reseller data may be used for a variety of purposes, determining the appropriate degree of control or influence individuals should have over the way in which their personal information is obtained and used—as envisioned in the Fair Information Practices—is critical. Embracing these principles more fully could require resellers to change the way they conduct business, and resellers are not currently legally required to follow them. As Congress weighs various legislative options, adherence to the Fair Information Practices will be an important consideration in determining the appropriate balance between the services provided by information resellers to customers such as government agencies and the public’s right to privacy. Agencies take steps to adhere to Fair Information Practices such as the collection limitation, data quality, use limitation, and security safeguards principles. However, they have not taken all the steps they could to reflect others—or to comply with specific Privacy Act and E-Government Act requirements—in their handling of reseller data.
Specifically, agencies did not always have policies or practices in place to address the purpose specification, individual participation, openness, and accountability principles with respect to reseller data. An important factor contributing to this is that OMB privacy guidance does not clearly address information reseller data, which has become such a valuable and useful tool for agencies. As a result, agencies are left largely on their own to determine how to satisfy legal requirements and protect privacy when acquiring and using reseller data. Without current and specific guidance, the government risks continued uneven adherence to important, well-established privacy principles and lacks assurance that the privacy rights of individuals are adequately protected. In considering legislation to address privacy concerns related to the information reseller industry, Congress should consider the extent to which the industry should adhere to the Fair Information Practices. To improve accountability, ensure adequate public notice of agencies’ use of personal information from commercial sources, and allay potential privacy concerns arising from agency use of information from such sources, we are making three recommendations to the Director of OMB and the heads of the four agencies. Specifically, we recommend that the Director of OMB revise guidance on system-of-records notices and privacy impact assessments to clarify the applicability of the governing laws (the Privacy Act and the E-Government Act) to the use of personal information from resellers. These clarifications should specify the circumstances under which agencies should make disclosures about their uses of reseller data so that agencies can properly notify the public (for example, what constitutes a “systematic” incorporation of reseller data into a federal system).
The guidance should include practical scenarios based on uses agencies are making of personal information from information resellers (for example, visa, criminal, and fraud investigations). We also recommend that the Director of OMB direct agencies to review their uses of personal information from information resellers, as well as any associated system-of-records notices and privacy impact assessments, to ensure that such notices and assessments explicitly reference agency use of information resellers. Finally, we recommend that the Attorney General, the Secretary of Homeland Security, the Secretary of State, and the Commissioner of SSA develop specific policies for the collection, maintenance, and use of personal information obtained from resellers that reflect the Fair Information Practices, including oversight mechanisms such as the maintenance and review of audit logs detailing queries of information reseller databases, to improve accountability for agency use of such information. We received written comments on a draft of this report from Justice’s Assistant Attorney General for Administration (reproduced in appendix III), from the Director of the DHS Departmental GAO/OIG Liaison Office (reproduced in appendix IV), from the Commissioner of SSA (reproduced in appendix V), and from State’s Assistant Secretary and Chief Financial Officer (reproduced in appendix VI). We also received comments via e-mail from staff of OMB’s Office of Information and Regulatory Affairs. Justice, DHS, SSA, and OMB all generally agreed with the report and described actions initiated to address our recommendations. Justice and SSA also provided technical comments, which have been incorporated in the final report as appropriate. In its comments, Justice agreed that revised or additional guidance and policy could be created to address unique issues presented by use of personal information obtained from resellers.
However, noting that the Privacy Act allows law enforcement agencies to exempt certain records from provisions of the law that reflect aspects of the Fair Information Practices, Justice recommended that prior to issuance of any new or revised policy, careful consideration be given to the balance struck in the Privacy Act on applying the Fair Information Practices to law enforcement data. We recognize, and the report acknowledges, that records maintained for law enforcement purposes may be exempted from some of the provisions of the Privacy Act. We also agree and acknowledge in the report that the Fair Information Practices serve as a framework of principles for balancing the need for privacy with other public policy interests, such as national security and law enforcement. DHS also agreed on the importance of guidance to federal agencies on the use of reseller information and stated that it is working diligently on finalizing a DHS policy for such use. The agency commented that its Privacy Office has been reviewing the use of and appropriate privacy protections for reseller data, including conducting a 2-day public workshop on the subject in September 2005. DHS also noted that it had just issued departmentwide guidance on the conduct of privacy impact assessments in March 2006, which includes directions relevant to the collection and use of commercial data. We have made changes to the final report to reflect the recent issuance of the DHS guidance. SSA noted in its comments that it had established internal controls, including audit trails of systems usage, to ensure that information is not improperly disclosed. SSA also stated that it would amend relevant system-of-records notices to reflect use of information resellers and would explore options for enhancing its policies and internal controls over information obtained from resellers.
State interpreted our draft report to “rest on the premise that records from ‘information resellers’ should be accorded special treatment when compared with sensitive information from other sources.” State indicated that it does not distinguish between types of information or sources of information in complying with privacy laws. However, our report does not suggest that data from resellers should receive special treatment. Instead, our report takes the widely accepted Fair Information Practices as a universal benchmark of privacy protections and assesses agency practices in comparison with them. State also interpreted our draft report to state that fraud detection, as a purpose for collecting personal information, is not related to law enforcement. However, the draft does not make such a claim. We have categorized agency uses of personal information based on descriptions provided by agencies and have categorized fraud detection uses separately from law enforcement to provide insight into different types of uses. We do not claim the two uses are unrelated. Finally, the department stated that in its view, it would be bad policy to require specification of sources such as data resellers in agency system of records notices. In contrast, we believe that adding clarity and specificity about sources is in the spirit of the purpose specification practice and note that DHS has recently issued guidance on privacy impact assessments that is consistent with this view. OMB stated that, based on a staff-level meeting of agency privacy experts, it believes agencies recognize that when personal data are brought into their systems, this fact must be reflected in their privacy impact assessments and system-of-record notices. We do not find this observation inconsistent with our findings. 
We found, however, that inconsistencies occurred in agencies’ determinations of when or whether reseller information was actually brought into their systems, as opposed to being merely “accessed” on an ad hoc basis. We believe clarification of this issue is important. OMB further stated that agencies have procedures in place to verify commercial data before they are used in decisions involving the granting or recoupment of benefits or entitlements. Again, this is not inconsistent with the results of our review. Finally, OMB stated that it would discuss its guidance with agency senior officials for privacy to determine whether additional guidance concerning reseller data is needed. We also obtained comments on excerpts of our draft report from the five information resellers we reviewed. General comments made by resellers and our evaluation are summarized below. Several resellers raised concerns about our reliance on the OECD version of the Fair Information Practices as a framework for assessing their privacy policies and business practices. They suggested that it would be unreasonable to require them to comply with aspects of the Fair Information Practices that they believe were intended for other types of users of personal information, such as organizations that collect information directly from consumers. Further, they commented that our draft summary appeared to treat strict adherence to all of the Fair Information Practices as if it were a legally binding requirement. In several cases, they suggested that it would be more appropriate for us to use the privacy framework developed by the Asia-Pacific Economic Cooperation (APEC) organization in 2004, because the APEC framework is more recent and because it explicitly states that it has limited applicability to publicly available information. As discussed in our report, the OECD version of the Fair Information Practices is widely used and cited within the federal government as well as internationally.
In addition, the APEC privacy framework, which was developed as a tool for encouraging the development of privacy protection in the Asia Pacific region, acknowledges that the OECD guidelines are still relevant and “in many ways represent the international consensus on what constitutes honest and trustworthy treatment of personal information.” Further, our use of the OECD guidelines is as an analytical framework for identifying potential privacy issues for further consideration by Congress—not as legalistic compliance criteria. The report states that the Fair Information Practices are not precise legal requirements; rather they provide a framework of principles for balancing the needs for privacy against other public policy interests, such as national security, law enforcement, and administrative efficiency. In conducting our analysis, we noted that the nature of the reseller business is largely at odds with the principles of collection limitation, data quality, purpose specification, and use limitation. We also noted that resellers are not currently required to follow the Fair Information Practices and that for resellers to more fully embrace them could require that they change the way they do business. We recognize that it is important to achieve an appropriate balance between the benefits of resellers’ services and the public’s right to privacy and point out that, as Congress weighs various legislative options, it will be critical to determine an appropriate balance. We have made changes in this report to clarify that we did not attempt to make determinations of whether or how information reseller practices should change and that such determinations are a matter of policy based on balancing the public’s right to privacy with the value of reseller services. Several information resellers stated that the draft did not take into account that public record information is freely available. 
For example, one reseller stated that public records should be understood by consumers to be open to all for any use not prohibited by state or federal law. Another stated that information resellers merely effectuate the determination made by governmental entities that public records should be open to all. However, the views expressed by the resellers do not take into account several important factors. First, resellers collect information for their products from a variety of sources, including information provided by consumers to businesses. Resellers’ products are not based exclusively on public records. Thus, a consideration of protections for public record information does not take the place of a full assessment of the information reseller business. Second, resellers do not merely pass on public record information as they find it; they aggregate information from many different sources to create new information products, and they make the information much more readily available than it would be if it remained only in paper records on deposit in government facilities. The aggregation and increased accessibility provided by resellers raise privacy concerns that may not apply to the original paper-based public records. Finally, it is not clear that individuals give up all privacy rights to personal information contained in public records. The Supreme Court has expressed the opinion in the past that individuals retain a privacy interest in publicly released personal information. We therefore believe it is important to assess the status of privacy protections for all personal information being offered commercially to the government so that informed policy decisions may be made about the appropriate balance between resellers’ services and the public’s right to privacy.
Several resellers also noted that the draft report did not address the complexity of the reseller business—the extent to which resellers’ businesses vary among themselves and overlap with consumer reporting agencies. We have added text addressing this in the final report. The resellers also provided technical comments, which were incorporated in the final report as appropriate. We are sending copies of this report to the Attorney General, the Secretary of Homeland Security, the Secretary of State, the Commissioner of the Social Security Administration, the Director of the Office of Management and Budget, and other interested congressional committees. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or send E-mail to [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are John de Ferrari, Assistant Director; Mathew Bader; Barbara Collier; Pamlutricia Greenleaf; David Plocher; and Jamie Pressman. Our objectives were to determine the following: how the Departments of Justice, Homeland Security, and State and the Social Security Administration are making use of personal information obtained through contracts with information resellers; the extent to which the information resellers providing personal information to these agencies have policies and practices in place that reflect widely accepted principles for protecting the privacy and security of personal information; and the extent to which these agencies have policies and practices in place for handling information reseller data that reflect widely accepted principles for protecting the privacy and security of personal information. 
To address our objectives, we identified and reviewed applicable laws such as the Privacy Act of 1974 and the E-Government Act, agency policies and practices, and the widely accepted privacy principles embodied in the Organization for Economic Cooperation and Development (OECD) version of the Fair Information Practices. Working with liaisons at the four federal agencies we were requested to review, we identified officials responsible for the acquisition and use of personal information from information resellers. Through these officials, we obtained applicable contractual documentation such as statements of work, task orders, blanket purchase agreements, purchase orders, interagency agreements, and contract terms and conditions. To address our first objective, we obtained and reviewed contract vehicles covering federal agency use of information reseller services for fiscal year 2005. We also reviewed applicable General Services Administration (GSA) schedule and Library of Congress FEDLINK contracts with information resellers that agencies made use of by various means, including through issuance of blanket purchase agreements, task orders, purchase orders, or interagency agreements. We analyzed the contractual documentation provided to determine the nature, scope, and dollar amounts associated with these uses, as well as mechanisms for acquiring personal information. In an effort to identify all relevant instances of agency use of information resellers and related contractual documents, we developed a list of structured questions to address available contract documents, uses of personal information, and applicable agency guidance. We provided these questions to agency officials and held discussions with them to help ensure that they provided all relevant information on uses of personal information from information resellers. 
To further ensure that relevant contract vehicles were identified, we asked major information resellers about their business with the four agencies. We also interviewed officials from GSA and the Library of Congress to discuss the mechanisms available to federal agencies for acquiring personal information and to identify any additional uses of these mechanisms by the four agencies. To further address our first objective, we categorized agency use of information resellers into five categories: counterterrorism, debt collection, fraud detection/prevention, law enforcement, and other. These categorizations were based on the component and applicable program’s mission, as well as the specific reported use of the contract. In identifying relevant uses of information resellers, we were unable to identify small purchases (e.g., purchases below $2,500), as agencies do not track this information centrally. In addition, to the extent practicable, we excluded uses that generally did not involve the use of personal information. For example, officials from several component agencies reported that their use of the LexisNexis and West services was primarily for legal research rather than for public records information. In other cases, reported amounts may reflect uses that do not involve personal information because agencies were unable to separate such uses from uses involving personal information. To address our second objective, we obtained and reviewed relevant private sector laws and guidance, such as the Gramm-Leach-Bliley Act, the Fair Credit Reporting Act, and the Fair Information Practices. We also identified major information resellers in agency contractual agreements for personal information and held interviews with officials from these companies, including Acxiom, ChoicePoint, Dun & Bradstreet, LexisNexis, and West, to discuss security, quality controls, and privacy policies. 
In addition, we conducted site visits at Acxiom, ChoicePoint, and LexisNexis, and obtained written responses to related questions from West. These five resellers accounted for approximately 95 percent of the dollar value of all reported contracts with resellers. To determine the extent to which they reflect the widely accepted Fair Information Practices, we reviewed and compared information resellers’ privacy policies and procedures with these principles. In conducting our analysis, we identified the extent to which reseller practices were consistent with the key privacy principles of the Fair Information Practices. We also assessed the effect of any inconsistencies; however, we did not attempt to make determinations of whether or how information reseller practices should change. Such determinations are a matter of policy based on balancing the public’s right to privacy with the value of services provided by resellers to customers such as government agencies. To address our third objective, we identified applicable guidelines and management controls regarding the acquisition, maintenance, and use of personal information from information resellers at each of the four agencies. We also interviewed agency officials, including acquisition and program staff, to further identify relevant policies and procedures. Our assessment of overall agency application of the Fair Information Practices was based on the policies and procedures of major components at each of the four agencies. We also conducted interviews at the four agencies with senior agency officials designated for privacy as well as officials of the Office of Management and Budget (OMB) to obtain their views on the applicability of federal privacy laws (including the Privacy Act of 1974 and the E-Government Act of 2002) and related guidance on agency use of information resellers. In addition, we compared relevant policies and management practices with the Fair Information Practices.
We assessed the overall application of the principles of the Fair Information Practices by agencies according to the following categories:

1. General. We assessed the application as general if the agency had policies or procedures to address all major aspects of a particular principle.

2. Uneven. We assessed the application as uneven if the agency had policies or procedures that addressed some but not all aspects of a particular principle or if some but not all components and agencies had policies or practices in place addressing the principle.

We performed our work at the Departments of Homeland Security, Justice, and State in Washington, D.C.; at the Social Security Administration in Baltimore, Maryland; Acxiom Corporation in Little Rock, Arkansas; ChoicePoint in Alpharetta, Georgia; Dun & Bradstreet in Washington, D.C.; and LexisNexis in Washington, D.C., and Miamisburg, Ohio. Our work was conducted from May 2005 to March 2006 in accordance with generally accepted government auditing standards. Major laws that affect information resellers include the Gramm-Leach-Bliley Act, the Driver’s Privacy Protection Act, the Health Insurance Portability and Accountability Act, the Fair Credit Reporting Act, and the Fair and Accurate Credit Transactions Act. Their major privacy-related provisions are briefly summarized below. The Gramm-Leach-Bliley Act requires financial institutions (e.g., banks, insurance, and investment companies) to give consumers privacy notices that explain the institutions’ information-sharing practices (P.L. 106-102 (1999), Title V, 15 U.S.C. 6801). In turn, consumers have the right to limit some, but not all, sharing of their nonpublic personal information.
Financial institutions are permitted to disclose consumers’ nonpublic personal information without offering them an opt-out right in a number of circumstances, including the following:
- to effect a transaction requested by the consumer in connection with a financial product or service requested by the consumer; maintaining or servicing the consumer’s account with the financial institution or another entity as part of a private label credit card program or other extension of credit; or a securitization, secondary market sale, or similar transaction;
- with the consent or at the direction of the consumer;
- to protect the confidentiality or security of the consumer’s records; to prevent fraud; for required institutional risk control or for resolving customer disputes or inquiries; to persons holding a legal or beneficial interest relating to the consumer; or to the consumer’s fiduciary;
- to provide information to insurance rate advisory organizations, guaranty funds or agencies, rating agencies, industry standards agencies, and the institution’s attorneys, accountants, and auditors;
- to the extent specifically permitted or required under other provisions of law and in accordance with the Right to Financial Privacy Act of 1978, to law enforcement agencies, self-regulatory organizations, or for an investigation on a matter related to public safety;
- to a consumer reporting agency in accordance with the Fair Credit Reporting Act or from a consumer report reported by a consumer reporting agency;
- in connection with a proposed or actual sale, merger, transfer, or exchange of all or a portion of a business if the disclosure concerns solely consumers of such business; and
- to comply with federal, state, or local laws; an investigation or subpoena; or to respond to judicial process or government regulatory authorities.

The Driver’s Privacy Protection Act generally prohibits the disclosure of personal information by state departments of motor vehicles (P.L. 103-322 (1994), 18 U.S.C.
§ 2721-2725). It also specifies a list of exceptions when personal information contained in a state motor vehicle record may be disclosed. These permissible uses include the following:
- for use by any government agency in carrying out its functions;
- for use in connection with matters of motor vehicle or driver safety and theft; motor vehicle emissions; motor vehicle product alterations, recalls, or advisories; and motor vehicle market research activities;
- for use in the normal course of business by a legitimate business, but only to verify the accuracy of personal information submitted by the individual to the business and, if such information is not correct, to obtain the correct information but only for purposes of preventing fraud by pursuing legal remedies against, or recovering on a debt or security interest against, the individual;
- for use in connection with any civil, criminal, administrative, or arbitral proceeding in any federal, state, or local court or agency;
- for use in research activities;
- for use by any insurer or insurance support organization in connection with claims investigation activities;
- for use in providing notice to the owners of towed or impounded vehicles;
- for use by a licensed private investigative agency for any purpose permitted under the act;
- for use by an employer or its agent or insurer to obtain information relating to the holder of a commercial driver’s license;
- for use in connection with the operation of private toll transportation facilities;
- for any other use, if the state has obtained the express consent of the person to whom a request for personal information pertains;
- for bulk distribution of surveys, marketing, or solicitations, if the state has obtained the express consent of the person to whom such personal information pertains;
- for use by any requester, if the requester demonstrates that it has obtained the written consent of the individual to whom the information pertains; and
- for any other use specifically authorized under a state law, if such
use is related to the operation of a motor vehicle or public safety. The Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191) made a number of changes to laws relating to health insurance. It also directed the Department of Health and Human Services to issue regulations to protect the privacy and security of personally identifiable health information. The resulting privacy rule (45 C.F.R. Part 164) defines certain rights and obligations for covered entities (e.g., health plans and health care providers) and individuals, including the following:
- giving individuals the right to be notified of privacy practices and to inspect, copy, request correction, and have an accounting of disclosures of health records, except for specified exceptions;
- setting limits on the use of health information apart from treatment, payment, and health care operations (e.g., for marketing) without the individual’s authorization;
- permitting disclosure of health information without the individual’s authorization for purposes of public health protection; health oversight; law enforcement; judicial and administrative proceedings; approved research activities; coroners, medical examiners, and funeral directors; workers’ compensation programs; government abuse, neglect, and domestic violence authorities; organ transplant organizations; government agencies with specified functions, e.g., national security activities; and as required by law;
- requiring that authorization forms contain specific types of information, such as a description of the health information to be used or disclosed, the purpose of the use or disclosure, and the identity of the recipient of the information; and
- requiring covered entities to take steps to limit the use or disclosure of health information to the minimum necessary to accomplish the intended purpose, unless authorized or under certain circumstances.

The Fair Credit Reporting Act (P.L. 91-508, 1970, 15 U.S.C.
§ 1681) governs the use of personal information by consumer reporting agencies, which are individuals or entities that regularly assemble or evaluate information about individuals for the purpose of furnishing consumer reports to third parties. The act defines a consumer report as any communication by a consumer reporting agency about an individual’s credit worthiness, character, reputation, characteristics, or mode of living and permits its use only in the following situations:
- as ordered by a court or federal grand jury subpoena;
- as instructed by the consumer in writing;
- for the extension of credit as a result of an application from a consumer or the review or collection of a consumer’s account;
- for employment purposes, including hiring and promotion decisions, where the consumer has given written permission;
- for the underwriting of insurance as a result of an application from a consumer;
- when there is a legitimate business need, in connection with a business transaction that is initiated by the consumer;
- to review a consumer’s account to determine whether the consumer continues to meet the terms of the account;
- to determine a consumer’s eligibility for a license or other benefit granted by a governmental instrumentality required by law to consider an applicant’s financial responsibility or status;
- for use by a potential investor or servicer or current insurer in a valuation or assessment of the credit or prepayment risks associated with an existing credit obligation; and
- for use by state and local officials in connection with the determination of child support payments, or modifications and enforcement thereof.

The act generally limits the amount of time negative information can be included in a consumer report to no more than 7 years, or 10 years in the case of bankruptcies.
Under the act, individuals have a right to access all information in their consumer reports; a right to know who obtained their report during the previous year or two, depending on the circumstances; and a right to dispute the accuracy of any information about them. The Fair and Accurate Credit Transactions Act (P.L. 108-159, 2003) amended the Fair Credit Reporting Act, extending provisions to improve the accuracy of personal information assembled by consumer reporting agencies and better provide for the fair use of and consumer access to personal information. The act’s provisions include the following:
- consumers may request a free annual credit report from nationwide consumer reporting agencies, to be made available no later than 15 days after the date on which the request is received;
- persons furnishing information about individuals to consumer reporting agencies, and resellers of consumer reports, must have policies and procedures for investigating and correcting inaccurate information;
- consumers are given the right to prohibit business affiliates of consumer reporting agencies from using information about them for certain marketing purposes; and
- consumer reporting agencies cannot include medical information in reports that will be used for employment, credit transactions, or insurance transactions unless the consumer consents to such disclosures.

Federal agencies collect and use personal information for various purposes, both directly from individuals and from other sources, including information resellers--companies that amass and sell data from many sources. In light of concerns raised by recent security breaches involving resellers, GAO was asked to determine how the Departments of Justice, Homeland Security, and State and the Social Security Administration use personal data from these sources.
In addition, GAO reviewed the extent to which information resellers' policies and practices reflect the Fair Information Practices, a set of widely accepted principles for protecting the privacy and security of personal data. GAO also examined agencies' policies and practices for handling personal data from resellers to determine whether these reflect the Fair Information Practices. In fiscal year 2005, the Departments of Justice, Homeland Security, and State and the Social Security Administration reported that they used personal information obtained from resellers for a variety of purposes. Components of the Department of Justice (the largest user of resellers) used such information in performing criminal investigations, locating witnesses and fugitives, researching assets held by individuals of interest, and detecting prescription drug fraud. The Department of Homeland Security used reseller information for immigration fraud detection and border screening programs. Uses by the Social Security Administration and the Department of State were to prevent and detect fraud, verify identity, and determine eligibility for benefits. The agencies spent approximately $30 million on contractual arrangements with resellers that enabled the acquisition and use of such information. About 91 percent of the planned fiscal year 2005 spending was for law enforcement (69 percent) or counterterrorism (22 percent). The major information resellers that do business with the federal agencies we reviewed have practices in place to protect privacy, but these measures are not fully consistent with the Fair Information Practices. For example, the principles that the collection and use of personal information should be limited and its intended use specified are largely at odds with the nature of the information reseller business, which presupposes that personal information can be made available to multiple customers and for multiple purposes. 
Resellers said they believe it is not appropriate for them to fully adhere to these principles because they do not obtain their information directly from individuals. Nonetheless, in many cases, resellers take steps that address aspects of the Fair Information Practices. For example, resellers reported that they have taken steps recently to improve their security safeguards, and they generally inform the public about key privacy principles and policies. However, resellers generally limit the extent to which individuals can gain access to personal information held about themselves, as well as the extent to which inaccurate information contained in their databases can be corrected or deleted. Agency practices for handling personal information acquired from information resellers did not always fully reflect the Fair Information Practices. That is, some of these principles were mirrored in agency practices, but for others, agency practices were uneven. For example, although agencies issued public notices on information collections, these did not always notify the public that information resellers were among the sources to be used. This practice is not consistent with the principle that individuals should be informed about privacy policies and the collection of information. Contributing to the uneven application of the Fair Information Practices are ambiguities in guidance from the Office of Management and Budget (OMB) regarding the applicability of privacy requirements to federal agency uses of reseller information. In addition, agencies generally lack policies that specifically address these uses. |
The U.S. federal budget serves as the primary financial plan of the federal government and thus plays a critical role in the decision-making process. Policymakers, managers, and the American people rely on it to frame their understanding of significant choices about the role of the government and to provide them with information to make decisions about individual programs and overall fiscal policy. The budget process helps highlight for policymakers and the public the overall “cost” of government. Since the budget process also serves as a key point of accountability between policymakers and managers, the way “costs” are measured and reported in the budget can have significant consequences for managerial incentives. The term “cost” has different meanings in the budget and financial statements. In the budget, the term “cost” generally refers to the amount of cash needed during the period. In the financial statements, the term “cost” means the amount of resources used to produce goods or deliver services during the period regardless of when cash is used. Therefore, one goal of accrual budgeting is to report the “full cost” of government services provided during the year. The different methods of reporting (e.g., cash, obligations, or accrual) represent much more than technical means of cost measurement. They reflect fundamental choices about the information and incentives provided by the budget. Cash-based measurement records receipts and outlays when cash is received or paid, without regard to when the activity occurs that results in revenue being earned, resources being consumed, or liabilities being increased. In comparison, obligation-based budgeting—which is used in the U.S. federal government—focuses on the legal obligations entered into during a period regardless of when cash is paid or received and regardless of when resources acquired are to be received or consumed. 
Obligation-based budgeting provides an additional level of control over pure cash budgeting by requiring that federal agencies have statutory authority to enter into obligations to make outlays of government funds. With limited exceptions, the amounts to be obligated are measured on a cash or cash-equivalent basis. Therefore, we generally refer to the U.S. federal budget as “cash based.” In contrast to cash- and obligation-based budgeting, accrual budgeting generally involves aligning budget recognition with the period in which resources are consumed or liabilities increased, rather than when obligations are made or cash flows occur. Although accruals can be measured in a variety of ways, the term accrual budgeting typically has been used in case study countries to refer to the recording of budgetary costs based on concepts in financial accounting standards. Thus, accrual-based budgeting generally provides information similar to that found in a private sector operating statement. Choices about the appropriate method of budget reporting are complicated by the multiplicity of the budget’s uses and users, including policymakers and managers. The federal budget is simultaneously asked to provide full information and appropriate incentives for resource allocation, control over cash, recognition of future commitments, and the monitoring of performance. Given these multiple and potentially competing objectives, choices about the method of budget reporting involve trade-offs. For example, control over spending is greatest if the budget recognizes the full cash cost at the time the decision is made, but assessing performance and its cost is generally best supported by accrual-based cost information, which recognizes resources as they are used to produce goods and services. The up-front funding requirement under an obligation-based budget helps ensure policymakers’ control over the acquisition of a new building but does not align its cost with its use.
Conversely, accrual budgeting better aligns the cost of the building with the periods that benefit from its use, but in its simplest form it does not provide for up-front control over entering a legally binding commitment to purchase the building. Given the necessary trade-offs, the method of budget reporting should be selected to meet the primary decision-making and accountability needs of a governmental system while balancing the needs of multiple users. The federal government reports both cash and accrual measures of its current finances. The key focus of the policy debate is the unified budget deficit/surplus. With limited exceptions, the unified budget deficit/surplus is the difference between cash receipts and cash outlays for the government as a whole, including any Social Security surplus. The second measure, the government’s net operating cost, is the amount by which costs—as reported on an accrual basis—exceed revenue and is reported in the federal government’s financial statements. Figure 1 shows these two measures. The consolidated financial statements of the U.S. government are largely on an accrual basis. See Department of the Treasury, Financial Report of the United States Government, 2006. GAO is responsible for auditing the financial statements included in the Financial Report, but we have been unable to express an opinion on them because the federal government could not demonstrate the reliability of significant portions of the financial statements. Accordingly, amounts taken from the Financial Report may not be reliable. Under the accrual basis, costs are recognized when resources are used rather than when cash payments are made. For many program areas, the timing difference is small, but for others the timing differences can amount to billions of dollars each year.
Differences arise when a cost is accrued (and affects the accrual deficit) in one fiscal year but paid (and affects the cash deficit) in another fiscal year. The following six areas account for the largest differences between cash and accrual deficits: civilian employee benefits, military employee benefits, veterans compensation, environmental liabilities (e.g., cleanup and disposal), insurance programs, and capital assets. For example, the accrual deficit includes an expense for current employees’ pension and other retirement benefits, which are earned during the employee’s working years and are part of the annual cost of providing government services but not paid until sometime in the future when the employee retires. The cash budget deficit does not include retirement benefits earned today, but it does reflect payments made to current retirees. (These cash payments reflect past accrued expenses.) The difference between the accrued retirement benefits recognized and cash payments made during the year is the difference between the accrual and cash measures due to employee benefits. In the year that capital assets such as structures and equipment are purchased, the budget recognizes the full cash cost to provide decision makers with the information and incentives to make efficient decisions at the only time that they can control the cost. Specifically, budget authority for the asset’s full cash cost must generally be provided up front before the asset can be purchased. The full cash cost of a capital asset is recorded as an outlay and included in the cash budget deficit when the asset is paid for. However, under the accrual basis of accounting used in the financial statements, the cash cost of the asset is initially recorded on the balance sheet. The cash cost of the asset is then spread over its expected useful life to match the asset’s cost with its use. 
Therefore, each year the accrual deficit only reflects one year’s worth of the cash cost, called depreciation expense. We have previously noted that while both cash and accrual measures of the government’s overall finances are informative, neither measure alone provides a full picture. For example, the unified budget deficit provides information on borrowing needs and current cash flow, but does not measure the amount of resources used to provide goods or services in the current year. While the accrual deficit provides information on resources used in the current year, it does not provide information on how much the government has to borrow in the current year to finance government activities. Nor does it provide information about the timing of payments and receipts, which can be very important. Therefore, just as investors need income statements, statements of cash flow, and balance sheets to understand a business’s financial condition, both cash and accrual measures are important for understanding the government’s financial condition. Although a more complete picture of the government’s fiscal stance today and over time comes from looking at both the cash and accrual measures than from looking at either alone, even the two together do not provide sufficient information on our future fiscal challenges. In addition to considering the federal government’s current financial condition, it is critical to look at other measures of the long-term fiscal outlook of the federal government. While there are various ways to consider and assess the long-term fiscal outlook, any analysis should include more than just the obligations and costs recognized in the budget and financial statements. It should take account of the implicit promises embedded in current policy and the timing of these longer-term obligations and commitments in relation to the resources available under various assumptions. 
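The timing difference for a capital purchase described above can be sketched with a simple calculation. This is a hypothetical illustration; the asset cost, 10-year useful life, and straight-line depreciation are assumptions for the sketch, not figures from the budget or financial statements:

```python
# Hypothetical capital purchase: compare cash and accrual recognition.
# Assumptions: a $500 million asset, 10-year useful life, straight-line
# depreciation with no salvage value.
ASSET_COST = 500     # millions of dollars, paid in year 1
USEFUL_LIFE = 10     # years

def cash_outlay(year):
    """Cash budget: the full cost is an outlay in the year of purchase."""
    return ASSET_COST if year == 1 else 0

def accrual_expense(year):
    """Accrual basis: one year's depreciation over the useful life."""
    return ASSET_COST / USEFUL_LIFE if 1 <= year <= USEFUL_LIFE else 0

for year in (1, 2, 10, 11):
    print(f"Year {year:2}: cash ${cash_outlay(year)}M, accrual ${accrual_expense(year):.0f}M")
```

Over the asset's life both bases recognize the same total cost; only the timing differs.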
For example, while the cash and accrual measures showed improvement between fiscal year 2005 and fiscal year 2007, our long-term fiscal outlook did not change. In fact, the U.S. government’s total reported liabilities, net social insurance commitments, and other fiscal exposures continue to grow and total more than $52 trillion, representing approximately four times the nation’s total output, or gross domestic product (GDP), in fiscal year 2007, up from about $20 trillion, or two times GDP in fiscal year 2000 (see table 1). Another way to assess the U.S. government’s long-term fiscal outlook and the sustainability of federal programs is to run simulations of future revenues and spending for all federal programs, based on a continuation of current or proposed policy. Long-term simulations by GAO, the Congressional Budget Office, and others show that we face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. As shown in figure 2, GAO’s long-term simulations—which are neither forecasts nor predictions—continue to show ever-increasing long-term deficits resulting in a federal debt level that ultimately spirals out of control. The timing of deficits and the resulting debt buildup varies depending on the assumptions used, but under either optimistic (“Baseline Extended”) or more realistic assumptions (“Alternative simulation”), the federal government’s current fiscal policy is unsustainable. One summary measure of the long-term fiscal challenge is called “the fiscal gap.” The fiscal gap is the amount of spending reduction or tax increases that would be needed today to meet some future debt target. To keep debt as a share of GDP at or below today’s ratio under our Alternative simulation would require spending cuts or tax increases equal to 7.5 percent of the entire economy each year over the next 75 years, or a total of about $54 trillion in present value terms. 
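The fiscal gap arithmetic can be illustrated with a rough present-value calculation. The starting GDP level, growth rate, and discount rate below are illustrative assumptions, not the inputs to GAO's actual simulations, so the result only approximates the $54 trillion figure:

```python
# Illustrative present-value calculation of a fiscal-gap-style figure.
# Assumed inputs (hypothetical, not GAO's simulation assumptions):
#   gdp0  - starting GDP, ~$13.8 trillion
#   gap   - required annual adjustment, 7.5 percent of GDP
#   g     - assumed nominal GDP growth rate
#   r     - assumed discount rate
def fiscal_gap_present_value(gdp0=13.8, gap=0.075, g=0.045, r=0.055, years=75):
    """Discount `gap` share of each year's GDP back to today (in trillions)."""
    return sum(gap * gdp0 * (1 + g) ** t / (1 + r) ** t
               for t in range(1, years + 1))

pv = fiscal_gap_present_value()
print(f"Present value of required adjustment: ${pv:.1f} trillion")
```

With these assumed rates the result lands in the mid-$50 trillion range, the same order of magnitude as the figure cited above.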
To put this in perspective, closing the gap would require an immediate and permanent increase in federal tax revenues of more than 40 percent or an equivalent reduction in federal program spending (i.e., in all spending except for interest on the debt held by the public, which cannot be directly controlled). As demonstrated by these various measures, our nation is on an unsustainable fiscal path. This path increasingly will constrain our ability to address emerging and unexpected budgetary needs and will increase the burdens that will be faced by future generations. Since at its heart the budget debate is about the allocation of limited resources, the budget process can and should play a key role in helping to address our long-term fiscal challenge. The six countries reviewed in 2000 continue to use accrual budgeting. However, two countries that were considering broader expansions of accrual budgeting have thus far made only limited changes. Although each country’s budgeting framework has unique features, the six countries have taken one of two broad approaches toward accrual budgeting: One approach uses accruals for most or all items in the budget primarily to support broader efforts to improve government performance. A second approach more selectively uses accrual information in areas where it increases recognition of future cash requirements related to services provided during the year that are not fully recognized in a cash-based budget. Regardless of which approach is used, cash information remains important in all the countries to evaluate overall fiscal position. None of the countries reviewed include anticipated future payments for social insurance programs (namely public pensions and health services) in the current year’s budget measure. Social insurance programs are generally viewed as transfer payments rather than liabilities. 
Transfer payments are benefits provided without requiring the recipient to provide current or future goods or services of equivalent value in return. Since 2000, three countries—Australia, New Zealand, and Iceland—have continued to use the accrual budgeting frameworks in place in 2000. In 2000, we reported that the United Kingdom was planning to implement an accrual-based budgeting framework, called Resource Accounting and Budgeting. After Parliament passed the necessary legislation in 2000, the United Kingdom implemented Resource Accounting and Budgeting in 2001. The United Kingdom has continued to make some modifications to its framework, including introduction of controls over cash. Although two countries—the Netherlands and Canada—have considered broader expansions of accrual budgeting since 2000, thus far they have made only limited changes. In the Netherlands only budgets for some government agencies are on an accrual basis and the governmentwide budget remains on a modified cash basis. The Dutch government decided against moving the governmentwide budget to an accrual basis in 2001. Although the Dutch cabinet thought that the accrual-based system added value at the agencies where it had been implemented, it thought the cost of implementing accrual budgeting governmentwide, including changing information systems, developing accounting standards, and changing regulations, would outweigh any advantages. In 2003 Canada significantly expanded the use of accruals in the governmentwide budget, but the information used to support appropriations (called the Main Estimates) and the appropriations themselves remain largely on a cash basis. Since the 1990s, there has been debate within the Canadian government concerning the appropriate application of accruals. The Canadian Office of the Auditor General and a key committee in Parliament, the House of Commons Committee on Public Accounts, have advocated preparing the Main Estimates on a full accrual basis.
The current government agrees in principle that accrual measurement can be useful but considers this to be a complex issue that requires study and consultation with parliamentarians. After consultation with parliamentarians, the current government plans to present a model for a new accrual-based appropriations process in 2008. Although the use of accrual budgeting in other major industrialized countries has grown, it is not currently the norm. Since 2000, the number of OECD countries that report using accruals at least in part has increased. For example, as noted previously, Denmark and Switzerland recently expanded the use of accruals in the budget. Some countries also report using both cash- and accrual-based accounting in the budget. However, the majority of OECD countries reported using either cash- or obligation-based budgeting or both. The extent to which countries in our study used accrual budgeting varied—from full accrual at all levels of government to more limited use at either the agency or program level. Figure 3 illustrates the broad range of use. The extent to which countries use accrual budgeting generally reflects the objectives to be satisfied. Countries that switched to accrual budgeting primarily as a way of providing better cost and performance information for decision making generally used accruals to a greater extent in the budget, as illustrated by the first two approaches—full accrual at all levels of government. In general, these countries also sought to put financial reporting and budgeting on a consistent basis. Countries that switched to accrual budgeting primarily as a way of increasing recognition of future cash requirements related to services provided during the year generally use it only for selected programs where accruals enhance up-front control and provide better information for decision making (e.g., loans and government employee pensions); this approach is similar to the United States’ current use of accruals. 
Regardless of the approach, cash information remains important. Most countries in our study continue to use cash-based measures for broad fiscal policy decisions. The following section describes each country’s objective and approach in more detail. Four countries—Australia, New Zealand, the Netherlands, and the United Kingdom—primarily use accrual budgeting to support broader efforts to improve the efficiency and performance of the public sector. Compared to cash-based budgeting, accruals are thought to provide better cost information and to encourage better management of government assets and liabilities. Among this group of countries, however, there is significant variation in the scope of accrual budgeting as well as the linkage between performance goals and appropriations. Since the 1990s, Australia and New Zealand have extensively used accruals in conjunction with output-based budgeting. The introduction of accrual budgeting in both countries was a key element of broader reforms meant to improve the efficiency and performance of the public sector. Reformers in both countries thought that accruals would provide better cost information and better management incentives than the previous cash-based budgeting framework. Reformers also thought it was important to have a consistent framework for budgeting and financial reporting to allow actual performance to be compared with expectations. Accrual budgeting in both countries is also intended to provide funding for the full cost of departments’ activities. Australia and New Zealand departments receive funding for noncash expenses, such as depreciation of existing assets, accrued employee pension benefits, and the estimated future costs of environmental cleanup resulting from government activities.
Reformers in both countries thought that appropriating on a full-cost basis created compelling incentives for department managers to focus on the full cost of their department’s activities as well as manage noncash expenses. One important feature of Australia’s and New Zealand’s budgeting frameworks is that departmental appropriations are closely linked to outcomes and outputs, and department executives are given considerable flexibility in managing their department’s finances, provided that the department meets its performance goals. It is thought that giving department executives more flexibility generally contributes to better performance. In comparison to the United States, the appropriations acts in Australia and New Zealand place less emphasis on how departments allocate their funding among different types of expenses. Nevertheless, two key departments, the Treasury in New Zealand and the Department of Finance and Administration in Australia, do centrally review and must approve departmental plans for major capital purchases. The Netherlands has used accrual budgeting in select government agencies primarily as a tool for improving performance. In the early 1990s, the government allowed a limited number of government entities (called agencies) to operate as if they were private sector contractors by adopting a results-oriented performance-management model, including accrual accounting and budgeting. Under the Dutch approach, the agencies are effectively service providers for the central government’s ministries. These agencies receive funding for the accrual-based cost from the ministries that they service. For example, although the Ministry of Justice is appropriated funds on a cash basis to buy services from the Prison Service, the Prison Service charges the ministry the full cost of the services it provides. The number of government entities participating in this program has increased from 22 in 2000 to approximately 40 in mid-2007.
However, while the agencies budgeting on an accrual basis represent about 60 percent of the government in terms of employees, they are a small part of the government’s overall budget since the majority of the Dutch government’s expenditures go to transfer payments, which continue to be budgeted on a cash basis. The United Kingdom implemented what it calls resource budgeting for financial year 2001–2002. The United Kingdom’s approach makes less use of the Australia–New Zealand form of performance-based budgeting and imposes tighter controls on cash than the Australia and New Zealand approaches. The United Kingdom’s Parliament votes both cash and “resources” (i.e., the full accrual-based cost of a department’s services). The resource budget recognizes such noncash expenses as accrued employee pension benefits as well as depreciation of existing assets but limits the ability of departments to use funds appropriated for noncash items to fund current spending. Treasury officials from the United Kingdom told us that in practice this near-cash limit on departmental spending is the focus of budgetary planning. Treasury officials also noted that although departments have public service agreements that include performance targets, the United Kingdom has not really used outcome-based budgeting. A second approach has been to use accrual information more selectively for programs or areas where it highlights annual costs that are not fully recognized in the cash-based budget. Iceland and Canada generally have taken this approach. Since 1998, Iceland has budgeted on an accrual basis except for capital expenditures, which remain on a cash basis. Iceland’s approach was designed primarily to improve transparency and accountability in its budget. The only areas with significant differences between cash- and accrual-based estimates are government employee pensions, interest, and tax revenue. Iceland also uses accrual budgeting for loan programs.
Accrual budgeting in Iceland has had only a limited effect on department-level budgets for two reasons. First, capital budgeting remains on a cash basis. Second, the oversight and administration of employee pensions, tax revenue, and the subsidy costs for loans are located in the Finance Ministry, not individual departments. Consequently, for most Icelandic departments, there are only minor differences between cash- and accrual-based estimates for the department’s operating budgets. The federal government of Canada currently uses both accrual and cash for budgeting purposes. The governmentwide budget is largely on an accrual basis; the information used to support appropriations (called the Main Estimates) and the appropriations themselves remain largely on a cash basis; certain areas such as the future pensions for current employees are measured on an accrual basis. Canada’s current government has been considering moving the Main Estimates and appropriations to a full accrual basis. Since the 1990s, the Canadian Office of the Auditor General and a key parliamentary committee, the House of Commons Committee on Public Accounts, have recommended moving appropriations to an accrual basis so that managers would make more informed decisions about the use of resources. The Office of the Auditor General and the committee think it is important to use the same accounting standards in the budget and the Estimates. The current government agrees that moving to an accrual-based budget and appropriations may have benefits. Officials from Canada’s Finance Department and Treasury Board Secretariat told us that it was important to study the experience of other governments with accruals before designing a new, accrual-based appropriations process. The officials also said the current government was consulting with members of Parliament and plans to present a model for Parliament’s consideration in 2008.
Regardless of the approach taken in use of accrual budgeting, all of the countries consider cash information to be important, particularly for monitoring the country’s fiscal position even where fiscal indicators are accrual based. Three of the countries—Australia, the Netherlands, and the United Kingdom—calculate the governmentwide surplus/deficit on either a cash or near-cash basis. In the other three countries—Iceland, New Zealand, and Canada—aggregate fiscal indicators are largely accrual based, but officials we spoke with said that cash information continues to be important in evaluating fiscal policy. Although Australia extensively uses accruals for departmental appropriations, Australian officials said that a key measure for policymakers is the country’s surplus measured on a cash basis. This is due in part to a goal of running cash-based surpluses over the business cycle to contribute to national savings. Both the Netherlands and the United Kingdom, as members of the European Union (EU), are required to report the net lending or borrowing requirement, which officials described as a near-cash number. Officials from the United Kingdom also said that cash information is important because the current government has pledged to avoid borrowing to finance current expenditures and to keep net debt at prudent levels. New Zealand makes several adjustments to the accrual-based operating balance to remove items that do not affect the underlying financing of government and must pay attention to its cash position to ensure it meets its debt-to-GDP target. Since 2000, at least two additional OECD countries—Denmark and Switzerland—have expanded the use of accruals in the budget without moving to full accrual budgeting. Switzerland has recently expanded accrual measurement as part of broader reforms to improve government financial reporting. 
However, Switzerland’s governmentwide surplus/deficit continues to be calculated on a cash basis and some government assets, such as defense assets, are not capitalized. Beginning in 2007, Denmark moved departmental operating budgets and associated capital spending to an accrual basis, primarily to support efforts to improve the performance of government departments. However, Denmark does not accrue capital spending on infrastructure, and both grants and transfer payments are measured on a cash basis. Sweden and Norway considered moving toward accrual budgeting but decided against it. Between 1999 and 2003 Sweden developed a plan to move from cash to accrual budgeting but in 2004 chose not to implement these plans. Swedish officials said that the government was concerned that accrual budgeting would diminish control of cash spending, potentially undermine fiscal discipline and lead to bigger investments, principally for infrastructure and war equipment. Norway went through a similar decision process. In 2003, a government-appointed committee recommended Norway move to full accrual budgeting, but the government at that time argued that the fiscal policy role of the budget is better served by cash-based appropriations and that the cash system enables better control of investments. Parliament agreed. However, Norway is testing accrual accounting at 10 agencies to achieve purposes similar to those cited by other countries—namely to provide better cost information; to establish a baseline for benchmarking costs, both between government agencies and in relation to private organizations; and to generate more complete information on the assets and liabilities of the government. Any significant expansion in the use of accruals creates a number of transitional challenges, including how to develop accounting standards for the budget and deciding what assets to value and how to value them. 
Beyond transitional issues, however, there are several challenges inherent to accrual budgeting, as we noted in 2000. These challenges illustrate the inherent complexity of using accrual-based numbers for managing a nation’s resources and led to some modifications in countries’ use of accrual reporting in the budget, such as reliance on more cash-based measures of the overall budget. Developing accounting standards to use in the budget and deciding what public assets to value and how to value them were initial challenges for countries moving to accrual budgeting. These took time to work out, and refinements continue. Some countries in our study sought to put the government’s financial reporting and budgeting on the same basis and to make them comparable to the private sector. In all, three of the six countries in our 2000 report and Denmark said that the technical standards used in the budget were substantially based on private-sector accounting standards. Only Canada and Switzerland said the technical standards were based on public sector accounting standards. Three countries—Australia, the Netherlands, and the United Kingdom—reported that the standards used for aggregate measures were based on national accounting standards (similar to the national income and product accounts in the United States) set by an international organization (e.g., the International Monetary Fund’s Government Finance Statistics or the European System of Accounts). Some countries in our study thought that adopting standards and concepts developed by independent bodies was important. While both cash and accrual accounting can be subject to gaming, some believe that accrual accounting in particular opens up the opportunity for manipulation. Three countries responded that a commission of experts outside of government developed the standards.
Other countries, however, said that although their standards were based on independent standards, the finance ministry or bureau of statistics has the ultimate responsibility for developing standards. In these countries, accounting standards were generally not adopted intact from an independent entity. For example, Switzerland’s accrual budgeting system is designed to be closely aligned with the international public sector accounting standards (IPSAS), but there were some deviations from IPSAS for constitutional reasons such as compliance with the cash-based balanced budget requirement. Also, for practical reasons, Switzerland does not capitalize defense investments, although IPSAS requires this. Besides developing the accounting standards to be used in the budget, a key challenge when switching to accrual budgeting, particularly for countries that choose to treat capital on an accrual basis (i.e., to capitalize assets and record them on the balance sheet) and provide funding for noncash depreciation costs, is to ensure that the recorded value of the capital asset is as accurate as possible. The value of the capital asset is used to calculate annual depreciation costs and in turn fund future capital acquisitions (replacements). If an agency overvalued its assets, it could be difficult to reduce the level of assets once accrual budgeting is implemented because the excess value represents a source of funding for the agency in the form of depreciation. On the flip side, if assets were undervalued, the recorded values may not provide good information on the cost of maintaining or replacing the assets. In 2004, for example, the New Zealand government purchased the nation’s rail network for only NZ$1. Officials with whom we spoke said the NZ$1 value did not yield good information about annual depreciation (maintenance) costs.
Therefore the New Zealand government revalued the network at NZ$10.3 billion in 2006; this revaluation led to an increase in the New Zealand government’s net worth. More importantly, the annual operating balance used in the budget now reflects the associated depreciation costs. In Australia, the government thought that capitalizing assets would lead to a better understanding of what is owned and what would be needed in the future. However, an Australian official said departments still request supplementary funding to replace old assets. The official said this may be because some departments were not fully funded for all capitalized assets in their opening balance sheets during the move to accrual budgets. It could also be because new asset purchases are not identical to the assets they replace or because agencies did not have sufficient assets to carry out their goals in the first place. Asset identification and valuation were cumbersome and time-consuming efforts for the countries that chose to capitalize assets. Indeed, one of the reasons that Iceland decided against capitalizing assets was the difficulty it would have faced identifying and agreeing on the asset values. Valuing assets poses special problems in the public sector since governments own unique assets such as heritage assets (e.g., museums and national parks) and defense assets (e.g., weapons and tanks). By nature, heritage assets are generally not marketable. Their cost is often not determinable or relevant to their significance and they may have very long life cycles (e.g., hundreds of years). Although the recognition issues associated with heritage assets are challenging, these assets are generally not very significant in terms of the overall effect on fiscal finances. As a result, valuing heritage assets may be seen as not worth the effort. Indeed, of all the countries we reviewed, only Australia and New Zealand capitalize all assets.
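The New Zealand rail example illustrates how sensitive depreciation-based funding is to the recorded asset value. A minimal sketch, assuming straight-line depreciation and a hypothetical 40-year service life (the life is an assumption for illustration, not a figure from the report):

```python
# How the recorded asset value drives annual depreciation funding.
# Straight-line depreciation; the 40-year life is an assumed figure.
def annual_depreciation(recorded_value, useful_life_years=40):
    return recorded_value / useful_life_years

# NZ$1 recorded value (2004 purchase price) vs. the NZ$10.3 billion revaluation.
understated = annual_depreciation(1)             # effectively zero per year
revalued = annual_depreciation(10_300_000_000)   # NZ$257.5 million per year
print(f"NZ$1 basis:     NZ${understated:.3f} per year")
print(f"Revalued basis: NZ${revalued:,.0f} per year")
```

An asset recorded at NZ$1 generates essentially no depreciation signal, while the revalued figure produces an annual expense on the order of hundreds of millions.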
The other countries exclude unique government assets such as highways, bridges, national parks, historical buildings, and military assets. The most common approaches for valuing assets are historical cost and fair value. (Fair value is usually the same as market value; in the absence of reliable market values, replacement cost is often used.) Five of seven countries in our study that measure capital assets on an accrual basis use fair or market value. Only two—Canada and Denmark—use historical cost. Use of market value relies on professional judgments to assess values and the values can fluctuate sharply between reporting periods. Although historical cost is based on a verifiable acquisition price and does not fluctuate, the reported amounts may not reflect the current value of the asset. Furthermore, it is often very difficult to estimate the original costs of government assets that are hundreds of years old or for which cost records have not been maintained. We have reported that enhancing the use of performance and “full-cost” information in budgeting is a multifaceted challenge that must build on reliable cost and performance data, among other things. Reliable financial information was also viewed as important to have before moving to accrual budgeting in some countries we reviewed. For example, in the Netherlands, an agency must receive a “clean audit” or an unqualified audit opinion for the year prior to moving to accrual budgeting and at least 6 months must have been spent in a trial run of the accrual accounting system. Other criteria must also be met before moving to accrual-based budgeting and receiving the associated flexibilities including being able to describe and measure the agency’s products and services. 
Before moving to accrual budgeting in New Zealand, a department had to define its broad classes of outputs, develop an accrual-based system capable of monthly and annual reporting, develop a cost-allocation system to allocate all input costs, including depreciation and overhead, to outputs, and provide assurance that it had an adequate level of internal controls. There was not, however, a requirement for an unqualified opinion for the agency. Accrual budgeting can also lead to improvements in financial information. Auditable financial accounts were not a prerequisite for moving to accrual budgeting in the United Kingdom. When the United Kingdom moved to accrual budgeting in 2001–2002, the government had 16 accounts for central government departments with “qualified” opinions. However, since the introduction of accrual budgeting, the United Kingdom reported that the number of qualified accounts had declined and the timeliness of financial reporting, which maximizes the usefulness of the information to managers, Parliament, and other stakeholders, has improved. Both cash and accrual measures are subject to volatility. Cash accounting may not be useful for measuring cost because spikes in receipts or payments can cause swings in the apparent “cost” of a program or activity. For example, if a program purchases a large amount of equipment in one year, it will appear costly under cash accounting, but under accrual accounting, only a proportion of the equipment’s cost in the form of depreciation would be shown in that year. Accrual measures experience volatility for other reasons such as changes in the value of assets and liabilities or changes in assumptions (e.g., interest rates, inflation, and productivity) used to estimate future payments. Because the accrual-based operating result can be volatile due to events outside the government’s control, New Zealand generally does not use it as a measure of the government’s short-term fiscal stewardship.
For example, under New Zealand’s accrual-based accounting standards, most assets are revalued at least every 3 years. New Zealand uses fair value, which is usually the same as market value when there is an active market. As noted above, market values tend to fluctuate between reporting periods. The changing market values can cause swings in the reported accrual-based operating results because such changes are reflected as revenue or cost in the year revalued. Therefore, changes in operating results may reflect not a fundamental change to the government’s finances but rather changes in the value of assets or liabilities that do not affect the government’s financing in the current period. Fluctuations can also result from annual changes in the value of liabilities when there are deviations between actual experience and the actuarial assumptions used or changes in actuarial assumptions. The liabilities for New Zealand’s government pension and insurance programs, for example, fluctuate from year to year partly due to changes in the underlying assumptions such as interest rates and inflation. To deal with this, the New Zealand Treasury removes revaluations and other movements that do not reflect the underlying financing of government from its operating balance. It is this measure—the Operating Balance Excluding Revaluations and Accounting Changes (OBERAC)—that has been the focus of policy debates in New Zealand since about 2001. More recently the New Zealand Treasury shifted its focus to a new measure—Operating Balance Excluding Gains and Losses (OBEGAL). Gains and losses can result when the value of an asset or liability differs from the value booked on the balance sheet. If the government sells an asset and the sales price equals book value, there is no gain or loss, because a cash inflow equal to book value is the exchange of one asset for another of equal recorded value.
However, if the sales price is more or less than the book value of the property, the difference is reflected as a gain or loss. New Zealand set up a fund to partially prefund future superannuation expenses. This fund reports gains and losses on its investments. Because the current government wishes to retain the investment returns in the fund, beginning with the 2007 budget the government has shifted its focus to the OBEGAL to ensure the government is meeting its fiscal objectives. New Zealand said that by excluding net gains and losses the OBEGAL gives a more direct indication of the underlying stewardship of the government.

Accrual accounting is inherently more complex than cash-based accounting, which is like managing a checkbook. One Australian official noted that using accrual measures can be challenging because many cabinet ministers and members of Parliament are trained in professional fields other than finance and accounting and may be more familiar with cash budgeting.

Focusing on accrual-based numbers can also be difficult given the existence of cash-based fiscal policy targets. For example, several countries—Canada, New Zealand, and the United Kingdom—have fiscal policy targets that limit the amount the country can borrow; borrowing (or debt) is based on cash measures. Also, while accrual numbers are used at the agency level in Australia, Australia has had a goal of running cash-based surpluses over the business cycle. This is due in part to a long-standing goal in Australia to improve national savings. At the time of our study, Australia’s Treasurer primarily focused on the cash-based fiscal position to show the government’s effect on national savings. Agency managers therefore have an obligation to manage both the cash and accrual implications of their resource use. New Zealand also pays attention to its cash position. New Zealand’s current fiscal policy goal is to maintain gross debt at around 20 percent of GDP.
This means that New Zealand’s cash position must be such that cash receipts equal cash outlays excluding interest expense. It also means the accrual-based operating surplus must be sufficient to cover investments—cash needed today but not expensed until the future. Cash information is still used at both the overall fiscal policy level and the department level in the United Kingdom. The current United Kingdom government has pledged to avoid borrowing to finance current expenditures and to maintain public debt at a prudent level. Both of the government’s fiscal targets are measured on a near-cash basis. Consequently, United Kingdom Treasury officials said that Treasury has imposed limits on departmental cash spending because spending directly affects the country’s cash-based fiscal position.

Different countries have taken different approaches to managing noncash expenses, particularly in regard to capital assets. In Australia and New Zealand, cash is appropriated for the full accrual amounts, including noncash items such as depreciation for existing assets. Agencies are expected to replenish their existing assets from funding provided for depreciation, and they have the funding to do so (subject to the oversight discussed below). The full cost of government is the focus of the operating budget rather than the immediate cash requirement. The downside of this approach is that control of cash and of capital acquisitions to replace assets can become challenging. If an agency is given cash to fund depreciation expense, there is a risk that it may use the funds to cover other expenses. Similarly, Parliament may lose control over the acquisition of capital assets since it will have funded them through depreciation provided in previous years. To address these concerns, countries have implemented cash management policies and specific controls over capital acquisitions.
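The depreciation-funding arrangement described above, and the control risk it creates, can be sketched with hypothetical figures (the asset cost and life below are illustrative assumptions):

```python
# Hypothetical sketch: an agency is appropriated cash equal to annual
# depreciation on an existing asset and is expected to accumulate it
# in a reserve to replace the asset at the end of its life.
ASSET_COST = 900_000   # hypothetical replacement cost
LIFE_YEARS = 3
ANNUAL_DEPRECIATION = ASSET_COST // LIFE_YEARS

reserve = 0
for year in range(1, LIFE_YEARS + 1):
    reserve += ANNUAL_DEPRECIATION  # cash appropriated for depreciation
    print(f"Year {year}: replacement reserve = {reserve:,}")

# If the reserve is kept intact, the replacement can be funded without
# a new capital appropriation -- the very outcome that prompts concern
# about losing up-front legislative control over acquisitions.
assert reserve == ASSET_COST
```

The sketch also makes the risk visible: nothing in the cash flow itself prevents the agency from spending the reserve on other expenses, which is why the countries discussed here layered cash management policies and approval thresholds on top of the accrual framework.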
For example, like Australia and New Zealand, the United Kingdom initially provided funding for the full cost of programs, outputs, or outcomes with the thought that it would generate efficiencies. Over time, however, United Kingdom Treasury officials said they became concerned that some departments were shifting noncash expenses to cash expenses, which adversely affected the government’s borrowing requirement. As a result, the United Kingdom has imposed controls on cash. Departments’ budgets now include both the full accrual costs and the cash required, and the Parliament approves both numbers. This not only helps ensure that department spending is in line with the government’s fiscal policy goals but also reinforces Parliament’s control over capital acquisitions.

Australia also reported that it is considering a model that would give the Parliament both cash and accrual information in a form that better meets its needs and preferences. On the basis of reports by the Australian National Audit Office and others that departments could potentially use funds provided for depreciation of existing assets to fund noncapital acquisitions, or were not appropriately using the funds to repair or replace existing assets, the Australian Senate expressed concern about the transparency of funding for depreciation and the potential loss of control over new capital purchases. The Senate recommended that the government consider reporting and budgeting for capital expenditures separately, including a subdivision of expenditures between asset replacement (i.e., the depreciation component) and asset expansion.

All countries we reviewed that accrue capital investments have a process in place to facilitate oversight over capital. While most of these countries include depreciation of existing assets in operating budgets, most also preserve up-front control of capital by approving capital purchases above a certain threshold.
For example, in New Zealand, all capital purchases above NZ$15 million must be approved by the cabinet. In Australia, any capital purchase above A$10 million in any one year must have a business case prepared and must be included in the budget proposal to be submitted for government approval. The United Kingdom Treasury reviews departmental capital plans. In the Netherlands, capital purchases by agencies are made through loans provided by the Ministry of Finance, which must approve the level of loans per agency.

As previously noted, all of the countries in our study are parliamentary systems in which the political party that controls the current government has primary control over budgetary matters. However, in some countries Parliaments have expressed general concerns that the budget presentations are confusing under accrual budgeting. Several countries in our study use more than one method of budget accounting, which can be confusing for Parliament and other users. In Australia, for example, where two accounting standards are currently used in the budget, the Senate has recommended the adoption of a single agreed-upon accounting standard. In Canada, the government reports the budget surplus/deficit on an accrual basis but department-level appropriations remain on a cash basis. Canadian audit officials we spoke with said the Parliament wants the department-level appropriations prepared on an accrual basis in part because the two different measures and crosswalks are confusing. Canada is considering moving department-level budgets to an accrual basis in order to provide consistent financial information for all levels of government and a better linkage between the budget and appropriations. In the United Kingdom, some members of Parliament said it was unclear how the accrual-based appropriations related to the nation’s fiscal goals, which are largely cash based.
As a result, the government is undertaking an “alignment project” to better align budget accounts with the government’s two fiscal rules to (1) avoid borrowing to finance current expenditures and (2) keep net debt at prudent levels.

Australia’s Senate expressed concern about reduced transparency of some information and said that the budget could be improved if data were presented at the program level (in addition to outcomes). The Australian government official we spoke with said that the government already provides the Parliament and public with extensive information on both the full costs of government activities and the performance of agencies. It was not clear to the official, however, that providing more detailed information would improve the quality and usefulness of information, considering the administrative workload involved and the potential for creating more “red tape” for managers. The Australian official thought more concise and relevant reports might be more useful than more information.

Despite the inherent challenges, our six case study countries have continued to use accrual budgeting, and additional countries have adopted accrual budgeting since 2000. These countries view the benefits of having accrual-based cost information available to program managers for resource allocation decisions as outweighing the associated difficulties. In several countries, officials we spoke with said they believe accrual budgeting provides better information on the cost of annual operations and performance than cash-based budgeting, particularly in regard to the use of capital assets and programs that incur costs that are not paid in cash today. In general, countries said that accrual-based cost information contributes to improved resource allocation and program management decisions. Under cash budgeting, a program’s budget shows only the immediate cash outlay and not the cash that will have to be paid in the future for the service provided today.
Accrual budgeting, which recognizes resources as they are used to produce goods and services, provides the full cost of all programs and may allow for better comparisons between different methods of delivering government services. New Zealand officials, in particular, believe the cost information provided by accrual-based budgeting has led to efficiencies and better resource allocation decisions. New Zealand credited the cost information provided by accrual budgeting with helping it identify where and how to cut spending to put the country on a more sound fiscal footing in the early 1990s.

Several of the countries have attributed specific improvements at the departmental level to accrual budgeting. For example, under accrual accounting, the cost of a loan includes the subsidy cost—the cost of lending below market rates and provisions for bad debt. When New Zealand recently made student loans interest free, the cost of the subsidy was taken into consideration during the policy debate. The United Kingdom also reported that the more complete information on student loans directly affects lending decisions at the Department of Education and Employment.

In several of the countries, one perceived advantage of accruals was to facilitate comparisons between the public sector and private sector. Accrual-based cost estimates could be used to “benchmark,” or compare, the cost of existing public service providers to alternative providers in either the public or private sectors. The OECD reported in 2005 that both agencies and core ministries in the Netherlands were content with the results from accrual budgeting at the agencies. Agencies, which now receive a budget for the full cost of their activities, like the flexibilities under accrual budgeting, while core ministries value the output and price information they receive from the agencies.
The ministries also reported that agencies’ use of accrual budgeting enables them to consider the performance of the agencies relative to alternatives (i.e., decentralization to subnational government or contracting out). At the same time, the availability of the alternatives enabled ministries to put more pressure on agencies to improve cost efficiency and to reduce prices. New Zealand, however, reported that there is little evidence that similar types of outputs are compared or benchmarked in the way that was thought desirable when the reforms were initiated. Concerns about the usefulness and robustness of cost accounting systems continue, as does concern that the specification of outputs is not of a sufficient standard to ensure high-quality government performance.

In several case study countries, accrual budgeting helped policymakers recognize the full cost of certain programs at an earlier point and make decisions that limited future cash requirements. For example, as reported in 2000, both New Zealand and Iceland credited accrual budgeting with highlighting the longer-term budgetary consequences associated with public sector employee pension programs. In Iceland, accrual budgeting showed the consequences of wage negotiations on future public sector employee pension outlays. The full costs of these agreements were not apparent to the public until the adoption of accrual budgeting. At that time, Icelandic officials told us that there was no longer public support for decisions that were so costly in the long term. Similarly, New Zealand officials decided to discontinue the defined benefit public employee pension program after pension liabilities were recognized on the balance sheet and the expense incurred was included in the budget. Since 2000, reforms aimed at putting government employee pensions on a more sustainable footing were enacted in Australia and the United Kingdom.
In Australia, unfunded pension liabilities for government employees are currently the largest liability on Australia’s balance sheet (which is part of its budget documents). To cover this liability, the Australian government recently established an investment fund called the “Future Fund” to help meet future pension payments. Government employee pensions in the United Kingdom were also reformed. In 2007, the United Kingdom government raised the pension age to 65 for employees hired beginning in July 2007 and limited the government’s contribution to pensions to 20 percent. United Kingdom officials acknowledged that there was already recognition that the program needed significant reform before the introduction of accrual measures, but said accrual budgeting helped highlight the full cost of pension liabilities and forced the debate on pension reform to happen sooner.

Accrual budgeting has also changed the information available for insurance programs, veterans benefits, and environmental liabilities. As reported in 2000, New Zealand officials attributed reforms of the Accident Compensation Corporation program to recognizing the liability and expenses from providing accident coverage in the budget. Recognizing the estimated future outlays associated with current accidents reduced budget surpluses by NZ$500 million. At that time, officials attributed New Zealand’s decision to raise premiums and add surcharges largely to this inclusion of program costs in the budget. Also, in 2002 New Zealand ratified the Kyoto Protocol, committing to reduce net emissions of greenhouse gases over the 2008–2012 period. Consistent with financial accounting standards, New Zealand recognized a liability for the obligation created by this commitment. New Zealand officials credited accrual accounting with helping them focus on ways to manage environmental liabilities. Canadian officials credited accrual information with leading to recent changes in veterans benefits.
The use of accrual accounting requires Veterans Affairs Canada to record the full cost of veterans benefits in the year they are earned rather than paid. Therefore, when considering changes to veterans benefits, Veterans Affairs Canada considered the effect of future cash flows in discounted terms. Initial results indicated that the planned changes to veterans benefits represented a substantial expense for the year. As a result, Veterans Affairs Canada modified the admissibility requirements, limiting the financial effect of the changes.

Accrual budgeting was not used to increase awareness of long-term fiscal challenges that are primarily driven by old-age public pensions and health care programs. None of the countries in our study include future social insurance payments in the budget. Like the United States, the other countries do not consider future social insurance payments to be liabilities. Instead, in recent years, several countries have begun reporting on the sustainability of the government’s overall finances over longer-term horizons, given demographic and fiscal trends.

Aging is a worldwide phenomenon, and one of the key challenges that all developed economies face over the coming decades is demographic change. This demographic shift—driven by increased life expectancies, falling fertility rates, and the retirement of the baby boom generation—will place increased pressure on government budgets (i.e., public pensions and health care). For example, by 2047, a quarter of Australia’s population is projected to be aged 65 and over—nearly double the current proportion. Similarly, New Zealand projects that by 2050 the number of people over 65 will grow almost threefold, while those 85 and over will grow sixfold. Similar trends hold for the other countries we studied.
Although public pension benefits are a major driver, the most challenging aspect of the long-term fiscal outlook in many of the countries we studied—as in the United States—is health care spending. Health spending is expected to increase significantly over the next 40 years due to population aging, new medical technologies, new drugs, and other factors. For example, Australia projects that health care spending as a share of GDP will nearly double by 2046–2047. Similarly, the United Kingdom projects that its health spending will increase faster than other types of spending—from around 7½ percent of GDP in 2005–2006 to around 10 percent of GDP by 2055–2056. New Zealand projects a rise in the ratio of health spending to GDP of 6.6 percentage points between 2005 and 2050, resulting in health spending of about 12 percent of GDP. Similar trends are projected in the other countries we reviewed.

In recent years, many countries in our study have started preparing long-term fiscal sustainability reports. Frequently cited reasons for this are to improve fiscal transparency and provide supplemental information to the budget; to increase public awareness and understanding of the long-term fiscal outlook; to stimulate public and policy debates; and to help policymakers make informed decisions. These reports go beyond the effects of individual pension and health care programs to show the effect of these programs on the government budget as a whole. Unlike accrual or cash budgeting, which are intended to provide annual cost information, fiscal sustainability reporting provides a framework for understanding the government’s long-term fiscal condition, including the interaction of federal programs, and whether the government’s current programs and policies are sustainable. In fiscal sustainability reports, countries measure both the effect of current policy on the government’s fiscal condition and the extent of policy changes necessary to achieve a desired level of sustainability.
These countries hope that a greater understanding of the profound changes they will experience in the decades ahead will help stimulate policy debates and public discussions that will assist them in making fiscally sound decisions for current and future generations and in achieving high and stable rates of long-term economic growth.

Fiscal sustainability is generally described by countries as the government’s ability to manage its finances so it can meet its spending commitments now and in the future. A sustainable fiscal policy would encourage investment and allow for stable economic growth so that future generations would not bear a tax or debt burden for services provided to the current generation. An unsustainable condition exists when demographic and other factors are projected to place significant pressures on future generations and government finances over the long term and result in a growing imbalance between revenues and expenditures.

Four of our six case study countries produce reports on long-term (i.e., more than 10 years) fiscal sustainability. The Netherlands first issued a report on the long term in 2000. Both the United Kingdom and Australia followed, issuing their first reports in 2002. New Zealand issued its first report in 2006. Of our case study countries, only Canada and Iceland currently do not issue long-term fiscal sustainability reports. However, Canada is planning to issue a comprehensive fiscal sustainability and intergenerational report in the near future. Of our limited review countries, Norway reported that it has traditionally provided Parliament reports on long-term budget projections as well as fiscal sustainability analyses. Further, Switzerland is planning to issue a long-term fiscal sustainability report in early 2008.
The European Commission is also increasing its focus on the fiscal sustainability of the EU member states, including the Netherlands, United Kingdom, Denmark, and Sweden, as part of the Stability and Growth Pact (SGP). The SGP, an agreement among EU member states on how to conduct and maintain their Economic and Monetary Union obligations, requires member states to submit Stability or Convergence Reports, which are used by the European Council to survey and assess each member’s public finances. The guidelines for the content of these reports were changed in 2005 to include a chapter with long-term projections of public finances and information on the country’s strategies to ensure the sustainability of public finances. The European Commission uses this information to annually assess and report on the long-term sustainability of all EU members, including consideration of quantitative measures (e.g., primary balance, debt-to-GDP) and qualitative consideration of other factors, such as structural reforms undertaken and the reliability of the projections. Such reporting includes an assessment of the sustainability of member countries’ finances, policy guidance to EU members to improve sustainability, and discussion of the effect of significant policy changes on the sustainability of member countries’ finances. The Commission released its first comprehensive assessment of the long-term sustainability of public finances in October 2006.

Whether a government will be able to meet its commitments when they arise in the future may depend on how well it reduces its debt today so the burden does not fall entirely to future generations. Countries may have different assumptions about what is sustainable, but one aim is to keep debt at “prudent levels.” Several of our case study countries have set debt-to-GDP targets in their efforts to address fiscal sustainability issues.
For example, Canada wants to reduce its net debt (i.e., financial liabilities less financial assets) for all levels of government to zero by 2021. Similarly, New Zealand’s current objective is to reduce debt to around 20 percent of GDP over the next decade. The United Kingdom, under its sustainable investment rule, requires that public sector net debt be maintained below 40 percent of GDP over the economic cycle. Australia and the Netherlands have no explicit debt level targets, although the Netherlands is subject to EU limits on general government debt.

The countries studied used a number of measures to assess the fiscal sustainability of their policies. Common approaches to assessing fiscal sustainability include cash-flow measures of revenue and spending and public debt as a percent of GDP, as well as summary measures of fiscal imbalance and fiscal gap (see table 2). Each measure provides a different perspective on the nation’s long-term financing. Cash-flow measures are useful for showing the timing of the problem and the key drivers, while measures such as the fiscal imbalance or fiscal gap are useful for showing the size of action needed to achieve fiscal sustainability. Each measure has limitations by itself and presents an incomplete picture. Therefore, most countries use more than one measure to assess fiscal sustainability.

Two measures—the fiscal gap and the fiscal imbalance—show the size of the problem in terms of the action needed to meet a particular budget constraint. Changes in these measures over time are useful for showing improvement or deterioration in the overall fiscal condition. The fiscal gap shows the change in revenue or noninterest spending needed immediately and maintained every year to achieve a particular debt target at some point in the future. The fiscal imbalance (or intertemporal budget constraint) is similar to the fiscal gap, but the calculation assumes all current debt is paid off by the end of the period.
These summary measures can also be calculated in terms of the adjustment needed in the future if adjustment is delayed (which would increase its size). The change in policy can be in the form of adjustments to taxes, spending, or both. A positive fiscal gap or imbalance implies that fiscal policy should be tightened (i.e., spending cut or taxes raised), while a negative fiscal gap or imbalance implies that fiscal policy could be loosened (i.e., spending increased or taxes reduced). A fiscal gap or imbalance implies potential harm to future generations if action to make public finances sustainable is deferred, thus requiring more budgetary actions (or higher interest costs) in the future than today. It should be noted that a fiscal gap or imbalance of zero over a finite period does not mean that current fiscal policy is sustainable forever. For example, debt could still be rising faster than GDP at the end of the period. Another limitation of these summary measures is that by definition they do not provide information on the timing of receipts and outlays, which is important.

Most of the countries we studied used share-of-GDP measures rather than present value dollar measures. In part this is to avoid the situation in which a small change in the discount rate assumption leads to large swings in the dollar-based sustainability measures. Present value dollar measures are highly sensitive to assumptions about the discount rate. An increase of 0.5 percentage points in the discount rate used to calculate the U.S. fiscal gap reduces the present value of the fiscal gap from $54.3 trillion to $47.7 trillion; in contrast, such a change results in a smaller proportional change to the gap as a share of GDP, from 7.5 to 7.3 percent. Also, since the numbers can be so large, it may be difficult for policymakers and the general public to understand them without placing them in the context of the resources available in the economy to finance the fiscal gap.
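The discount-rate sensitivity described above can be illustrated with a stylized calculation. All parameters below (GDP level, growth rate, deficit path, and horizon) are hypothetical assumptions chosen for illustration, not the figures underlying the U.S. estimates cited in the text:

```python
# Stylized fiscal gap: the present value (PV) of projected primary
# deficits, expressed both in dollar terms and as a share of the PV
# of GDP over the same horizon.
def fiscal_gap(discount_rate, years=75, gdp0=14.0, gdp_growth=0.045):
    """Return (gap in PV $ trillions, gap as share of PV of GDP)."""
    pv_deficits = 0.0
    pv_gdp = 0.0
    for t in range(1, years + 1):
        deficit_share = 0.01 + 0.0015 * t        # deficits grow over time (assumed)
        gdp_t = gdp0 * (1 + gdp_growth) ** t     # projected nominal GDP
        discount = (1 + discount_rate) ** t
        pv_deficits += deficit_share * gdp_t / discount
        pv_gdp += gdp_t / discount
    return pv_deficits, pv_deficits / pv_gdp

low = fiscal_gap(0.055)
high = fiscal_gap(0.060)   # discount rate 0.5 percentage points higher
print(f"dollar gap:       {low[0]:.1f} -> {high[0]:.1f} trillion")
print(f"share-of-GDP gap: {low[1]:.4f} -> {high[1]:.4f}")
```

Raising the discount rate shrinks both measures, but the dollar figure falls proportionally much more, because in the share-of-GDP measure the GDP denominator is discounted in the same way as the deficits in the numerator.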
Fiscal sustainability reports are required by law in two countries—Australia and New Zealand. The legislation underpinning both countries’ fiscal sustainability reports does not dictate in detail what measures should be included in the report. Rather, the law specifies only the frequency of reporting (i.e., every 4 years for New Zealand and every 5 years for Australia), the years to be covered, and the overall goal. Both Australia and New Zealand are required to assess the long-term sustainability of government finances over a 40-year horizon. Switzerland is required by law and an accompanying regulation to issue a sustainability report periodically, but at least every 4 years. Neither the Netherlands’ nor the United Kingdom’s reports are required by law. Instead, the reports stem from political commitments of the current government. The Netherlands prepared its first report in 2000 and reported again in 2006. In the United Kingdom, the current government made a political commitment to report annually on the long-term fiscal challenges as part of its fiscal framework and has done so since 2002. Canada’s upcoming report also stems from a commitment made by the current government. A drawback of not having any legal requirement for the report is that future governments may or may not continue what the current government started.

The size of a nation’s fiscal gap or fiscal imbalance will depend on the time period chosen. Even if a particular sustainability condition is satisfied over the chosen period, there may still be fiscal challenges further out. Extending the time period can partially address this limitation, but it increases uncertainty. Most of the case study countries that prepare fiscal sustainability reports cover the next 40 to 50 years. However, the Netherlands report goes out through 2100.
The United Kingdom calculates the intertemporal budget constraint over an infinite time horizon, which involves a high degree of uncertainty. Choosing the horizon for the fiscal gap or imbalance calculations therefore involves a trade-off: the horizon should be long enough to capture all major future budgetary developments but short enough to minimize uncertainty. It may be best to present these measures over a range of horizons.

As with any long-term projection, uncertainty is an issue. To deal with the uncertainty of projections, countries have done sensitivity analysis. For example, the United Kingdom performed a sensitivity analysis using different assumptions for productivity growth and interest rates. The United Kingdom found that the fiscal gap was robust to changes in productivity growth, meaning that the required policy action changed little. However, the fiscal gap was more sensitive to changes in the interest rate assumption. For example, in the United Kingdom, an increase in the interest rate assumption from 2.5 percent to 3.0 percent increases the fiscal gap for the 50-year period by 50 percent, from 0.5 percent to 0.75 percent of GDP.

Sustainability requirements are important when setting short- and medium-term policy targets. The sooner countries act to put their governments on a more sustainable footing, the better. Acting sooner rather than later permits changes to be phased in more gradually and gives those affected time to adjust to the changes. Citizens can adjust their savings now to prepare for retirement. In the Netherlands, a medium-term fiscal target has been set based on the information presented in the sustainability report.
The current government has explicitly linked expenditure ceilings and revenue targets to attaining a structural fiscal surplus of 1 percent of GDP at the end of 2011, which the Netherlands Bureau of Economic Policy Analysis has estimated is needed for public finances to be sustainable given the impending population aging. In addition, a study group recommended that the adjustments be introduced gradually so that they are bearable for all generations. According to New Zealand officials, its fiscal sustainability report shows that long-term demographic pressures will make it increasingly hard to meet fiscal objectives and therefore policy adjustments will be required. Recognizing that small changes made now will help to prevent making big changes later on, officials said the report has encouraged and enabled greater consideration of the long-term implications of new policy initiatives in the budget process. New Zealand intends to link departments’ annual Statements of Intent to long-term projections. Under this approach, departmental objectives will have to be modified or justified to meet the long-term objectives.

Before implementing accrual budgeting, some countries were experiencing moderate to large deficits. Some countries’ dependence on trade and foreign borrowing led to concerns that increased deficits could lead to rising interest rates, devaluation of the currency, and ultimately a financial crisis. As a result, fiscal discipline was necessary. Accrual budgeting was adopted as part of larger reforms to improve transparency, accountability, and government performance. The United States faces long-term fiscal challenges that, absent reforms, could have adverse effects in the form of higher interest rates, reduced investment, and more expensive imports, ultimately threatening our nation’s well-being. The range of approaches used by countries in our study illustrates that accrual budgeting need not be viewed as a “one size fits all” choice.
The experiences of countries in our study show that the switch to accrual budgeting was most beneficial for programs where cash- or obligations-based accounting did not recognize the full program cost up front. As we stated in 2000 and in other GAO reports, increased accrual information in certain areas of the budget—insurance, environmental liabilities, and federal employee pensions and retiree health—can help the Congress and the President better recognize the long-term budgetary consequences of today’s operations and help prevent these areas from becoming long-term issues. However, accrual budgeting raises significant challenges for the management and oversight of capital purchases and noncash expenses, especially depreciation. Many of our case study countries implemented additional controls to maintain up-front control over resources within their accrual budget frameworks. Indeed, in the U.S. system of government where the Congress has the “power of the purse,” maintaining control over resources is important. While cost and performance information provided under accrual budgeting can be useful, this information must be reliable if budget decisions are to be based on it. We have reported that the financial management systems at the majority of federal agencies are still unable routinely to produce reliable, useful, and timely financial information. Until there is better financial information, a switch to full accrual budgeting may be premature. As we reported in a previous report on U.S. agencies’ efforts to restructure their budgets to better capture the full cost of performance, the use of full-cost information in budget decisions may reflect rather than drive the development of good cost information in government.
Further, challenges exist in estimating accrual-based cost information for some areas, including veterans compensation, federal employee pensions and retiree health, insurance, and environmental liabilities, that require a significant amount of the government’s future cash resources. For example, estimates of future outlays for pensions or veterans compensation depend on assumptions of future wages, inflation, and interest rates that are inherently uncertain and subject to volatility. Trends in health care costs and utilization underlying estimates of federal employee postretirement health benefits have also been volatile. The estimated cleanup costs of the government’s hazardous waste are another area where the accrued expenses may not be based on reliable estimates. Not all environmental liabilities have been identified and cleanup and disposal technologies are not currently available for all sites. However, in areas such as these, it may be preferable to be approximately right rather than exactly wrong. Failure to pay attention to programs that require future cash resources can further mortgage our children’s future. Although accrual budgeting can provide more information about annual operations that require future cash resources, it does not provide sufficient information to understand broader long-term fiscal sustainability. An accrual budget does not include costs associated with future government operations and thus would not help recognize some of our greatest long-term fiscal challenges—related to Social Security, Medicare, and Medicaid. A growing trend in other countries is to develop reports on fiscal sustainability that evaluate the fiscal condition of not only the key drivers of the nation’s long-term fiscal outlook but government as a whole.
Fiscal sustainability reports that show future revenue and outlays for social insurance programs and the interrelationship of these programs with all federal government programs would provide a comprehensive analysis of the nation’s fiscal path and the extent to which future budgetary resources would be sufficient to sustain public services and meet obligations as they come due. By highlighting the trade-offs between all federal programs competing for federal resources, such a report would improve policymakers’ understanding of the tough choices that will have to be made to ensure future generations do not bear an unfair tax or debt burden for services provided to current generations. Most countries recognize the need for various measures of fiscal position, including the projected debt-to-GDP ratios and fiscal gap measures. Since no single measure or concept can provide policymakers with all the information necessary to make prudent fiscal policy decisions, it is necessary to use a range of measures or concepts that show both the size of the problem and the timing of when action is needed. This study and the deterioration of the nation’s financial condition and fiscal outlook since 2000 confirm our view that the Congress should consider requiring increased information on the long-term budget implications of current and proposed policies on both the spending and tax sides of the budget. In addition, the selective use of accrual budgeting for programs that require future cash resources related to services provided during the year would provide increased information and incentives to manage these long-term commitments. While the countries in our study have found accrual-based information useful for improving managerial decision making, many continue to use cash-based information for broad fiscal policy decisions. This suggests that accrual measures may be useful supplements to rather than substitutes for our current cash- and obligations-based budget.
Presenting accrual information alongside cash-based budget numbers, particularly in areas where it would enhance up-front control of budgetary resources, would put programs on a more level playing field and be useful to policymakers both when debating current programs and when considering new legislation. Since accrual-based budgeting would not provide policymakers with information about our nation’s largest fiscal challenges—Social Security, Medicare, and Medicaid—fiscal sustainability reporting could help fill this void. The reports could include both long-term cash-flow projections and summary fiscal gap measures for the whole of government that would show both the timing and overall size of the nation’s fiscal challenges. Accrual budgeting and fiscal sustainability reporting are only means to an end; neither can change decisions in and of itself. The change in measurement used in the budget provides policymakers and program managers with different information, but the political values and instincts of policymakers may not change. While recognizing fuller costs could help inform policymakers of the need to reform, it will require action on their part to address them. Any expansion of accrual-based concepts in the budget or increased reporting requirements would need to be accompanied by a commitment to fiscal discipline and political will. To increase awareness and understanding of the long-term budgetary implications of current and proposed policies for the budget, the Congress should require increased information on major tax and spending proposals. In addition, the Congress should consider requiring increased reporting of accrual-based cost information alongside cash-based budget numbers for both existing and proposed programs where accrual-based cost information includes significant future cash resource requirements that are not yet reflected in the cash-based budget.
Such programs include veterans compensation, federal employee pensions and retiree health, insurance, and environmental liabilities. To ensure that the information affects incentives and budgetary decisions, the Congress could explore further use of accrual-based budgeting for these programs. Regardless of what is decided about the information and incentives for individual programs, the Congress should require periodic reports on fiscal sustainability for the government as a whole. Such reports would help increase awareness of the longer-term fiscal challenges facing the nation in light of our aging population and rising health care costs as well as the range of federal responsibilities, programs, and activities that may explicitly or implicitly commit the government to future spending. We are sending copies of this report to interested parties. Copies will also be sent to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Susan Irving at (202) 512-9142 or [email protected] if you have any questions about this report. Key contributors are listed in appendix II. 
To update the findings of our 2000 report, we examined (1) where, how, and why accrual budgeting is used in select Organisation for Economic Co-operation and Development (OECD) countries and how it has changed since 2000; (2) what challenges and limitations were discovered and how select OECD countries responded to them; (3) what select OECD countries perceived the effect to have been on policy debates, program management, and the allocation of resources; (4) whether accrual budgeting has been used to increase awareness of long-term fiscal challenges and, if not, what is used instead; and (5) what the experience of select OECD countries and other GAO work tell us about where and how the increased use of accrual concepts in the budget would be useful and ways to increase the recognition of long-term budgetary implications of policy decisions. To address these objectives, we primarily focused on the six countries in the 2000 GAO report: Australia, Canada, Iceland, the Netherlands, New Zealand, and the United Kingdom. We also did a limited review of two other nations—Denmark and Switzerland—that have recently expanded the use of accrual measures in the budget. Since these countries may not provide a complete picture of the potential limitations or the use of alternative ways to increase the focus on long-term fiscal challenges, we also looked at two countries—Norway and Sweden—that considered expanding the use of accrual measurement in the budget but decided against it, to understand why. We reviewed budget publications and used a set of questions to gather information on how and why accrual concepts are used in the budget in the selected countries and how this has changed since 2000. For context, we also reviewed the results of a recent survey done by the OECD on budgeting practices in all OECD countries and compared them to older survey results to understand general trends in the use of accrual budgeting over time.
To identify factors that facilitated accrual budgeting; strategies for addressing commonly cited implementation challenges; and how and where accrual budgeting has or has not changed the budget debate, we primarily focused on the six countries studied in 2000. We interviewed (by e-mail, telephone, and videoconferencing) officials from the budget and national audit offices in select countries and reviewed official budget documents and related literature to gather information on the challenges and limitations of accrual budgeting; how the use of accruals in the budget has affected policy debates, resource allocation decisions, and program management; and other approaches used to address long-term fiscal challenges. We did not interview parliamentary officials or staff or program managers. The information on foreign laws in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources. We identified key themes from the experience of other nations, reviewed past GAO work, and considered the differences between other nations and the United States to identify useful insights about how to use more accrual-based or other information to inform budget debates. The experience of any one OECD country is not generalizable to other countries. In analyzing other countries’ experiences and identifying useful insights for the United States, it is important to consider the constitutional differences between Parliament in parliamentary systems of government and the Congress of the United States, especially in the role each legislature plays in the national budget process. The U.S. Congress is an independent and separate, but coequal, branch of the national government with the constitutional prerogative to control federal spending and resource allocation.
Many important decisions that are debated during the annual budget and appropriations process in the United States occur in case study countries before the budget is presented to Parliament for approval. Also, most case study countries generally deal with the approval of obligations through agency or bureaucratic controls whereas in the United States congressional approval (i.e., “budget authority”) is required before federal agencies can obligate funds. Further, most case study countries used purely cash reporting for budgeting before adopting accrual budgeting. In contrast, the United States’ obligation-based budgeting already captures many obligations not apparent in a purely cash system. These differences are likely to influence perspectives on the trade-offs associated with the use of accrual budgeting, particularly in terms of accountability and legislative control. Key contributors to this assignment were Jay McTigue, Assistant Director; Melissa Wolf, Analyst-in-Charge; Michael O’Neill; and Margit Willems Whitaker.

The federal government’s financial condition and fiscal outlook have deteriorated dramatically since 2000. The federal budget has gone from surplus to deficit and the nation’s major reported long-term fiscal exposures—a wide range of programs, responsibilities, and activities that either explicitly or implicitly commit the government to future spending—have more than doubled. Current budget processes and measurements do not fully recognize these fiscal exposures until payments are made. Increased information and better incentives to address the long-term consequences of today’s policy decisions can help put our nation on a more sound fiscal footing. Given its interest in accurate and timely information on the U.S. fiscal condition, the Senate Committee on the Budget asked us to update our study of other nations’ experiences with accrual budgeting and look at other ways countries have increased attention to their long-term fiscal challenges.
In 2000, GAO reviewed the use of accrual budgeting--or the recording of budgetary costs based on financial accounting concepts--in Australia, Canada, Iceland, the Netherlands, New Zealand, and the United Kingdom. These countries had adopted accrual budgeting more to increase transparency and improve government performance than to increase awareness of long-term fiscal challenges. Accrual budgeting continues to be used in all six countries; Canada and the Netherlands, which use accrual information selectively, considered expanding the use of accruals but thus far have made only limited changes. Since 2000, other countries have considered using accrual budgeting. For example, Denmark and Switzerland began using accrual budgeting on a selective basis. Norway and Sweden, however, rejected accrual budgeting primarily because they believed cash budgeting enables better control over resources. Countries have taken different approaches in the design of their accrual budgets. Regardless of the approach taken, cash information remains important in all the countries for evaluating the government's finances. Other countries' experiences show that accrual budgeting can be useful for recognizing the full costs of certain programs, such as public employee pensions and retiree health, insurance, veterans benefits, and environmental liabilities, that will require future cash resources. However, these other countries do not use accrual budgeting to recognize their long-term fiscal challenges that are primarily driven by public health care and pension programs. Instead, many countries in GAO's study have begun preparing fiscal sustainability reports to help assess these programs in the context of overall sustainability of government finances. European Union members also annually report on longer-term fiscal sustainability. 
Although no change in measurement or reporting can replace substantive action to meet our longer-term fiscal challenge, GAO believes that better and more complete information on both the full-cost implications of individual decisions and on fiscal sustainability of the government's finances can help.
Before FDA will approve a new drug application (NDA), allowing the drug to be marketed in the United States, its manufacturer must demonstrate to FDA’s satisfaction that the drug is safe and effective for its intended use and patient populations. The review process includes examination of the proposed drug labeling, which specifically cites, among other things, the conditions and population the drug has been approved to treat. After the NDA and labeling are approved, any promotional materials used or distributed by the drug companies must be consistent with and limited to the information included in the approved labeling. Drug companies that want to expand the approved uses for their products, and promote those new uses, must submit new safety and effectiveness data and obtain FDA’s approval prior to marketing them for new uses. The Federal Food, Drug, and Cosmetic Act (FFDCA) authorizes FDA to regulate the promotion of prescription drugs. FFDCA and implementing regulations require that prescription drug promotional materials not be false or misleading. FDA has issued implementing regulations that attempt to prevent overstatement in product claims and require balanced disclosure of side effects, contraindications, and warnings. They state, in part, that drug promotions may not recommend or suggest any use that is not in the approved labeling. Any approved new drug promoted for an off-label use is “misbranded” and in violation of FFDCA. FDA has traditionally differentiated industry-supported scientific and educational activities that are otherwise independent and nonpromotional from other industry activities that are neither. For drugs, only the latter have been treated as labeling or advertising and therefore subject to the applicable provisions of FFDCA and its implementing regulations.
In 1997, Congress passed the Food and Drug Administration Modernization Act (FDAMA), which included a provision authorizing drug manufacturers to disseminate journal articles and referenced publications on off-label uses under certain conditions. The FDAMA provision expired on September 30, 2006. However, on February 15, 2008, FDA released draft guidance that recommends good practices for drug companies concerning the dissemination of articles and publications that address off-label use. Among other things, this draft guidance recommends that drug companies limit such activities to the distribution of reprints of peer-reviewed research from scientific or medical journals published by organizations with editorial boards that use experts who have demonstrated expertise in the subject of the article. The draft guidance also states that these reprints should not be material that is written, edited, or otherwise influenced by drug companies or individuals with financial ties to them, nor would false or misleading information be allowed. While some aspects of the draft guidance are similar to the FDAMA provision and its implementing regulations, there are two key differences. The draft guidance does not address, recommend, or suggest that (1) reprints of journal articles and reference publications on off-label uses of drugs be previewed by FDA or (2) supplemental NDAs containing new safety and effectiveness data on the off-label use discussed in the reprint should be sent to FDA. FDA regulates the content of all drug promotional materials and activities, whether directed to medical professionals or consumers. These materials and activities may take many forms, as shown in table 1. FDA does not generally regulate the exchange of scientific information, but when such information is provided by or on behalf of a drug company regarding one of the company’s products, the information may be subject to the labeling and advertising provisions of the law and regulations. 
For example, while information provided at CME programs—such as medical conferences and professional gatherings intended to enhance physicians’ knowledge and enable them to meet certain practice requirements—is not generally subject to FDA regulation, it will be if the program has been funded and substantially influenced by a drug company. Similarly, FDA’s position is that companies may respond to unsolicited requests for information from health care professionals, even if responding to requests requires the companies to provide information regarding off-label uses. As of March 2008, DDMAC had the equivalent of 44 full-time staff devoted to overseeing prescription drug promotions. This oversight involves reviews of submitted materials and monitoring and surveillance efforts. The two types of promotional materials submitted to the agency for review are:

Required submissions of final promotional materials: Drug companies are required to submit all final materials associated with promotions to FDA when they are first disseminated to the public. These materials include everything that a drug company may use as part of a promotion, such as print advertisements; professional slides, exhibit panels, and reprints; and Internet promotions. Once submitted to FDA, promotional materials are distributed to DDMAC staff. When a concern is identified, the agency determines whether it represents a violation and merits a regulatory letter.

Voluntary submissions of draft promotional materials: Drug companies have the option of voluntarily submitting draft promotional materials to FDA for advisory review. For example, they may exercise this option before launching expensive promotions, such as a marketing campaign for a new drug or a new television advertisement. For these draft materials, FDA may provide the drug company with advisory comments to consider before the materials are disseminated, particularly if claims are identified that could violate applicable laws and regulations.
As part of its comments, FDA provides guidance to the drug company on how to address the agency’s concerns regarding the promotional materials. FDA supplements its reviews of final and draft material that drug companies submit with monitoring and surveillance efforts. These efforts include attending medical conferences, reviewing drug company Web sites, and following up on complaints received. Once DDMAC identifies a violation—whether it be detected through its review processes or its monitoring and surveillance activities—it makes a determination on whether to pursue regulatory action, by issuing an untitled letter or a warning letter. The warning letter is issued for more serious violations with regulatory significance and may lead to enforcement action if corrections are not made. An untitled letter cites violations that do not meet this threshold. Both types of regulatory letters cite any identified violation and ask the drug company to cease dissemination of the violative promotion and any other promotions with the same or similar claims. A warning letter also goes a step further and requests that the company take action to correct the misleading impression left by the violative promotion. Such action may include issuing a correction in the same media as the original violative promotion or notifying appropriate health care professionals. DDMAC prepares these regulatory letters and, prior to their issuance, OCC reviews and approves them to ensure that the letters are legally sufficient and consistent with agency policy. FDA generally posts regulatory letters on its Web site within several days of issuance. Upon receiving either type of letter, drug companies are requested to send FDA a written response within 10 business days. 
While FDA does not have explicit authority to require drug companies to act upon regulatory letters, when matters raised in these letters, particularly warning letters, are not resolved, the agency may initiate enforcement action through DOJ, which could include seizures of violative products and injunctions prohibiting the company from continuing off-label promotions. In addition, the Food and Drug Administration Amendments Act of 2007 authorizes FDA to impose civil monetary penalties against anyone disseminating false or misleading DTC advertisements, which could include promoting off-label use. OCC provides legal opinions within FDA and participates in both civil and criminal cases, including those related to off-label promotions. FDA’s OCI conducts criminal investigations and may work closely with OCC as well as HHS-OIG and DOJ in conducting off-label investigations. We previously reported on shortcomings in FDA’s oversight of the promotion of prescription drugs in DTC advertising. In 2002 we reported that FDA’s oversight was generally effective but had limitations in halting the dissemination of violative materials or in preventing companies from repeatedly committing violations. We also reported that FDA took increased time to issue regulatory letters, therefore prolonging the time violative materials remained on the market. We recommended that HHS expedite its issuance of regulatory letters to ensure that misleading materials are withdrawn as soon as possible. In 2006, we reported that FDA reviews a small portion of the DTC materials it receives. We also reported that it did not have a process to systematically prioritize its submissions for review. Consequently, we recommended that FDA develop such a process for all of the materials it receives and track which materials it has reviewed—a recommendation we believe remains valid. 
We also reported that FDA was taking longer to issue regulatory letters than it did in 2002 and we stated that the recommendation in our 2002 report—that the agency issue regulatory letters more quickly—remained valid. In May 2008, we updated this work and testified that FDA still did not systematically prioritize its review of all of the DTC materials it receives and thus could not ensure that it was reviewing the highest-priority materials. We also noted that the amount of time it takes to issue regulatory letters has continued to lengthen. The primary mechanism FDA uses to oversee off-label promotions is its review of materials submitted by drug companies. The oversight of off-label promotions occurs within a broad review process meant to detect a wide range of promotional violations—the agency does not have separate activities designed specifically to detect off-label promotion of prescription drugs. DDMAC staff use a process to prioritize their review of submitted materials, but they do not apply this process systematically. In addition, limitations in FDA’s oversight make it unlikely that it is able to detect all off-label violations that occur. For example, FDA lacks a tracking system to manage its review process. FDA also acknowledges that it cannot review all submissions because of the volume of materials it receives and that only a small portion of the required submissions of final promotional materials are examined for potential violations. Although the agency conducts additional monitoring and surveillance to detect violations that could not be identified through a review of submitted materials, the extent and variety of promotional activities make it difficult for FDA to monitor these in a comprehensive manner. The primary mechanism FDA uses to oversee the promotion of drugs for off-label uses is to review promotional materials submitted to the agency by drug companies.
DDMAC staff examine submitted materials for a variety of potential violations simultaneously, such as minimizing the risk of the drug or overstating the safety or effectiveness of the drug, as well as off-label promotions. Although DDMAC staff are tasked with reviewing final versions of materials that are required to be submitted and draft materials voluntarily submitted for advisory review, officials emphasized that advisory review of draft materials is particularly important. They said that this is because the advisory review process encourages voluntary compliance and allows FDA to identify potential violations, including off-label promotion, before materials are disseminated to the public. FDA’s goal is to review all draft materials submitted for advisory review. Consequently, DDMAC staff spend the majority of their time reviewing and responding to these voluntary submissions. DDMAC officials told us that responding to the requests for advisory review can be very time consuming and labor intensive because staff want to ensure that the agency identifies all potential violations during this time. To manage the workload associated with their reviews of final materials that drug companies are required to submit and draft materials submitted for advisory review, DDMAC staff rely on a process to prioritize their reviews that is intended to address those submissions that have the greatest potential to impact public health. DDMAC officials told us that DDMAC’s priorities are regularly updated to reflect changes in agency needs and legal requirements. Currently, it prioritizes its reviews based on whether the promotion involves

1. an apparent, egregious violation;
2. a drug that has undergone recent labeling changes and updates to its
3. a television advertisement disseminated for the first time for a drug or indication, or certain draft promotions that are associated with drugs approved under FDA’s accelerated approval process and that reflect central themes from a company’s promotion;
4. new promotional campaigns that reflect central themes from the company’s promotion;
5. other television advertisements and other draft campaigns submitted under the accelerated approval process;
6. other new promotional campaigns; and
7. other issues of concern.

DDMAC officials acknowledged that this process for prioritizing its reviews is not systematically applied to all of the materials it receives. Absent a systematic approach, DDMAC staff sort through large volumes of materials submitted and use the process to review as many submissions as possible. During their reviews of both final and draft materials, staff may use their clinical knowledge about a particular type of drug and its history to help determine if a submission contains an off-label promotion. DDMAC staff are organized into therapeutic review groups by drug category, such as allergy medications, to maximize individual knowledge about specific drugs and the marketing issues related to those drugs. Staff are assigned promotional materials based on their therapeutic review group. DDMAC officials told us that this organization allows staff to develop familiarity with certain types of drugs, making them knowledgeable about information in the approved labeling and better able to identify off-label promotions. In addition to its reviews of submitted materials, FDA also engages in monitoring and surveillance efforts. These efforts are intended to detect violations that could not be identified through FDA’s reviews—such as violative oral statements made by sales representatives in discussions with physicians. These efforts may also identify violations that may be missed by FDA’s review of submitted materials.
As part of their monitoring and surveillance efforts, DDMAC and other FDA staff may attend educational events, such as CME programs, to monitor for inappropriate promotions. For example, an FDA official attending a CME conference might obtain a brochure discussing off-label use that should have been submitted to the agency but never was. AMA and ACCME officials acknowledged that even though there are safeguards built into the CME accreditation process to ensure presenter independence and CME compliance with FDA regulations, violations may still occur. FDA’s monitoring and surveillance efforts also include reviewing and following up on complaints it receives. These may be submitted by a drug company’s competitors, health care providers, consumers, and former drug company personnel who have knowledge about violative promotions. DDMAC officials said that these complaints may inform FDA of potentially inappropriate oral promotions and also provide a backup system for identifying violations that may be on submitted materials that FDA never examined. It is unlikely that FDA can detect all off-label promotion that occurs because of limitations in its oversight process for reviewing the promotion of prescription drugs. FDA’s oversight is hampered by the lack of a system or process that consistently tracks its receipt and review of submitted materials. For example, DDMAC does not track the number of drafts it receives for advisory review. Despite its goal of reviewing all such submissions, DDMAC is unable to do so because, as officials explained, some drug companies release their promotions before they receive FDA’s advisory comments. However, DDMAC does track the number of letters it issues in response to the draft submissions staff are able to review. Conversely, DDMAC tracks the number of final submissions it receives but does not track the number of the final submissions staff review.
In 2006, GAO recommended that FDA track which materials it has reviewed, but the agency has not taken action to address this recommendation. For example, DDMAC officials could not provide us with information on the prevalence of off-label promotions among materials reviewed, the time it takes to complete reviews, or the status of their reviews. DDMAC officials said that obtaining this type of information is not currently possible due to the design of existing systems. As these are the issues that led us to our 2006 recommendation, we believe that this recommendation remains valid. In addition, DDMAC officials told us that they receive substantially more materials than the agency can review. FDA received approximately 277,000 final promotional materials that drug companies were required to submit during calendar years 2003 through 2007. As shown in figure 1, FDA has received a steadily increasing number of final promotional materials during this time—the annual number increased from just over 40,000 in 2003 to over 68,000 in 2007. DDMAC officials generally attribute this growth to increases in DTC advertising as well as to the increase in materials that drug companies are using to promote more complex new drugs. DDMAC and other FDA officials acknowledge that it is very difficult, if not impossible, for FDA’s supplementary monitoring and surveillance efforts to identify all off-label promotion that may occur. This is because inappropriate promotion can take many forms and occur in a myriad of places. For instance, DDMAC and other FDA staff attend only a small number of the thousands of CME programs that occur each year. FDA is further challenged by the possibility that off-label promotional material, unrelated to a CME presentation, may be available to participants at nearby exhibition booths that drug companies often sponsor in conjunction with CME events.
Although drug companies are required to submit such material to FDA for review, they might not do so, or FDA might not review these materials until the conference or activity is completed. DDMAC officials told us that they consistently follow up on all complaints received as part of their monitoring and surveillance efforts, including those related to off-label promotion. According to DDMAC officials, FDA received and investigated an average of 150 complaints annually on possible promotional violations from 2003 through 2007. However, they could not provide us with data on the total number of their monitoring and surveillance efforts because this information is not tracked. FDA’s monitoring and surveillance efforts are further complicated by difficulties in assessing the merits of potential violations and the validity of complaints received. For example, according to FDA officials, the agency does not have sufficient authority to gather the key evidence necessary to determine whether educational activities are independent of the influence of drug companies. For instance, DDMAC may not be able to determine whether a speaker at a CME event has been paid by the drug company to promote a drug for off-label uses. In such instances, DDMAC officials told us that they may work with other agencies, such as HHS-OIG and DOJ, which have the investigative tools, such as subpoena authority, necessary to pursue these cases. Similarly, complaints can be difficult to validate. For example, a physician may complain to FDA about promotional material that was shown during a sales visit, but FDA staff may not be provided or have access to the material and therefore may be unable to determine if its use was violative. In addition, because FDA allows the exchange of information upon a request from a physician, it may be difficult to determine whether information a sales representative provided orally to a physician was unsolicited.
Without physicians’ complaints, however, FDA would be unaware of these violative conversations. FDA depends not only on a physician’s initiative to make a complaint but also on the physician’s knowledge of when such a conversation is inappropriate. FDA and DOJ have taken regulatory and enforcement actions against drug companies for violative off-label promotions. During calendar years 2003 through 2007, FDA issued 42 regulatory letters—23 warning letters and 19 untitled letters—in response to off-label promotions. However, it took FDA an average of about 7 months to issue these letters, during which time violative material remained in the market. Most of the off-label promotional violations cited in those regulatory letters were identified through FDA’s review of required drug company submissions. The promotional violations typically were targeted toward physicians and other medical professionals. According to DDMAC officials and our own analysis of correspondence between drug companies and FDA, drug companies have generally complied with the directives in these letters, but may not have always done so in a timely manner. For example, it took drug companies receiving warning letters, which are issued in response to the more serious violations, an average of 4 months to take corrective action. According to DDMAC officials, they did not refer any violations to DOJ for enforcement action during 2003 through 2007. However, DOJ initiated civil and criminal enforcement actions in response to instances involving off-label promotion it identified from other sources. DOJ actions resulted in 11 settlements with drug companies that dealt, at least partially, with off-label promotion. While none of these was initiated by DDMAC, entities within FDA were ultimately involved in their resolution. Overall, FDA issued 117 regulatory letters for promotional violations during calendar years 2003 through 2007.
However, according to DDMAC officials, there were more identified violations than those for which FDA issued regulatory letters because FDA prioritizes violations. Specifically, they said that FDA’s first priority is to issue warning letters because they generally address the most serious violations. For less serious violations—those involving untitled letters—these officials said that the issuance of such letters may be delayed, depending on the agency’s workload. Our analysis of the 117 regulatory letters indicates that off-label promotion was the third most common violation, cited in 42, or approximately 36 percent, of the regulatory letters, as shown in table 2. Our analysis of the 42 regulatory letters citing off-label promotion indicates that review of submissions was the primary manner in which FDA identified off-label promotion. Specifically, for 31 of these letters, or 74 percent, FDA identified at least one violative promotion through its review of required submissions of final promotional materials. Fourteen letters indicate that FDA identified at least one violative promotion through monitoring and surveillance activities. For more information on the off-label promotions cited in the 42 letters, see appendix I. Half of the promotions cited in the 42 regulatory letters were targeted toward physicians and other medical professionals. Our analysis showed that 21 of the 42 off-label regulatory letters were issued in response to off-label promotions that included materials such as professional journal ads and exhibit panels, which solely targeted physicians and other medical professionals. Seven letters were issued in response to promotions directed solely to consumers, such as DTC magazine, television, or radio advertisements. The remaining 14 letters addressed promotions directed toward both medical professionals and consumers, such as product Web sites, as shown in figure 2.
[Figure 2. Promotion targets cited in the 42 off-label regulatory letters: consumers only (7); medical professionals and consumers (14); medical professionals only (21).] Our analysis of FDA documents related to the 42 regulatory letters citing off-label promotion indicated that it took FDA an average of about 7 months to issue the letters after DDMAC staff first drafted the letters. For example, on March 7, 2006, FDA drafted a warning letter to Alcon, Inc. for off-label promotion, among other things. Over 7 months later, on October 20, 2006, FDA issued the letter. In 2002, GAO recommended that the agency issue regulatory letters more quickly. Because violative materials remain in circulation prior to the issuance of related regulatory letters, the length of time it takes FDA to issue these letters limits their effectiveness. As these are the issues that led us to our 2002 recommendation, we believe that this recommendation remains valid. According to DDMAC officials, drug companies sent FDA written responses to the regulatory letters, and in most instances, they ceased dissemination of identified violative materials upon receipt of a regulatory letter. However, DDMAC officials noted that there were occasions when they engaged in extensive discussions with drug companies that challenged the agency’s assessment of a violation or the action requested in the regulatory letter. For example, a drug company may seek to negotiate with FDA in order to avoid having to take corrective actions, such as retracting an expensive DTC advertisement. DDMAC officials told us that during calendar years 2003 through 2007, FDA did not have to reverse any of its regulatory letter decisions as a result of such negotiations.
Although FDA cannot ensure that a drug company has ceased dissemination of all violative materials related to a regulatory letter, it obtains a company’s written agreement to stop dissemination of such materials, ensures that the list of materials a company is to stop disseminating is comprehensive, and reviews any new material submitted by the company for 6 months after issuance of a regulatory letter. Twenty-three of the 42 off-label regulatory letters issued were warning letters, which, according to DDMAC officials, are issued for more serious violations than those cited in untitled letters. Ultimately, they said all but one company—which was issued a warning letter on May 25, 2007, and remained in negotiations with FDA as of April 22, 2008—had taken the necessary action requested in these warning letters. Consequently, DDMAC did not refer any violations regarding off-label promotions to DOJ for enforcement action. However, corrective action may not have always occurred in a timely manner. Our review of FDA documentation related to the 23 warning letters showed that it took drug companies an average of 4 months to implement corrective action from the time FDA issued the regulatory letter. For example, on September 14, 2006, FDA issued a warning letter to Reliant Pharmaceuticals, Inc. for, among other things, off-label promotion of its drug Rythmol SR. Following the company’s formal response letter on September 29, 2006, FDA and Reliant Pharmaceuticals, Inc. participated in at least three teleconferences and FDA wrote two letters in response to Reliant’s proposed corrective action. Over 7 months after the letter was issued, the drug company disseminated the first set of corrective materials on April 17, 2007. While DDMAC officials told us that drug companies have generally complied with FDA requests in the 42 regulatory letters, such letters do not prevent drug companies from repeatedly disseminating violative promotional materials. 
Our analysis of the 42 regulatory letters showed that for 11 of the 42 drugs cited in those letters for off-label promotion, FDA had issued regulatory letters citing off-label promotion in the past, as shown in table 3. For example, on March 18, 2004, Wyeth Pharmaceuticals was issued an untitled letter citing off-label promotion, among other things, for its drug Effexor XR. Prior to that letter, FDA had issued two other regulatory letters for off-label promotion of Effexor XR and Effexor, a related drug, on October 11, 2000, and June 25, 1997, respectively. Additionally, for another 2 of the 42 drugs, FDA had prior communications with the drug companies about off-label promotion concerns. According to DDMAC officials, they did not refer any violations to DOJ for enforcement action during calendar years 2003 through 2007 because drug companies have generally complied with requests made in FDA’s regulatory letters during that time period. However, in the same time period, DOJ pursued a number of alleged violations in response to off-label promotion that it identified from other sources. Specifically, DOJ enforcement action resulted in 11 settlements with drug companies, which involved, at least partially, allegations of off-label promotion and resulted in, among other things, a monetary settlement. These settlements involved the types of promotional practices that are most difficult for FDA to identify, such as violative discussions between physicians and drug company sales representatives. For example, at least 3 of the settlements involved specific allegations of off-label promotion between sales representatives and physicians. For more information on the alleged actions by drug companies, see appendix II. The resulting monetary settlements ranged from almost $10 million to over $700 million.
For example, in September 2007, Bristol-Myers Squibb Company agreed to pay over $500 million for, among other things, promoting its drug Abilify—approved to treat schizophrenia and bipolar disorder—for pediatric use and for the treatment of dementia-related psychosis. In this instance, DOJ alleged that Bristol-Myers Squibb Company created a group of salespeople to target nursing homes where dementia is much more prevalent than schizophrenia or bipolar disorder. See table 4 for a summary of the 11 settlements negotiated by DOJ. FDA had previously taken action against the drug companies with which DOJ reached settlements. We reviewed regulatory letters that FDA issued to drug companies from calendar years 1997 through 2007 for the same 12 drugs cited in the 11 settlements. This review indicated that, since 1997, FDA had identified promotional violations and issued one or more regulatory letters to drug companies for 7 of the 12 drugs. Of these 7 drugs, drug companies received regulatory letters for 5 drugs that cited off-label promotion. For 1 of these 5 drugs, the drug company received an FDA regulatory letter in June 2001 citing off-label promotion that was directly linked to the settlement. In response to the letter, the drug company assured FDA that the cited violation was an isolated incident. In the 2006 settlement, the company agreed, among other things, to plead guilty to criminal conspiracy to make false statements to FDA regarding its promotion cited in the 2001 regulatory letter. Specifically, the company acknowledged in the settlement that it knowingly misled FDA by claiming the violation was an isolated incident instead of a nationwide campaign. The regulatory letters FDA issued to drug companies for the other 4 drugs cited companies for off-label promotions that were not cited as the basis for the settlement. 
For example, for 1 of these 4 drugs, FDA issued an untitled letter to the drug company in September 2000, citing off-label promotion in a submitted DTC television advertisement. The related December 2005 DOJ settlement, however, was in response to off-label promotion conducted by the drug company’s sales representatives and not the DTC advertisement cited in the FDA letter. Table 5 provides information on the 12 drugs cited in the 11 settlements for off-label promotion and any prior regulatory letters issued by FDA. While DDMAC did not refer the violations to DOJ that resulted in the 11 settlements, it participated in their resolution. Specifically, DDMAC officials told us that they provided input to DOJ, such as information on whether the matter promoted off-label use or was otherwise violative, as well as opinions on the seriousness of the violation. Similarly, FDA’s OCC and OCI participated in almost all of the investigations by providing legal counsel and conducting criminal investigations, respectively. One or both of these offices were involved in all 11 settlements. In many of those instances, FDA became involved at DOJ’s request and remained involved from the preliminary investigation through the final settlement. FDA’s OCC and OCI officials told us that these investigations can be long term and very resource intensive. According to an FDA official, FDA is currently working on approximately 40 investigations regarding off-label promotion. HHS reviewed a draft of this report and provided comments, which are reprinted in appendix III. HHS’s comments focused on our discussion of FDA’s process for prioritizing and tracking promotional materials submitted by drug companies for review. First, HHS raised concerns with our finding that DDMAC staff do not systematically prioritize all of the materials they receive.
HHS stated that DDMAC staff apply prioritization criteria systematically to, among other things, the advisory submissions they receive. In addition, HHS stated that DDMAC staff also use criteria to determine which of the submissions of disseminated materials—that is, those final materials submitted for review—should be examined. However, we found no evidence that FDA systematically prioritizes all of the submissions it receives. We found that DDMAC staff do not screen all of the tens of thousands of final promotional materials they receive per year to determine which ones need to be reviewed. This means that FDA is not systematically applying its prioritization criteria to the majority of submissions the agency receives. We recognize that the volume of materials FDA receives presents a challenge for completing a detailed review of each submission, but without a systematic application of its criteria to screen submissions, it cannot be certain that it is reviewing the highest-priority materials or that violative materials are not being circulated. Applying the current criteria to the submissions DDMAC staff review, even if done consistently, is not the same as systematically screening all submissions in order to determine which ones should be reviewed. Second, HHS commented that a tracking system would not improve the agency’s ability to identify promotional violations nor would it change which submissions are actually reviewed. HHS said that such a system would not enable DDMAC to more efficiently regulate off-label promotion. We disagree. We continue to believe that, as we recommended in 2006, a tracking system would facilitate a more systematic approach to DDMAC’s reviews, would allow FDA to more readily group materials for review, and could enhance its monitoring and surveillance efforts by providing data on materials reviewed and the findings of those reviews. In short, a simple tracking system would provide key information for managing the program. 
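To make concrete what a "simple tracking system" of the kind described above could capture, the following sketch records basic facts about each submission and summarizes review coverage. This is a hypothetical illustration only; the record fields, identifiers, and figures are assumptions for this example and do not represent FDA's actual data or systems.

```python
# Hypothetical sketch of a minimal submissions-tracking record and a
# coverage summary. All fields and example data are illustrative
# assumptions, not FDA's actual data model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    submission_id: str
    date_received: str       # ISO date, e.g. "2007-03-15"
    kind: str                # "final" or "draft"
    reviewed: bool = False
    violation_found: Optional[bool] = None  # None until reviewed

def review_coverage(submissions):
    """Summarize how many submissions were received, reviewed,
    and found to contain violations."""
    reviewed = [s for s in submissions if s.reviewed]
    violations = [s for s in reviewed if s.violation_found]
    return {
        "received": len(submissions),
        "reviewed": len(reviewed),
        "violations": len(violations),
    }

# Illustrative log of three submissions, two of which were reviewed.
log = [
    Submission("S-001", "2007-01-10", "final", reviewed=True, violation_found=False),
    Submission("S-002", "2007-02-03", "final", reviewed=True, violation_found=True),
    Submission("S-003", "2007-02-21", "draft"),
]
print(review_coverage(log))  # {'received': 3, 'reviewed': 2, 'violations': 1}
```

Even a record this simple would yield the management data GAO describes: the share of submissions actually reviewed and the findings of those reviews.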
HHS did not comment on our reiteration of our 2002 recommendation that the agency issue regulatory letters more quickly. HHS also provided technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Secretary of HHS, the Commissioner of FDA, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

[Appendix I table: drug companies, approved conditions, and off-label promotions cited in the 42 regulatory letters. Companies listed include Purdue Pharma L.P.; Hoffmann-La Roche, Inc.; Gilead Sciences, Inc.; Amgen, Inc.; Actelion Pharmaceuticals US, Inc.; Pfizer, Inc.; Reliant Pharmaceuticals, Inc.; BioMarin Pharmaceuticals, Inc.; Alcon Research, Ltd.; Ligand Pharmaceuticals, Inc.; Cephalon, Inc.; Alcon Laboratories, Inc.; DUSA Pharmaceuticals, Inc.; Allergan, Inc.; Pharmaceuticals, Inc.; Santen, Inc.; Genpharm, Inc.; and Takeda Pharmaceuticals North America, Inc. Approved conditions and cited off-label promotions recoverable from the table include:
- Treatment of persistent or recurrent cutaneous T-cell lymphoma in particular patients (Ontak) and treatment of cutaneous manifestations of cutaneous T-cell lymphoma in certain patients (Targretin); cited off-label promotion: treatment of T-cell lymphoma in a broader patient population (Ontak and Targretin).
- Treatment of eye infections caused by specific microorganisms in the conditions of corneal ulcers and conjunctivitis (Ciloxan); cited off-label promotion: treatment of otitis media and otitis externa (Ciloxan).
- Acute treatment of migraine headaches with or without aura and the acute treatment of cluster headache episodes (D.H.E. 45); cited off-label promotion: treatment of status migrainosis or intractable migraine (D.H.E. 45).
- Treatment of major depressive disorder (Effexor XR); cited off-label promotion: treatment of normal periodic feelings of low interest or energy (Effexor XR).
- Treatment of actinic keratoses in combination with cryotherapy.
Table notes: In one case, off-label promotional activities were cited for two different drugs; therefore, information on the approved condition and the off-label promotion cited for both drugs is presented. In another case, violative promotional activities were cited for two different drugs, but off-label promotion was cited for only one of these drugs; only information on the approved condition and the off-label promotion cited for that drug is presented.]

[Appendix II table: alleged actions by drug companies in the 11 DOJ settlements, including the following:
- Encouraged sales representatives to provide one-on-one sales pitches to physicians about off-label uses.
- Sponsored “independent medical education” events on off-label uses and misled the medical community on the content and lack of independence.
- Trained sales representatives to prompt or bait questions by physicians to promote the drug for off-label uses.
- Encouraged sales representatives to send medical letters and other marketing materials that were not requested by physicians in order to promote off-label uses.
- Conspired with a medical device manufacturer to market computer software packages to diagnose AIDS-wasting, although the device was not approved by FDA for this use. The drug company then tried to increase the market for such devices in order to increase the market for the drug.
- Offered physicians all-expense-paid trips to encourage off-label prescriptions.
- Conspired to make false statements to FDA regarding its improper promotional activity in response to FDA’s inquiry regarding certain illegal promotional activities by the company’s sales representatives at a national medical conference for oncologists. These false statements were designed to reassure FDA that the promotional activities were isolated and not directed by the home office, when they were actually widespread and part of the national marketing plan.
- Conducted a clinical trial that failed to establish statistically significant evidence of benefit, but published press releases indicating false outcomes from the clinical trials.
- Conducted sales of the drug from August 2002 through January 2003 that were attributable to the prescribing of the drug for the treatment of idiopathic pulmonary fibrosis, an off-label use.
- Promoted the drug for off-label uses, such as anti-aging, cosmetic use, and athletic performance enhancement.
- Falsely marketed the drug to physicians by suggesting that it was FDA approved for treating a different type of cancer than it was approved for and was listed as medically accepted in the compendia for treating other types of cancers.
- Used illegal kickbacks to induce physicians to prescribe the drug and paid them to attend dinners or conferences on off-label uses.
- Targeted pediatricians and urged them to use the drug as a treatment for diaper rash—the drug is approved as a fungicide and not for treating children under 10 years of age.
- Promoted the drug as less addictive, less subject to abuse, and less likely to cause withdrawal symptoms than other pain medications without FDA approval.
- Made sales calls to physicians who did not specialize in the area that the drug was approved for, promoted the drug for off-label treatments, and distributed off-label promotional materials.
- Paid a psychiatrist to give talks around the country to promote the drug for off-label uses.
- Promoted the sale of the drug for pediatric use and dementia-related psychosis, both off-label uses.]

In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Cathy Hamann; Mollie Hertel; Julian Klazkin; Michaela M. Monaghan; and Pauline Seretakis made key contributions to this report.
| The Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), regulates the promotion of prescription drugs to ensure that promotional materials are not false and misleading and that they comply with applicable laws and regulations. Among other things, FDA prohibits drug companies from promoting drugs for off-label uses--that is, for a condition or patient population for which the drug has not been approved or in a manner that is inconsistent with information found on the approved drug label. Although doctors may prescribe drugs off label, it is not permissible for drug companies to promote drugs for off-label uses. FDA may take regulatory actions for violations, and may also pursue enforcement action through the Department of Justice (DOJ). GAO was asked for information about the promotion of drugs for off-label uses. GAO reviewed (1) how FDA oversees the promotion of off-label uses of prescription drugs and (2) what actions have been taken to address off-label promotions. GAO examined documentation related to the promotion of drugs for off-label uses and FDA correspondence with drug companies on identified violations and obtained information from DOJ on relevant actions. GAO also interviewed officials at FDA and the HHS Office of Inspector General and representatives of national medical and pharmaceutical associations. FDA oversees drug promotion for off-label uses by reviewing promotional materials that drug companies submit to the agency. However, because FDA does not have separate oversight activities to specifically capture off-label promotion, its oversight occurs within a broader process that targets a variety of promotional violations. Furthermore, FDA reports it is unable to review all submissions because of the volume of materials it receives and prioritizes its reviews in order to examine those with the greatest potential impact on human health. 
However, FDA does not prioritize its reviews in a systematic manner but rather relies on its staff to sort through large volumes of material and select submissions for review. FDA is also hampered by the lack of a system that consistently tracks the receipt and review of submitted materials. To address these shortcomings, GAO recommended in 2006 that FDA track which materials it has reviewed. FDA has not acted on this recommendation and still lacks a standardized tracking system to monitor its review efforts. GAO believes that this recommendation remains valid. In addition to its reviews, FDA conducts monitoring and surveillance to identify violations that would not be identified through its review of submitted material--for instance, discussions between doctors and sales representatives. These efforts are also limited because FDA cannot observe all off-label promotion activities as they can take many forms and occur in a myriad of places. FDA and DOJ have taken regulatory and enforcement actions against drug companies in response to off-label promotions. During calendar years 2003 through 2007, FDA issued 42 regulatory letters in response to off-label promotions requesting drug companies to stop dissemination of violative promotions. FDA took an average of 7 months to issue these letters from the time it first drafted them. In addition, drug companies that were cited for more serious violations took an average of 4 months to take the corrective actions requested. While FDA did not refer any of these violations to DOJ for enforcement action, during calendar years 2003 through 2007, DOJ settled both civil and criminal cases that involved, at least partially, off-label promotion. These actions were initiated as a result of violations identified by sources other than FDA and resulted in 11 settlements. 
In commenting on a draft of this report, HHS raised concerns with GAO's assessment that FDA does not systematically prioritize all of the promotional materials it receives. It also stated that a tracking system would not improve the agency's ability to identify promotional violations. GAO found that FDA does not screen all promotional materials. GAO continues to believe that a tracking system would help ensure that staff screen all material received, facilitate a more systematic approach to FDA's reviews, and help the agency manage the program. |
IRS initiated the IRDM program, in part, to implement new information reporting requirements, but more generally to increase voluntary compliance with tax laws by expanding and maximizing IRS’s ability to match existing and future information returns with tax return data and establishing a new business information matching program. Previously, IRS had only matched information returns to individuals’ and sole proprietors’ tax returns. Under IRDM, IRS plans to build several new IT systems and enhance some existing systems as well as implement numerous organizational and process changes. IRS plans for IRDM to use information returns to identify individual and business tax returns that are likely sources of revenue, which the current individual tax return matching system is not designed to identify. IRDM implementation is led by IRS’s Small Business/Self Employed division and MITS, which is leading the IRDM IT system development. Cost estimates are a vital factor for sound management decision making and they aid in the formation of a project’s budget. IRS uses cost estimates, in part, to justify budget requests and prioritize the selection of IT projects for possible funding. After an IT project is approved, the cost estimate is later used as a starting point for developing the performance measurement baseline for EVM, a project management approach that, if implemented appropriately, provides management important tools such as objective reports of project status and early warning signs of impending schedule delays and cost overruns. Data from a reliable performance management system, such as EVM, are necessary inputs for an updated cost estimate, among other things. OMB issued guidance on managing IT projects, which discusses cost estimation and refers to our cost guide for how to meet cost estimating requirements. OMB guidelines state that cost estimates should be continuously updated based on the latest information available to ensure that they are current, accurate, and valid.
According to our cost guide, effective program and cost control requires ongoing revisions to the cost estimate, budget, and projected estimates at completion. Specifically, our guide states that estimates should be continuously updated with actual costs incurred to that point so that significant cost, schedule, or performance variances can be examined. In addition, it says that cost estimates should be updated to reflect significant changes to a project’s scope or specifications and when certain projects approach key milestones. Within MITS, project managers and EPO are involved in estimating program costs. EPO is an independent group of cost estimation experts that assists project teams by developing and updating cost estimates using a standard documented process. Project managers are responsible for maintaining a program’s cost estimate. EPO only becomes involved in updating a program’s cost estimate at the request of project managers, according to EPO officials. IRS procedures for developing, using, and updating cost estimates and EVM are described in several guidance documents, specifically: EPO’s Estimator’s Reference Guide, which is used by EPO staff, is the general resource on the processes and procedures for developing and delivering IT cost estimates. It discusses the technical aspects of updating cost estimates, such as what documents are used in cost modeling once a project has begun. IRS’s Information Technology Investment Planning and Management Guide (Investment Guide) outlines the framework for selecting, managing, and evaluating IRS IT projects. The Investment Guide includes discussions of how IT projects are selected using cost information, and how managers should use cost information to monitor a project. Project managers are responsible for managing cost, schedule, and performance for a project.
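The early-warning signals that EVM provides, as described above, come from a handful of standard earned value calculations. The sketch below illustrates the generic EVM formulas; the dollar figures are hypothetical and are not drawn from the IRDM program or IRS guidance.

```python
# Illustrative sketch of the standard earned value management (EVM)
# status indicators. The formulas are the generic EVM ones; the input
# figures below are hypothetical, not IRDM data.

def evm_metrics(budget_at_completion, planned_value, earned_value, actual_cost):
    """Return basic EVM indicators for a project to date (all in $ millions)."""
    cost_variance = earned_value - actual_cost        # negative -> over cost
    schedule_variance = earned_value - planned_value  # negative -> behind schedule
    cpi = earned_value / actual_cost                  # cost performance index
    spi = earned_value / planned_value                # schedule performance index
    # One common estimate at completion: total budget scaled by cost efficiency.
    estimate_at_completion = budget_at_completion / cpi
    return {
        "CV": cost_variance,
        "SV": schedule_variance,
        "CPI": round(cpi, 2),
        "SPI": round(spi, 2),
        "EAC": round(estimate_at_completion, 1),
    }

# Hypothetical project: $52 million budget, $20 million of work planned
# to date, $18 million of work actually performed, $24 million spent.
status = evm_metrics(52.0, 20.0, 18.0, 24.0)
print(status)
```

In this hypothetical case, the negative variances and a cost performance index below 1.0 are exactly the kind of early warning signs of impending schedule delays and cost overruns that an appropriately implemented EVM system gives managers, and the estimate at completion feeds back into an updated cost estimate.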
MITS’s Estimation Procedures document describes IRS’s organizational approach to cost estimation, applicable to all IRS projects. The document is directed at project managers and it includes discussions of the steps and staff roles necessary to develop an estimate, and the circumstances when EPO typically becomes involved with updating a cost estimate. The document states that project managers are responsible for monitoring project progress and suggests initiating assistance from EPO if a project meets certain thresholds. The Department of the Treasury’s Earned Value Management Guide provides guidance for implementing EVM on major IRS projects. Program or project managers have the ultimate responsibility for implementing and monitoring the EVM system for their program or project. IRS developed two cost estimates for IRDM early in the program. In 2007, IRS developed a preliminary cost estimate for budgetary purposes when very little program information was available (referred to in this report as the 2007 preliminary cost estimate). As shown in table 1, in 2007, IRDM system development was estimated to cost about $5 million in fiscal year 2009 and about $23 million per year thereafter. In 2009, EPO developed a solution concept-based estimate (SCBE, referred to in this report as the 2009 SCBE), which was more rigorous than the 2007 preliminary estimate. The 2009 SCBE was developed before program implementation began, when MITS had more information than it did in 2007, but system design plans were still under development. The 2009 SCBE was about $36 million less through the first 4 years of the project than the 2007 preliminary estimate. IRS used the 2007 preliminary estimate to justify the initial IRDM budget. According to a MITS official, the 2009 SCBE was not used for budgetary purposes because program specifications were undergoing modification in fiscal year 2010, requiring the full $23-million per-year funding. 
For example, in 2010, IRS made several changes to the complexity of the IRDM program, which included dividing it into four projects. One of the projects required restructuring as a separate development effort, using different resources and technologies. Changes also necessitated using new software development methods not previously used within IRS, and additional contracting support. According to MITS officials, such changes increased IRDM funding needs above the amounts supported by the 2009 SCBE. As a result, IRS did not revise its initial funding request for future fiscal years using the SCBE and instead relied on the 2007 preliminary estimate. From fiscal years 2009 through 2011, IRS received about $52 million in total funding for IRDM, of which the IRS had spent $46 million (when accounting for $2.6 million carried over from fiscal year 2009 funds and $5.8 million carried over from fiscal year 2010) through fiscal year 2011, as shown in table 2. In our May 2011 report, we assessed the 2009 SCBE because it was the most rigorous IRDM cost estimate available at the time and the 2007 preliminary estimate lacked documentation for a complete review. We found that the 2009 SCBE did not fully follow best practices. We recommended that if IRS updated the cost estimate, it should follow best practices from our cost guide. In response to our report, IRS said it would update the 2009 SCBE. IRS subsequently decided not to revise the estimate because, according to officials, they already have a plan, schedule, and funding, the program is not over budget, and the risks associated with IRDM, and the program's size, do not warrant an update. Over the summer of 2011, MITS provided us with additional cost information. Officials referred to these documents as IRDM's new cost estimate, and they were used in supporting IRS's fiscal year 2012 IRDM budget. Consequently, in this report, we refer to the materials provided to us as the 2011 cost estimate.
This estimate was not an SCBE or developed by EPO. The 2011 cost estimate was based on several data sources, including IRDM's Exhibit 300, EVM data, spend plans, and schedule with work breakdown structures (WBS). MITS projects IRDM to cost $115 million for fiscal years 2012 through 2016, or about $23 million per year. According to best practices established by our cost guide, a cost estimate should be comprehensive, well documented, accurate, and credible. We assessed the 2011 cost estimate against cost estimation best practices because IRS told us the estimate was used to support its budget requests for fiscal year 2012 and beyond. While the 2011 IRDM cost estimate shows some characteristics of a reliable cost estimate, it does not fully meet best practices. The estimate partially meets best practices for a comprehensive cost estimate, as shown in figure 1. It reflects the current program schedule and contains information about the program's technical characteristics. The estimate provided some details about costs for IRDM's fiscal year 2012 budget request, but the cost estimate does not cover the program's entire life-cycle. Without fully accounting for life-cycle costs, management may have difficulty successfully planning program resource requirements and making informed resource-planning decisions. IRS defined assumptions used to estimate some IRDM costs, but did not provide the assumptions used to estimate labor or program operations costs. Furthermore, IRS did not include ground rules used to develop the estimate. Unless ground rules and assumptions are clearly defined, the cost estimate will not have a basis to identify and mitigate areas of potential risk. The estimate minimally meets best practices for a well documented cost estimate, as shown in figure 2. IRS provided supporting information for some staff resources, but detailed data for the staffing level requested for fiscal year 2012 were missing.
The cost estimate documentation says that the labor cost justification was captured in the resource-loaded project schedules, but we found that these schedules only justified about 6 of the 86 requested full-time equivalent (FTE) staff for IRDM. Furthermore, although IRS officials cited the WBS as the basis for cost projections, we found no evidence linking the WBS to cost. Multiple documents linked software and hardware specifications to cost, but they did not provide consistent cost information. As a best practice, documentation should describe the source data used and the estimating methodology, and show step-by-step how the estimate was developed. Without a well documented cost estimate, the program's credibility may suffer because the documentation cannot explain the rationale of the methodology or the calculations underlying the cost elements. The estimate minimally meets best practices for an accurate cost estimate, as shown in figure 3. Calculations in the estimate are mathematically correct. However, documentation that IRS provided to support IRDM's estimated hardware and software costs did not match the estimates in IRDM's spend plans. IRS officials said the discrepancies occurred because the spend plans were developed using more recent cost information for software purchases that was not included in the supporting documentation. Additionally, the estimate does not list any confidence levels or provide a range of possible costs. According to best practices, unless an estimate is based on an assessment of the most likely costs and reflects the degree of uncertainty given all of the risks considered, management will not be able to make informed decisions. IRDM uses EVM to identify variances between planned and actual costs, but as discussed below, we found IRDM's EVM data to be unreliable, and there was no evidence that IRS uses actual cost data to evaluate whether cost projections are realistic.
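To illustrate what confidence levels and a range of possible costs add to a point estimate, here is a minimal Monte Carlo sketch. The cost elements and dollar ranges are invented for illustration and are not IRDM data.

```python
# Hypothetical sketch of how an estimate can carry confidence levels:
# simulate uncertain cost elements and read totals at chosen percentiles.
import random

random.seed(1)

# (low, most_likely, high) dollar ranges per cost element -- invented numbers
elements = {"labor":    (8e6, 10e6, 14e6),
            "hardware": (2e6, 3e6, 5e6),
            "software": (4e6, 6e6, 9e6)}

def simulate(n=10_000):
    totals = []
    for _ in range(n):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in elements.values()))
    return sorted(totals)

totals = simulate()
for pct in (50, 80):
    print(f"{pct}% confidence: ${totals[int(len(totals) * pct / 100)]:,.0f}")
```

A decision maker can then fund at, say, the 80 percent confidence level rather than at an unqualified point estimate.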
The estimate does not meet best practices for a credible cost estimate, as shown in figure 4. For example, the estimate was not crosschecked or assessed for risk and uncertainty. According to best practices, an estimate without risk and uncertainty analysis can be unrealistic because it does not assess how the cost estimate would be affected if, for example, the schedule slipped, the mission changed, or a proposed solution did not meet users’ needs. In addition, IRS did not perform a sensitivity analysis. Further, there is no evidence that another office performed a separate cost estimate—referred to as an independent cost estimate—to validate the 2011 cost estimate. In previous work we found that because of limited resources, IRS generally only does an additional independent cost estimate for its largest programs, and according to officials, IRDM is not considered a large enough program in terms of its funding level. While some of MITS’s cost estimates are done by EPO— which is independent of program management offices—the 2011 cost estimate was done by the IRDM program office. Because the 2011 cost estimate does not meet best practices, it does not provide reliable support for IRDM’s fiscal year 2012 budget request, or any of the projected budget requests. IRS officials said current IRS policy does not require projects to routinely re-estimate project cost. The IRDM program office—which does not use the same software or modeling techniques as EPO—relied on spend plans, EVM, and other documents to estimate costs. In July 2011, MITS officials said that it would take 90 days for IRDM and EPO staff to complete a new cost estimate for IRDM. When considering FTEs and time, a new cost estimate developed by IRDM and EPO staff would require a total of about eight staff months. EPO has specialized cost estimation tools, such as software that incorporates many best practices from our cost guide, and expertise that project teams can leverage to update cost estimates. 
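A sensitivity analysis of the kind best practices call for can be sketched as a one-factor-at-a-time exercise. The cost model, driver names, and numbers below are invented for illustration.

```python
# Illustrative one-factor-at-a-time sensitivity analysis (invented model
# and numbers): vary each cost driver by +/-20% and observe the total's swing.
def total_cost(labor_rate=150.0, hours=60_000, sw_licenses=2.5e6,
               risk_reserve=0.10):
    base = labor_rate * hours + sw_licenses
    return base * (1 + risk_reserve)

baseline = total_cost()
drivers = {"labor_rate": 150.0, "hours": 60_000, "sw_licenses": 2.5e6}
for name, value in drivers.items():
    lo = total_cost(**{name: value * 0.8})
    hi = total_cost(**{name: value * 1.2})
    print(f"{name}: swing of ${hi - lo:,.0f} around ${baseline:,.0f}")
```

Ranking drivers by swing tells estimators which inputs most deserve scrutiny and risk mitigation.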
If used correctly, EPO estimation procedures could help IRDM management to maintain reliable cost information for use in budget requests. EPO officials said they did not work with the IRDM team to maintain an updated SCBE because the team did not seek their assistance. IRDM was not required to do so because MITS guidance, as of September 2011, does not require project teams to consult with EPO when updating a cost estimate. Without EPO involvement, IRS has less assurance that cost estimate updates will follow best practices. Our cost guide states that cost estimates should be (1) updated to reflect actual costs and changes (i.e., significant modifications to a project's scope or specifications) in order to keep the estimate current as the program passes through new phases and milestones and (2) updated if there are significant cost, schedule, or performance variances. The continual updating of the cost estimate as the program matures not only results in a higher-quality estimate, but also gives cost estimators the opportunity to collect data for use in future estimates as well as incorporate lessons learned. Our cost guide also states that cost estimation work should be done by a central independent estimating organization, and estimators should monitor programs to determine whether preliminary information and assumptions remain relevant and accurate. EPO has the following characteristics—unlike the IRDM project team—that could help provide more reliable cost estimate updates: EPO is able to use robust cost estimation techniques, including the SEER-SEM software cost estimation model. The model uses project histories and cost relationships to produce cost estimates and can estimate costs consistent with best practices—such as adjusting for risk and incorporating the results of a sensitivity analysis.
When EPO estimators validate or update a project’s funding requirements, they tailor SEER-SEM to the project and use it to consider actual cost data from the project team, according to EPO’s Estimator’s Reference Guide. Estimators calibrate the model to include the schedule for remaining work and evaluate and revise key cost drivers, according to the guidance. As mentioned previously, the 2011 IRDM cost estimate did not use a cost estimation model. EPO, as a whole, has more cost estimation experience than the IRDM project team. EPO’s six cost estimators have 43 years of combined cost estimation experience. They all have received training in SEER- SEM and other cost estimation models. In addition, EPO officials said that project teams generally would not have the technical skills to update a SCBE using cost estimation models. Although the IRDM project team has some cost estimation experience and relevant training, IRDM officials do not have SEER-SEM training or experience. Further, our analysis of IRDM’s 2011 cost estimate illustrates that estimate updates done by project teams may not result in reliable cost information. According to EPO officials, an updated estimate developed by EPO would also be independent, more holistic, and would include elements that project teams may miss. EPO-produced updates can help build a historical record of IRS cost estimate data. According to our cost guide, historical data are crucial to developing high-quality cost estimates because estimators usually develop estimates for new programs by relying on data from programs that already exist and adjusting for any differences. EPO officials told us that they are working to build a historical database that compares estimated costs to actual costs. 
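The value of the historical estimated-versus-actual database EPO is building can be sketched simply. The calibration approach and all project figures below are invented for illustration.

```python
# Sketch of how a repository of estimated vs. actual costs could calibrate
# new estimates with an observed cost-growth factor (all data invented).
completed = [  # (estimated $, actual $) for finished projects
    (10e6, 12.5e6),
    (4e6,  4.4e6),
    (20e6, 23e6),
]

# average cost-growth factor observed historically
growth = sum(actual / est for est, actual in completed) / len(completed)

new_point_estimate = 15e6
print(f"growth factor: {growth:.2f}")
print(f"history-adjusted estimate: ${new_point_estimate * growth:,.0f}")
```

This is the basic mechanism by which historical data let estimators adjust new estimates for differences rather than starting from scratch.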
As IRS’s central estimating organization, EPO is uniquely qualified to use cost estimate updates as an opportunity to obtain data that are consistent with other estimates and to use the data to build a historical cost database, which can ensure that future cost estimates are credible. According to MITS officials, it is up to project teams to seek EPO assistance. However, MITS’s Estimation Procedures document, which provides cost estimation guidance to project teams, suggests some cost and schedule variance and project size thresholds that, if exceeded, should cause project managers to contact EPO for an updated estimate. It recommends EPO involvement in updating estimates when: cost or schedule variance are 10 percent or greater for major projects; cost or schedule variance are 25 percent or greater for non-major projects; or a project with development/modernization/enhancement (DME) costs greater than $5 million reaches milestone 3. IRDM meets the first threshold. Specifically, IRDM meets the IRS’s criteria for a “major” project because IRDM’s projected life-cycle costs are about $166 million, based on funding projections submitted to OMB. Also, according to IRDM EVM data, as of September 30, 2011, the program had a greater than 18 percent cost variance and an almost 13 percent schedule variance. IRDM officials said they did not work with EPO to update the 2009 SCBE because it was not required by current MITS guidance. DME is a term used by OMB to describe the program cost for new investments, changes or modifications to existing systems to improve capability or performance, changes mandated by the Congress or agency leadership, personnel costs for investment management, and direct support. For major IT investments, this amount should equal the sum of amounts reported for planning and acquisition plus the associated FTE costs reported in the Exhibit 300. developing cost estimates in 2006. 
Most of its work has focused on developing estimates for proposed projects, rather than updating estimates for existing projects. The Estimation Procedures document was developed in September 2011, and as of October 2011, officials said EPO has six cost estimators on staff. As a result, EPO officials said that project teams generally update IT project cost estimates without EPO assistance and that, as of October 2011, EPO has been involved in few cost estimate updates. However, senior MITS and EPO officials said they would like for EPO to have a greater role in cost estimate updates. If IRS does not have reliable cost estimate updates, projects may face risks and their budget requests may not be adequately justified to inform decision making; these outcomes could be even more significant for projects with cost or schedule variances or high DME costs, such as IRDM. MITS guidance documents used by project managers do not clearly discuss the appropriate uses of different types of cost estimates. According to current guidance used by EPO estimators, non-SCBE cost estimates are less rigorous and are not for use in budgets, but as stated above, neither the IRDM initial budget request nor the current and projected budgets were developed using information from the 2009 SCBE. Three IRS guidance documents describe the relationship between cost estimates and budgets. However, the documents are directed at different audiences, do not present consistent information, and contain different levels of detail. Specifically: EPO’s Estimator’s Reference Guide, used by EPO estimators, states that budgets for IT projects should be established using SCBEs. IRS’s SCBEs rely on cost estimation methods that incorporate best practices from our cost guide, including considerations of risk, and provide a level of confidence associated with the estimate. 
According to our cost guide, for management to make good decisions, the program estimate must reflect the degree of uncertainty, so that a level of confidence can be given about the estimate. The Estimator’s Reference Guide also discusses techniques that estimators can use to update SCBEs, and aligns that process with annual budget submissions. Although this guidance contains many best practices, it is directed at cost estimators; therefore, project managers do not typically have access to it or use it. The MITS Investment Guide, directed at MITS project managers, discusses the role of EPO in developing initial rough estimates and budget-ready SCBEs and states that, if a project does not yet have an SCBE, a rough estimate may be used as a placeholder in a budget request. The guide requires that MITS staff should work to ensure that if an SCBE exceeds an initial rough estimate, the project’s scope and SCBE fit within the appropriated budget. However, the guide does not discuss how, if at all, a budget request should be adjusted if an SCBE provides an estimated cost that is lower than the budget, or how any future cost information should be incorporated into budgets. The Estimation Procedures document, directed at MITS project managers, does not define types of cost estimates or discuss whether they are appropriate for budget decisions. EPO officials said they did not believe it is necessary to characterize the different types of cost estimates in the Estimation Procedures document because they are not necessary for defining the organizational approach to estimation, which is the intent of the document. The document states that if updated cost estimates indicate that a project’s budget needs to change, the changes must be approved. However, it does not specify who should approve the estimates. Without consistent guidance about what types of cost estimates are appropriate for budget requests, project teams may not use the best information available. 
Our cost guide states that, as a best practice, an estimate intended to support budgetary decisions should cover the project’s entire life-cycle and should be supported by a description of the program’s technical characteristics, which would be found in an SCBE. Using a cost estimate that lacks sufficient rigor—such as a preliminary cost estimate, instead of an SCBE—could lead to budget requests that do not accurately reflect program funding needs. For example, the 2007 preliminary IRDM estimate lacks an uncertainty analysis, which would provide a basis for adjusting the estimate to reflect unknown facts and circumstances that could affect costs, and as a result, IRDM managers do not have assurance that the program’s funding level remains appropriate. Further, not providing project managers with guidance on how to incorporate new cost information—either from an SCBE that has replaced a preliminary estimate or from an updated cost estimate—into budget requests could result in requests that do not reflect current or accurate funding needs for a project. IRS provided EVM data in the 2011 IRDM cost estimate to justify its budget requests, but we found that the program’s EVM data are not reliable in any of the areas we reviewed. Reliable data on actual performance, obtained from an EVM system, are a necessary input if an updated cost estimate is to be considered accurate and credible. Because IRDM’s 2011 cost estimate is based on unreliable EVM data, it does not provide adequate support for IRDM’s budget requests. Until IRS addresses deficiencies in its EVM data, it cannot provide reliable cost estimate updates for IRDM. EVM data reliability deficiencies, such as those we observed for IRDM, are common in federal agencies, and we have also previously reported on inconsistencies in implementation of EVM for IT projects at Treasury bureaus. 
Our cost guide (GAO-09-3SP) and OMB's Capital Programming Guide (Supplement to Circular A-11, Planning, Budgeting, and Acquisition of Capital Assets; Executive Office of the President, Washington, D.C.: August 2011) call for agencies to conduct an integrated baseline review to determine whether all program requirements have been addressed, risks have been identified, mitigation plans are in place, and available and planned resources are sufficient to complete the work; and, using qualified staff, to conduct surveillance on the EVM system. Surveillance is reviewing a contractor's EVM system to observe ANSI compliance and how well a contractor is using its EVM system to manage cost, schedule, and technical performance. Treasury's EVM guidance requires a project of IRDM's size to follow an abbreviated set of 10 ANSI guidelines, and to conduct surveillance on the EVM system. Other departments also scale ANSI guidelines according to the size of projects, which could result in some agencies not fully following certain best practices. Further, projects like IRDM, according to Treasury guidance, only need to complete an independent baseline validation—which, although not defined in the guidance, appears to be a less rigorous version of an integrated baseline review. Following ANSI guidelines and conducting an integrated baseline review and surveillance can help ensure that EVM data can indicate how well a program is performing in terms of cost, schedule, and technical matters. This performance information is necessary for proactive program management and risk mitigation, and to maintain a reliable cost estimate. Where applicable, we assessed IRDM's EVM data against the standards cited in the Treasury guidance. We assessed IRDM on three ANSI guidelines, which are fundamental elements of an EVM system and are included in Treasury's abbreviated 10 EVM guidelines. We found that IRDM's EVM system is not compliant with these guidelines. For an overview of our findings on IRDM's EVM data reliability, see table 4 in appendix III.
For each selected guideline we found: WBS: This ANSI guideline states that authorized work elements for the program should be defined, which typically includes using a WBS tailored for effective internal control. Further, a project's schedule, cost estimate, and EVM system should be based on the same WBS, according to our cost guide. The WBSs used in IRDM's schedules do not match the WBS used for EVM. The WBSs in four of the five IRDM schedules reflect detailed project-level tasks, while the WBS used for the EVM data is only broken down by contractor and government efforts, and does not include any project-level data. The WBS used to inform the 2011 IRDM cost estimate was not broken down by contractor or government data, and it did not provide costs for detailed tasks. Without a WBS from which to measure progress and which serves as a consistent framework for the schedules and EVM, there is no basis for reliable EVM data, according to our cost guide. An IRS official said that the WBS in the IRDM schedules is not the primary source of financial information for IRDM. Instead, officials use IRS's financial tracking system, which is much less detailed than the schedules' WBS, to obtain project-level data for EVM. This technique for gathering EVM data at a project level is contrary to the purpose of EVM, which is to integrate cost, schedule, and technical data from detailed work packages that can be monitored for variances against the original plan. Since a resource-loaded schedule forms the foundation for the EVM baseline, both the schedule and the EVM data should be based on the same WBS, according to our cost guide. Because the financial tracking system only provides project-level data, and financial information cannot be traced to the WBS elements in the schedules, the cost associated with the project tasks is unknown.
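The mismatch described above, between a detailed schedule WBS and a contractor/government rollup used for EVM, can be illustrated with a simple set comparison. The WBS codes are invented for the example.

```python
# Illustrative check (invented WBS codes) that the schedule and the EVM
# baseline use the same WBS, as our cost guide calls for.
schedule_wbs = {"1.1", "1.2", "1.3", "2.1", "2.2"}  # detailed task elements
evm_wbs = {"CONTRACTOR", "GOVERNMENT"}              # IRDM-style rollup only

def wbs_mismatch(schedule_elements, evm_elements):
    """Elements tracked in one structure but absent from the other."""
    return {"schedule_only": schedule_elements - evm_elements,
            "evm_only": evm_elements - schedule_elements}

gaps = wbs_mismatch(schedule_wbs, evm_wbs)
# Any non-empty set means cost cannot be traced to schedule tasks.
print(all(not v for v in gaps.values()))  # False: the structures do not align
```

When the two structures share no elements, as here, no cost can be traced from the EVM baseline back to any schedule task.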
Sequencing: This ANSI guideline states that projects should have a schedule that describes the sequence of work—that is, a list of activities in the order in which they are to be carried out—and identifies significant task interdependencies required to meet project requirements. All of the IRDM schedules had significant problems with sequencing. For example, many predecessor and successor tasks were not linked to one another; these links are necessary for properly sequencing work so that the schedule will update in response to changes. Because the schedule is missing so many of these "logic links," it will not automatically recalculate forecasted start and finish dates of remaining activities. Thus, when activities finish late, the dates of affected successor activities will not be recalculated automatically. IRS officials said they are aware of some issues with missing links. As a result of these missing links in all of the IRDM schedules we reviewed, IRDM's schedules are not reliable. Because the schedules form the basis for the performance measurement baseline used to track cost and schedule variances in an EVM system, the data from the IRDM EVM system are not reliable. Time-phased budget baseline: This ANSI guideline states that a program should establish and maintain a time-phased budget baseline, at the control account level, against which performance can be measured. Resources must be accounted for in a schedule in order to develop this baseline, according to our cost guide. IRS was unable to show evidence that it has established and maintained a time-phased budget baseline for IRDM. Specifically, the program does not have one overall schedule, and none of the IRDM schedules we reviewed were completely resource-loaded. IRS officials said that although they do not have a program-level schedule for IRDM, the individual project schedules are linked through interdependencies. However, we could not identify these links in our analysis.
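A minimal forward-pass sketch, with invented tasks, shows why missing logic links keep a schedule from recalculating dates.

```python
# Simplified sketch (invented tasks): a forward pass derives finish dates
# from logic links, so a task with a missing link never reacts to slips.
tasks = {  # name -> (duration_days, [predecessor names])
    "design": (10, []),
    "build":  (20, ["design"]),
    "test":   (15, ["build"]),
    "deploy": (5,  []),          # broken: the link to "test" is missing
}

def forward_pass(tasks):
    """Earliest finish day of each task, derived purely from logic links."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = max((ef(p) for p in preds), default=0) + dur
        return finish[name]
    for name in tasks:
        ef(name)
    return finish

print(forward_pass(tasks))
# "deploy" finishes at day 5 instead of day 50: with no predecessor link,
# a slip in "test" would never push "deploy" out.
```

Scheduling tools perform the same recalculation; without complete links, their forecasted dates are meaningless, which is why missing links undermine the EVM baseline built on the schedule.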
IRS officials said the resource-loaded schedules do not show all project resources because some resources, such as contractor personnel being used across projects, are used in more than one IRDM project. Without schedules that include the resources needed to complete tasks, IRS was unable to prove that it had established and maintained a time-phased budget baseline. This baseline is a critical EVM component for measuring IRDM's performance. According to Treasury EVM guidance, an independent baseline validation should be conducted for a DME project like IRDM. As stated above, GAO and OMB consider an integrated baseline review to be a key element of EVM data reliability, while Treasury guidance allows projects like IRDM to complete a less rigorous independent baseline validation. IRS did not conduct an integrated baseline review or an independent baseline validation for IRDM. IRS officials told us that many of the activities typically done during an integrated baseline validation were performed—such as developing a WBS and schedules. However, as previously discussed, we identified problems with the WBSs and schedules and the baseline process. Without conducting a comprehensive integrated baseline review or independent baseline validation and resolving any resulting issues, IRS has not sufficiently evaluated the validity of IRDM's baseline. This calls into question the reliability of IRDM's EVM data, and could affect the program's ability to identify and mitigate risks. OMB and Treasury require surveillance on EVM systems, and, according to our cost guide, surveillance should be conducted to check whether the EVM system summarizes timely and reliable cost, schedule, and performance information, among other things. IRS officials said IRS is not performing surveillance on the IRDM EVM system because they did not believe it was required and because, according to officials, IRS does not have staff with the necessary technical skills to conduct surveillance.
IRS officials said Treasury is reviewing whether Department-level surveillance would be efficient. It is important for the agency to conduct surveillance of EVM systems to ensure that contractors are following their own processes and satisfying ANSI guidelines. The 2011 IRDM cost estimate does not fully meet best practices for a reliable estimate. It is important for IRDM to have a cost estimate that meets best practices to inform budgetary decisions and ensure that the program is implemented as planned. This standard is particularly important in a budgetary environment with scarce resources. IRDM's 2011 cost estimate could be improved by using EPO's expertise to ensure the cost estimate follows best practices from our cost guide. Additionally, more consistent guidance on using cost estimates to develop budget requests could help program managers for IRDM, as well as other programs, make budget decisions that are supported by reliable cost information. IRS could increase the credibility of IRDM's 2011 cost estimate by ensuring that IRDM's EVM data are reliable. Such reliability could also allow IRS to update projected costs for the remainder of IRDM's implementation. Using a WBS that is developed using best practices from our cost guide could provide a baseline from which to measure progress, a key component of an EVM system. Similarly, developing a single integrated schedule for IRDM that contains all resources needed to implement the program could provide more meaningful EVM data. Finally, providing oversight of the EVM system, through an independent baseline validation and EVM surveillance, could help identify potential program risks and any possible issues with contractor performance. To improve the quality of cost and budget information for IRS IT projects, we recommend that the Commissioner of Internal Revenue take the following four actions:

1. Ensure that the IRDM life-cycle cost estimate is reliable and that budget requests are justified by a reliable cost estimate that follows best practices.

2. Require project managers to consult with EPO to determine if projects could benefit from EPO assistance in updating cost estimates for programs that exceed thresholds recommended by MITS's Estimation Procedures document. For those projects where EPO does not update the cost estimate, IRS should require that the decision and rationale be documented. EPO should use the information from updated cost estimates to develop a historical repository of cost estimation data.

3. Review all guidance applicable to cost estimates and take steps to ensure that it is consistent. As a first step, IRS guidance should require the use of current and reliable project cost estimates to inform budget requests, in accordance with the Estimator's Reference Guide.

4. Improve the reliability of IRDM's EVM data, specifically: address WBS issues by developing an EVM baseline for IRDM that reflects the same WBS as the detailed schedule and IRDM cost estimate; address sequencing issues and enable the development of a time-phased budget baseline by creating a single integrated master schedule for IRDM that is properly sequenced and resource-loaded so that effective and meaningful EVM data can be obtained to better manage the program; conduct an independent baseline validation for the IRDM EVM baseline; and conduct independent surveillance of EVM systems to ensure that data are reliable.

We provided a draft of this report to the Commissioner of Internal Revenue for his review and comment. We received written comments from the Deputy Commissioner for Operations Support, which are reprinted in appendix IV. We sought clarification on IRS's written response regarding whether it agreed with two of our recommendations, and on a reference to OMB guidance. IRS provided us with additional comments, which are summarized below.
In addition, the agency provided technical comments, which we incorporated into the report as appropriate. IRS agreed with one of our four recommendations, partially agreed with another, and disagreed with two. While IRS’s comment letter did not address the recommendation, the Director of Risk Management in MITS’s Strategy and Planning Office told us the agency agrees with the recommendation to require certain IT project managers to consult with EPO about updating cost estimates, documenting decisions not to update cost estimates, and placing data from updated cost estimates in a repository. Similarly, IRS’s comment letter did not address our recommendation to ensure that its cost estimation guidance is consistent. However, IRS officials said they partially agree with this recommendation and have taken steps to ensure that their estimation practices and procedures follow consistent, documented guidance. They noted, however, that in all instances IRS IT cost estimates will be based on the best information available at the time the estimate is requested or required, rather than on current and reliable cost estimates for all budget requests, as we recommended. We note in our report that using an unreliable cost estimate could lead to budget requests that do not accurately reflect program funding needs. Once developed in conformance with guidance and best practices, current and reliable cost estimates can be maintained through normal required monitoring and cost-tracking procedures, unless significant changes in project circumstances warrant updating the estimate. IRS disagreed with our recommendation to ensure that the IRDM life-cycle cost estimate is reliable and that budget requests are justified by a reliable cost estimate that follows best practices.
In its comment letter, IRS wrote that implementing the recommendation would require it to spend resources that do not directly contribute to the successful implementation of the IRDM program, and that the IRDM program is within its budget and schedule. At the end of fiscal year 2011, however, IRDM was under budget by more than 18 percent and behind schedule by almost 13 percent, equal to over 4 months behind schedule, according to IRDM’s EVM data. Such variances indicate that the current funding level may not be appropriate, and because the IRDM program does not have a reliable cost estimate, budget authorities do not have reliable information to determine an appropriate funding level. As we reported, IRS estimated in July 2011 that it would take about 90 days, comprising 8 staff months (or a direct staff cost likely less than $200,000, according to our calculations using a top government salary rate and general benefits rate), to complete a new cost estimate for IRDM. While we agree that federal resources are tight, we believe that such an investment could produce benefits that not only improve the reliability of the IRDM cost estimate but also better ensure that IRS requests the correct amount of resources for IRDM to achieve successful implementation. Further, the benefits of a new estimate would stretch beyond that program and provide an important foundation for improving IRS cost estimates in general. The OMB Capital Programming Guide directs agencies to develop sound cost estimates based on our cost guide and states that during the budget process, the credibility of the costs will be examined, and OMB and the Congress will hold agencies accountable for meeting the schedule and performance goals within the cost estimates. More reliable cost estimates enable Congress and other budget authorities to make more complete and informed decisions.
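The budget and schedule variances cited above are standard earned value management calculations. A minimal sketch, using hypothetical cumulative values chosen only to illustrate the arithmetic (they are not IRDM's actual EVM data):

```python
# Standard EVM variance calculations. The dollar figures used in the
# example call are hypothetical, for illustration only; they are not
# IRDM's actual EVM data.

def evm_variances(bcws: float, bcwp: float, acwp: float) -> dict:
    """Compute standard EVM variances.

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed (actual cost)
    """
    cost_variance = bcwp - acwp          # positive -> under budget
    schedule_variance = bcwp - bcws      # negative -> behind schedule
    return {
        "cost_variance": cost_variance,
        "cost_variance_pct": 100 * cost_variance / bcwp,
        "schedule_variance": schedule_variance,
        "schedule_variance_pct": 100 * schedule_variance / bcws,
        "cpi": bcwp / acwp,              # cost performance index
        "spi": bcwp / bcws,              # schedule performance index
    }

# Hypothetical cumulative values, in millions of dollars.
v = evm_variances(bcws=20.0, bcwp=17.4, acwp=14.2)
print(f"Cost variance: {v['cost_variance_pct']:.1f}% (CPI {v['cpi']:.2f})")
print(f"Schedule variance: {v['schedule_variance_pct']:.1f}% (SPI {v['spi']:.2f})")
```

With these invented inputs the program is roughly 18 percent under budget and 13 percent behind schedule, the same shape of variance the report describes; the point is that such figures are only as trustworthy as the underlying BCWS, BCWP, and ACWP data.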
IRS also disagreed with our recommendation to improve the reliability of EVM data, stating that OMB’s revisions to Circular A-11 removed EVM system requirements due to negative cost benefit. We disagree with IRS, as Circular A-11, Appendix J and the Capital Programming Guide still contain language that directs agencies to use EVM for major projects. Former Circular A-11, section 300.7, instructed agencies to use EVM system requirements to identify areas where problems are occurring when reporting on ongoing investments. While current guidance for Exhibit 300 no longer explicitly discusses EVM reporting on ongoing investments under section 300.7, other sections of the guidance still direct the use of EVM for managing IT capital assets and state that, in general, cost, schedule, and performance goals are to be controlled and monitored by using an EVM system. Moreover, our report assessed the reliability of IRDM’s EVM data because the data were included in IRDM’s 2011 cost estimate and are used to track the program’s progress. Regardless of OMB requirements, any data used for cost estimation and program management, particularly when they help to support a budget request, should be reliable. We will send copies of this report to the Chairmen and Ranking Members of Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Acting Director of the Office of Management and Budget. Copies are also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V. This report builds on our May 2011 report on the Internal Revenue Service’s (IRS) Information Reporting and Document Matching (IRDM) program and on analysis performed during a budget justification review of the IRDM program, conducted from June 2011 to August 2011, which we provided to Congress as technical assistance. In the May 2011 report, we assessed IRDM’s 2009 solution-concept based cost estimate. For the budget justification work and this report, we assessed IRDM’s 2007 preliminary cost estimate. Also for this report, we assessed IRDM’s 2011 cost estimate, comparing it to best practices in our cost guide based on documentation provided to us (see GAO, Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, GAO-09-3SP (Washington, D.C.: March 2009)). Once we determined that the 2011 cost estimate did not fully meet best practices, we determined why this occurred. To this end, we interviewed officials in IRDM’s Program Management Office and MITS’s Estimation Program Office (EPO). We also compared IRS guidance that addresses cost estimation—the EPO’s Estimator’s Reference Guide, MITS’s Estimation Procedures document, and IRS’s Information Technology Investment Planning and Management Guide—to criteria in our cost guide. To assess the extent to which IRS’s practices for capturing IRDM’s actual costs and comparing them to estimated costs, or EVM, generate reliable performance data, we compared the EVM data for IRDM and IRS’s process for maintaining the data to the high-level EVM data reliability tasks outlined in our cost guide.
We assessed the extent to which IRDM’s EVM data adhered to three of the American National Standards Institute’s (ANSI) 32 guidelines; we selected the three guidelines to represent some of the fundamental steps for maintaining a reliable EVM system, as identified in our cost guide, and because these guidelines are also included in the Department of the Treasury’s Earned Value Management Guide, which applies to IRS (see appendix III for a list of the data reliability tasks and ANSI guidelines we reviewed). In situations where Treasury’s Earned Value Management Guide did not require certain OMB or GAO best practices, we assessed IRDM’s EVM practices against the Treasury guidance. To do this analysis, we compared the work breakdown structures used in IRDM’s EVM system, schedules, and cost estimates and identified differences in each. We also assessed each of the five IRDM schedules against scheduling best practices for ensuring that the activities are sequenced and related using network logic, as identified in our cost guide. For both objectives, we interviewed IRS officials in the MITS division, specifically, officials from the IRDM Program Management Office and the Investment Planning and Management Office, which includes EPO. We spoke primarily with officials at IRS Headquarters in Washington, D.C., and IRS’s division office in New Carrollton, Maryland, where the officials responsible for IRDM are located. To assess the reliability of the cost estimate data that we used to support findings in this report, we reviewed relevant program documentation, such as cost estimation spreadsheets, as available, to substantiate evidence obtained from interviews with knowledgeable agency officials. We found the data we used to be sufficiently reliable for the purposes of our report. As appropriate, we attributed the sources of the data. We are making recommendations to IRS to improve data reliability in the future.
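The sequencing assessment described above comes down to a network-logic check: in a properly sequenced schedule, every activity other than the start and finish milestones should have at least one predecessor and at least one successor, so that slipped work pushes out the dates that depend on it. A minimal sketch of such a check (the activity names and schedule structure are invented, not taken from IRDM's schedules):

```python
# Minimal network-logic check for a project schedule: every activity
# except the start and finish milestones should have at least one
# predecessor and at least one successor. Activity names are invented
# for illustration; they are not from IRDM's actual schedules.

def find_sequencing_gaps(activities: dict) -> list:
    """Return descriptions of activities missing predecessor or successor links."""
    successors = {name: set() for name in activities}
    for name, info in activities.items():
        for pred in info.get("predecessors", []):
            successors[pred].add(name)
    gaps = []
    for name, info in activities.items():
        if info.get("milestone") in ("start", "finish"):
            continue  # milestones may legitimately lack one side
        if not info.get("predecessors"):
            gaps.append(f"{name}: no predecessor (dangling start)")
        if not successors[name]:
            gaps.append(f"{name}: no successor (dangling end)")
    return gaps

schedule = {
    "Start":  {"milestone": "start", "predecessors": []},
    "Design": {"predecessors": ["Start"]},
    "Build":  {"predecessors": ["Design"]},
    "Test":   {"predecessors": []},   # broken link: no predecessor
    "Finish": {"milestone": "finish", "predecessors": ["Build", "Test"]},
}
for gap in find_sequencing_gaps(schedule):
    print(gap)
```

An activity flagged this way ("Test" in the sketch) can start or slip without affecting any other date, which is why improperly sequenced schedules cannot produce a meaningful time-phased baseline for EVM.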
We conducted this performance audit from August 2011 through January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table outlines our assessment of the extent to which the Internal Revenue Service’s (IRS) 2011 Information Reporting and Document Matching (IRDM) program cost estimate meets best practices, depicted in figures 1-4. In addition to the contact named above, Libby Mixon, Assistant Director; Laurel Ball; Amy Bowser; Bill Cordrey; Jennifer Echard; Robert Gebhart; Paul Middleton; Donna Miller; Sabine Paul; Karen Richey; Stacy Steele; and Lindsay Swenson made key contributions to this report.

The Internal Revenue Service (IRS) began developing the Information Reporting and Document Matching (IRDM) program in fiscal year 2009 to enhance IRS’s ability to automatically compare different sources of tax information and thus improve its capacity to identify and address taxpayer noncompliance. GAO’s May 2011 report recommended that IRS follow best practices from GAO’s Cost Estimating and Assessment Guide if IRS updated the cost estimate for building IRDM systems. IRS provided a new cost estimate for IRDM in August 2011. In this report, GAO assessed the extent to which (1) the IRDM funding request is supported by a reliable cost estimate and, if not reliably supported, why not; and (2) IRS’s practices for capturing data on IRDM’s actual costs and comparing them to estimated costs, known as earned value management (EVM), generate reliable performance data. GAO compared IRS’s 2011 IRDM cost estimate to criteria in GAO’s cost guide and analyzed IRDM’s earned value management data.
The 2011 Information Reporting and Document Matching (IRDM) cost estimate, used to justify the program’s projected budgets of $115 million for fiscal years 2012 through 2016, generally does not meet best practices for reliability. The cost estimate did not fully meet any of the four best practices for a reliable cost estimate. For example, the cost estimate minimally meets best practices for a well documented estimate because the Internal Revenue Service (IRS) did not provide detailed support for staff resources, and the cost estimate documentation only justified about 6 of the 86 requested full-time equivalent staff for IRDM, among other things. If documentation does not provide source data or cannot explain the calculations underlying the cost elements, the estimate’s credibility may suffer. Although IRS has an independent office of cost estimators that can develop and update cost estimates using cost modeling software that generally follows GAO’s best practices, this office did not develop the 2011 IRDM cost estimate. IRS policy does not require project teams to work with the office to update cost estimates. Additionally, IRS’s cost estimation guidance for project managers is inconsistent regarding how cost estimates should be related to a budget, an inconsistency that could lead to budget requests that do not accurately estimate program funding needs. The IRDM program’s earned value management (EVM) data did not meet data reliability criteria in the areas GAO reviewed. For example, the IRDM project schedule was not properly sequenced, meaning activities were not properly linked in the order in which they are to be carried out. In addition, surveillance was not conducted on IRDM’s EVM system, as required by the Office of Management and Budget and the Department of the Treasury. Surveillance involves having qualified staff review an EVM system.
Because IRDM’s 2011 cost estimate is based on unreliable EVM data, it does not provide adequate support for IRDM’s budget requests. Until IRS addresses deficiencies in the EVM data, it cannot provide a reliable cost estimate for IRDM. GAO recommends that IRS ensure that IRDM has a reliable cost estimate, require certain project teams to work with its Estimation Program Office, improve cost estimation guidance, and improve the reliability of IRDM’s EVM data. IRS agreed with one, partially agreed with one, and disagreed with two of GAO’s recommendations. GAO generally disagrees with IRS’s concerns and still believes the recommendations have merit.
Historically, the U.S. Customs Service under the Department of the Treasury was responsible for collecting revenue from trade in the form of customs duties, taxes, and fees. However, these functions were transferred to DHS under the Homeland Security Act when the U.S. Customs Service was merged with parts of the Immigration and Naturalization Service and the Department of Agriculture’s Animal and Plant Health Inspection Service to form CBP in March 2003. At that time, CBP’s priority mission was homeland security, but the agency was also responsible for facilitating the movement of legitimate trade and people. Congress required in Section 412(b) of the Homeland Security Act that the Secretary of DHS, at a minimum, maintain the level of staff and associated support staff in certain customs revenue functions, which was in part defined as those functions performed by staff in nine positions that were present in the U.S. Customs Service when it became part of DHS in 2003. These customs revenue functions involve trade functions, including trade enforcement. For the purposes of our report, we refer to these as the nine mandated trade positions. Since the creation of CBP, Congress has passed additional legislation that relates to CBP’s trade functions. For example, the Security and Accountability for Every Port Act of 2006 (SAFE Port Act) requires CBP to prepare a resource model to determine the optimal staffing levels that are required to carry out commercial operations, including inspection and release of cargo and the revenue functions described in Section 412(b) of the Homeland Security Act. Accordingly, CBP developed the Resource Optimization Model for trade positions. In its model, CBP identified the staffing levels for 15 positions, of which 9 were the mandated trade positions from the Homeland Security Act, and 6 were nonmandated trade positions that also perform trade functions. 
According to CBP, the Resource Optimization Model generated a staff level range that projects the optimal staffing levels necessary for each of the 15 positions to conduct trade processing and enforcement across the seven Priority Trade Issue areas for a given fiscal year. CBP’s Resource Optimization Model is based on projected workloads, staffing needs, and attrition levels for trade positions, according to officials from CBP’s Office of Trade. The Trade Facilitation and Trade Enforcement Act of 2015 acknowledged many existing CBP trade practices and enforcement processes, such as Priority Trade Issues and partnerships with the trade industry, and also placed additional requirements on CBP, such as enforcing revised U.S. laws concerning imported goods made using forced labor. The Act covers trade facilitation and trade enforcement issues such as import safety, protection of intellectual property, and prevention of evasion of duties. The Act also requires the development and implementation of Centers of Excellence and Expertise (Centers) that CBP had already been piloting and, among other things, centralizes CBP’s trade enforcement and trade facilitation efforts. According to CBP officials, the Act complements and bolsters CBP’s existing initiatives to enhance trade enforcement and facilitation. According to CBP officials, two strategic documents inform the agency’s approach to trade enforcement. CBP Trade Strategy, Fiscal Years 2009-2013 is the most recent CBP trade-specific strategy, according to CBP officials. The strategy lays out four trade goals: (1) facilitate legitimate trade and ensure compliance, (2) enforce U.S. trade laws and collect accurate revenue, (3) advance national and economic security, and (4) intensify modernization of CBP’s trade processes. According to officials in the Office of Trade, CBP’s current trade priorities are reflected in CBP’s Vision and Strategy 2020 document. 
The document lays out four overarching strategic goals with objectives for CBP and addresses the agency’s dual security and trade mission. Two of these four goals address trade enforcement. CBP’s Priority Trade Issues are high-risk issue areas in which violations can cause significant revenue loss, harm the U.S. economy, or threaten the health and safety of the American people, according to CBP. Priority Trade Issues focus CBP’s actions and resources to better direct an effective trade facilitation and enforcement approach, according to CBP. Table 1 provides information on CBP’s Priority Trade Issues and examples of violations and enforcement actions that can occur. The Act required CBP to establish seven Priority Trade Issues, of which five already existed prior to being mentioned in the Act, according to CBP. CBP is led by a Commissioner who oversees CBP’s dual mission of protecting national security objectives while promoting economic prosperity and security, according to CBP documents. This mission was being carried out by more than 60,000 employees as of March 2017, with fewer than 20 percent working on trade-related issues, according to CBP officials. Two of CBP’s six offices are involved in carrying out trade enforcement: the Office of Trade and the Office of Field Operations (see fig. 1). Two other CBP offices, Enterprise Services and Operations Support, provide the Office of Trade and the Office of Field Operations with technical and administrative support for trade enforcement. The flow of imports, or goods, into U.S. commerce is a regulated, multifaceted process that CBP is responsible for facilitating and enforcing. Imported goods enter at over 300 ports by air, land, or sea. The flow of goods can be characterized by three stages: pre-entry, entry, and post-entry (see fig. 2). At pre-entry, before goods leave their country of origin and prior to goods arriving at a U.S.
port of entry, importers and carriers file paperwork and provide required advance electronic information for CBP to review. At entry, importers or brokers file entry documents when goods reach a U.S. port of entry where CBP scans and possibly examines them for import security and trade enforcement purposes before they enter into U.S. commerce. In some cases, CBP may target cargo for examination based on a risk assessment. Cargo that is scanned or inspected may be deemed as nonadmissible because of trade law violations, among other things. If CBP finds such violations, it may seize the cargo and issue penalties and/or fines. If the goods pose a risk of nonpayment of duties, and the shipment meets certain requirements, CBP may require additional bond coverage. Admissible goods are released from the port and enter into U.S. commerce. At post-entry, importers or brokers file an additional set of entry summary documents that CBP reviews to ensure trade compliance, after entry of the goods has been authorized. CBP verifies the importer’s cargo classifications and calculation of customs duties, taxes, and fees owed, taking action when needed. For example, CBP may determine that an importer misclassified cargo in an attempt to pay lower duty rates, such that the agency issues the importer a bill for a greater amount based on the proper classification and possibly applies a penalty. CBP also continues to review and process trade information provided by the importer. For example, CBP may conduct audits and reviews and validate information provided by the importer to check for importer compliance. CBP’s trade policy, processing, and enforcement are primarily carried out by two offices—the Office of Trade and the Office of Field Operations. The Office of Trade develops policies to guide trade enforcement efforts, while the Office of Field Operations conducts a range of trade processing and enforcement activities at ports. 
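The post-entry duty verification described above is, at bottom, a recomputation of duties under the correct classification, with the difference billed to the importer. A minimal sketch, using invented classification codes and duty rates purely for illustration (actual rates come from the Harmonized Tariff Schedule):

```python
# Sketch of a post-entry duty recalculation after CBP corrects a
# misclassification. The classification codes and ad valorem duty
# rates below are invented for illustration; real rates come from
# the Harmonized Tariff Schedule.

DUTY_RATES = {
    "6110.20": 0.165,   # hypothetical lower-duty classification
    "6110.30": 0.320,   # hypothetical higher-duty classification
}

def duty_owed(entered_value: float, classification: str) -> float:
    """Ad valorem duty: entered value times the rate for the classification."""
    return entered_value * DUTY_RATES[classification]

def reclassification_bill(entered_value: float,
                          declared: str, correct: str) -> float:
    """Additional duty owed when CBP corrects a misclassification."""
    return duty_owed(entered_value, correct) - duty_owed(entered_value, declared)

# Importer declared the lower-rate code on a hypothetical $100,000 shipment.
bill = reclassification_bill(100_000, declared="6110.20", correct="6110.30")
print(f"Additional duty billed: ${bill:,.2f}")
```

Any penalty CBP applies on top of the corrected duty is a separate determination; the sketch covers only the duty difference itself.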
CBP staff in a variety of positions, including import specialists and CBP officers, conduct a range of trade processing and enforcement activities. CBP’s previously port-centric approach to trade enforcement has shifted to a national-level, industry-focused approach with the establishment of the Office of Field Operations’ 10 Centers of Excellence and Expertise. These Centers represent a shift in trade operations, centralizing the processing of imported goods on a national scale through a single industry-related Center rather than individual ports of entry. Within CBP, the Office of Trade is the lead entity for trade policy and operational guidance. The Office of Trade is responsible for developing policy and practices to ensure that importers comply with U.S. trade laws and regulations, directing enforcement when compliance does not occur, and facilitating processes with industry partners. The Office of Trade guides the Office of Field Operations’ trade enforcement efforts at the ports through policy documents and directives, according to CBP officials. The Office of Trade is composed of six directorates: five are focused on trade issues, and one provides human capital and financial support to the office (see table 2 for a description of the directorates and app. III for an organizational chart of the Office of Trade). The Trade Policy and Programs directorate oversees CBP’s seven Priority Trade Issues, according to CBP officials. Each branch within this directorate is headed by a director who oversees components covering policy, enforcement, targeting, and operations as they relate to enforcement of the Priority Trade Issues at the field office and port level. See figure 3 for an organizational chart of the Trade Policy and Programs directorate and its branches. While the Office of Trade focuses solely on trade, the Office of Field Operations is responsible for both border security and the facilitation of lawful trade and travel at U.S. ports of entry.
The Office of Field Operations operates 20 field offices located throughout the United States. The field offices, which are organized by regions, manage over 300 ports where cargo enters. The Office of Field Operations is composed of seven directorates, six with responsibility for trade enforcement and border security, and one that provides human capital and financial support. According to CBP, four of the seven directorates and relevant divisions are involved in carrying out CBP’s trade enforcement. Table 3 describes the four directorates that carry out trade enforcement as well as one directorate that carries out human capital and financial support. (See app. III for the organizational chart of the Office of Field Operations.) Within the Office of Trade and the Office of Field Operations, trade operations and enforcement are carried out by CBP staff in various positions at ports located throughout the United States and CBP headquarters. In the Resource Optimization Model, CBP’s Office of Trade identified 15 positions that carry out trade functions, including the 9 mandated trade positions and 6 nonmandated trade positions that may perform security functions in addition to trade functions. In this report, we refer to these 15 positions as trade positions. Thirteen of the 15 trade positions are located within the Office of Trade or the Office of Field Operations, one is in Enterprise Services, and one is in Operations Support (see table 4). CBP’s 10 Centers of Excellence and Expertise have changed the way in which CBP conducts trade operations, centralizing the processing of imported goods on a national scale through a single industry-related Center rather than through individual ports of entry. Within the Office of Field Operations, the Centers of Excellence and Expertise are organized by industry, with staff located in ports across the United States (see fig. 4). 
Each Center is responsible for performing trade functions related to its industry sector, such as the processing of entry summary and post-entry summary documentation and account management, regardless of the cargo’s port of entry. Each Center is located within a CBP field office and has a Center director. Each Center is organized into three divisions – Validation and Compliance, Enforcement, and Partnership Programs – each covered by an assistant Center director. Center staff are located at ports across the United States. The Centers have made it easier for CBP to gain a national perspective on the movement of trade, compliance issues, and enforcement patterns; they also have enhanced commodity expertise and industry-based knowledge for import specialists, according to CBP officials. Before CBP established the Centers, documents associated with the imported goods were processed by import specialists at the ports where the goods physically entered, so importers had to communicate with multiple import specialists at ports across the United States to process goods if they imported goods into more than one port (see fig. 5). As a result, it was harder for CBP to uniformly process entries and detect import patterns across the nation. For example, a potential trade violation caught by an import specialist at one port might not have been caught by an import specialist at another port. Now, after the establishment of the Centers, goods are processed by import specialists assigned to specific Centers, and importers work with one Center. In addition, the Centers have centralized CBP’s support for the trade community; now the trade community can reach out to one Center instead of multiple ports with questions about the import process, according to CBP officials. With the 10 Centers fully operationalized, Center staff are adjusting to a new work environment that involves remote teams. 
Prior to the creation of the Centers, import specialists were reporting to and managed by a supervisor and port director located at their port, while national account managers reported to the Office of Trade. Now, import specialists and national account managers are reporting to and managed by the Center they are assigned to, even though they may not be physically co-located with their supervisors or Center director. According to CBP officials, entry specialists are currently reporting to their local port supervisors and managers, but will be reporting to and managed by 1 of the 10 Centers later in fiscal year 2017. Figure 6 illustrates an example of the remote management environment of the Electronics Center. According to CBP officials, Centers and their staff face challenges that stem mainly from the transition to working in a remote environment and are making efforts to address these challenges: Virtual communication. Management and staff are increasing their usage of technology, teleconferences, and webcams and undergoing training to facilitate building remote teams. In addition, division directors and supervisory import specialists may teleconference once a month to discuss challenges in remote management. New policies and procedures. Managers are learning different administrative policies such as leave policies as well as union rules that vary by port and operating across different time zones. In general, managers cover one geographic region, so they do not need to learn every port’s policies. Funding and support structure. While the Centers do not have a separate budget to support activities such as travel and do not have mission support staff, the field offices have generally been supportive, according to CBP officials. The CBP field offices that house each of the Center directors have discretion on budget matters and mission support for the Centers. 
The Centers also find alternatives to activities that may require funds, such as attending webinars instead of in-person training. CBP uses a layered, risk-based approach to guide its trade enforcement activities across its Priority Trade Issues but generally does not set performance targets to assess the effectiveness of its activities. CBP’s trade enforcement activities leverage many different units within CBP and at other government agencies, according to CBP officials. See figure 7 for examples of these activities by stage of entry of goods into the United States. CBP has created plans to set goals and objectives for its Priority Trade Issues; these plans contain some performance measures but generally lack targets to measure achievements and effectiveness. CBP focuses on creating partnerships with industry to support its goal of increasing compliance among importers, according to CBP, and thereby reducing the risk of allowing noncompliant goods to enter the United States. According to CBP, partnership with industry helps expedite the flow of legitimate trade shipments and reduces the examination rate of low-risk importers, allowing CBP to focus its trade enforcement efforts on higher-risk importers. Examples of CBP’s partnership programs include the following: The Customs-Trade Partnership Against Terrorism (C-TPAT) program. Through this public–private partnership program, members of the trade community volunteer to adopt tighter security measures throughout their international supply chains in exchange for enhanced trade facilitation, such as expedited processing. According to CBP, over 50 percent of the imports into the United States by value in 2016 were C-TPAT imports. The Importer Self-Assessment (ISA). Current members of C-TPAT can apply to be ISA-certified, which means that importers have developed and implemented internal controls and assessed risk based on self-testing.
The benefits of ISA to industry, according to CBP, include importers’ exemption from comprehensive CBP audits, fewer cargo exams, and faster clearance of cargo at the ports of entry. The Trusted Trader Program (currently in a pilot phase). The Trusted Trader Program’s goal is to unify the current C-TPAT and ISA processes in order to integrate supply chain security and trade compliance and further increase the low-risk importer population, which allows CBP to focus on the high-risk importers. The development of Trusted Trader is a coordinated effort with members of the trade community, CBP, and partner government agencies. The program is carried out by the Partnership Divisions of the Centers of Excellence and Expertise, which are composed of national account managers, import specialists, and entry specialists who work with the Trusted Trader accounts. According to CBP, this arrangement enables CBP and partner government agencies to provide additional incentives to participating entities and enhance efficiencies by managing supply chain security and trade compliance within one partnership program. In addition to partnership programs, according to CBP officials, staff at the Centers work closely with company representatives through meetings and seminars to enrich their knowledge of products and better understand the nuances of a particular industry to enhance their ability to identify high-risk importers. According to CBP, such partnerships with the private sector include participation in the Commercial Customs Operations Advisory Committee (COAC) to U.S. Customs and Border Protection working groups, roundtable meetings with industry representatives, and educational seminars.
Another partnership is through E-allegations, CBP's online referral process for alleging trade violation(s) by importers, which provides a means for industry and the public to report to CBP any suspected violations of trade laws or regulations related to the importation of goods into the United States. CBP's targeting efforts are carried out by different targeting groups at the national and port levels, and can support or help identify enforcement actions that are carried out on high-risk shipments. Targeting involves obtaining information about the shipment entering U.S. ports and those parties involved in moving the cargo and goods, such as the importer. Targeting groups develop user-defined rules (targeting rules) based on advance data, research, and other information that might show trends or emerging trade issues. The targeting rules are generally entered into CBP's Automated Targeting System and flag certain cargo and goods for CBP officials responsible for trade enforcement to inspect or review documentation for potential trade violations. The targeting rules are ranked as mandatory, or high-, medium-, or low-risk, providing CBP officials some discretion on the need to take action. At the national level, CBP's targeting efforts are carried out by different units, according to CBP. Within the Office of Trade, the National Targeting and Analysis Groups (NTAG) and Commercial Targeting and Analysis Center (CTAC) target for further review high-risk imports that are related to the Priority Trade Issues. There are currently five NTAGs, located across the country and staffed mainly by international trade specialists who specialize in post-entry targeting for their respective Priority Trade Issues.
The CTAC facilitates information sharing among partner government agencies on targeting and enforcement at all stages of the import process – pre-entry, entry, and post-entry – focusing on a variety of issues, including import safety and environmental crime, natural resources, wildlife trafficking, and cultural property, according to CBP. These targeting groups also collect information to develop targeting rules from import specialists who are assigned to 1 of the 10 Centers that cover their respective industry. Also at the national level, the Office of Field Operations’ National Targeting Center provides advance targeting, research, and coordination for CBP field units, the intelligence community, foreign counterparts, and investigative and law enforcement agencies in support of CBP’s antiterrorism mission and includes a cargo component focused on trade called the Tactical Trade Targeting Unit. This unit provides a national targeting perspective and is focused on developing user-defined rules that have a high probability of leading to investigations, according to National Targeting Center officials. The National Targeting Center established an integrated operational network, known as the Integrated Trade Targeting Network (the network), between all of CBP’s national level trade targeting assets in the Office of Trade and the Office of Field Operations. According to CBP officials, the purpose of the network is to improve communication, coordinate actions, and standardize procedures for more effective trade targeting. The network provides training to enhance the knowledge of relevant CBP field and headquarters personnel in the areas of automated targeting and reporting systems and targeting techniques. At the port level, CBP’s targeting efforts are also carried out by CBP officers who utilize user-defined rules to identify high-risk cargo coming through the port for examination, according to CBP officials. 
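The rule-ranking logic described above (mandatory rules compel an examination, while high-, medium-, and low-risk rules leave room for officer discretion) can be sketched in a few lines. This is purely illustrative: the rule names, shipment fields, and thresholds below are invented, and the sketch is not a model of CBP's actual Automated Targeting System.

```python
from dataclasses import dataclass

# Illustrative only: a toy model of risk-ranked targeting rules.
# Rule names, shipment fields, and thresholds are hypothetical.

RANKS = {"mandatory": 3, "high": 2, "medium": 1, "low": 0}

@dataclass
class Rule:
    name: str
    rank: str            # "mandatory", "high", "medium", or "low"
    predicate: callable  # returns True if the shipment matches

def evaluate(shipment, rules):
    """Return matched rules, whether exam is required, and ranked discretionary hits."""
    hits = [r for r in rules if r.predicate(shipment)]
    must_examine = any(r.rank == "mandatory" for r in hits)
    # Non-mandatory hits are discretionary: rank them for the officer.
    discretionary = sorted(
        (r for r in hits if r.rank != "mandatory"),
        key=lambda r: RANKS[r.rank], reverse=True)
    return hits, must_examine, discretionary

rules = [
    Rule("adcvd-evasion", "mandatory",
         lambda s: s["commodity"] == "steel pipe" and s["origin"] == "XX"),
    Rule("ipr-watch", "high",
         lambda s: s["brand_flagged"]),
    Rule("undervaluation", "medium",
         lambda s: s["declared_value"] < 0.5 * s["reference_value"]),
]

shipment = {"commodity": "steel pipe", "origin": "XX",
            "brand_flagged": False,
            "declared_value": 900.0, "reference_value": 2500.0}

hits, must_examine, discretionary = evaluate(shipment, rules)
print(must_examine)                       # True: a mandatory rule matched
print([r.name for r in discretionary])    # ['undervaluation']
```

Here the mandatory rule forces an examination regardless of the other matches, while the remaining hits are ranked so an officer can prioritize discretionary reviews.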
CBP officers located at ports must examine all shipments that have a mandatory rule in the Automated Targeting System, but are given discretion to examine or not examine cargo for all other nonmandatory rules, according to CBP. For example, the targeting rules may trigger a requirement that CBP hold up the movement of a specific shipment at a port (called “holds”) until CBP officers can examine the shipment or import specialists can review documentation to determine whether there are possible trade violations. CBP may also conduct targeting based on advance information it receives on shipments coming into ports (see fig. 8 for an example of goods seized as a result of targeting). CBP’s targeting efforts can support enforcement operations that are carried out by numerous CBP entities such as import specialists, agriculture specialists, and CBP officers as well as U.S. Immigration and Customs Enforcement’s (ICE) Department of Homeland Security Investigations (HSI), a partnering component agency of CBP under the Department of Homeland Security, and other partner government agencies. For example, CBP reported in December 2016 that it conducted several operations in the last quarter of fiscal year 2016 focused on targeting counterfeit goods—such as apparel, footwear, auto parts, and handbags—coming through express consignment and international mail facilities. CBP reported that these operations resulted in seizures of 948 shipments of counterfeit goods with an estimated manufacturer’s suggested retail price of over $20 million for the goods. CBP conducts post-entry audits and validation activities to assess trade compliance and identify possible trade violations across the Priority Trade Issue areas. Customs auditors in the Regulatory Audit directorate within CBP’s Office of Trade reported that they use a risk-based approach to select candidates for assessment. 
According to officials from Regulatory Audit, their audit plans tend to focus on examining importers with high dollar value shipments and potential risk to revenue—generally large and midrange importers. Regulatory Audit conducts various types of audits and reviews, including the following: Focused assessment–comprehensive audits of major importers. These audits assess internal control over import activities to determine whether the importers pose acceptable risk for complying with CBP laws and regulations. Such assessments can look at value, classification, free trade agreements, intellectual property rights, free trade zones, prior disclosures, and antidumping and countervailing duties. Quick response audits–targeted compliance audits with narrowly defined objectives. These audits focus on a single issue or specific concern referred to Regulatory Audit by CBP's NTAG or ICE, and cover Priority Trade Issues and other areas. Such audits can cover antidumping, intellectual property, import safety, textiles and wearing apparel, drawback, and foreign trade zone issues. Surveys of importers. These surveys are performed as a result of an assertion or allegation, such as an e-allegation, with an objective to obtain an understanding of a company's importing practices to determine if there are concerns that merit future CBP consideration. Regulatory Audit officials we met with in Los Angeles and New York told us that the majority of their assignments are made up of quick response audits and surveys. According to CBP officials, the audits and reviews can result in the collection of additional revenue. For example, Regulatory Audit in headquarters reported that for fiscal year 2015, it identified, through its audit recommendations, over $109 million of revenue owed by importers and was able to collect almost $60 million that year.
The audits and reviews also provide informed and enforced compliance to the trade community and act as a deterrent by discouraging potential violators. In addition to activities carried out by Regulatory Audit, the Centers also conduct validation and compliance checks by reviewing post-entry importer records. All Centers have a division that conducts validation and compliance checks to ensure importer compliance. The checks are mainly carried out by teams of import specialists that specialize in different commodity types within their industry section. For example, by reviewing an importer’s post-entry summary records, the Center may validate that the entry complied with all conditions to be treated as a specific commodity under free trade agreements in which the United States is a party. When prompted by mandatory targeting rules, import specialists at the Centers are also responsible for reviewing the importers’ post-entry documentation and recording in the Automated Targeting System whether any potential violations were found. Some officials from targeting groups that we met with told us that import specialists are inconsistent about recording the results of the reviews they conduct, which impedes the targeting groups’ ability to assess the effectiveness of their targeting rules. According to Center procedures, import specialists are responsible for working with importers to ensure future compliance if violations have occurred and goods are seized or penalties are issued. CBP can take actions against noncompliant importers when applicable. For example, CBP can seize imported goods if it believes there is a violation of a trade law. The types of seized shipments range across the different Priority Trade Issue areas such as intellectual property rights, textiles, wearing apparel, and import safety (see fig. 9). For instance, according to CBP, 28,865 shipments were seized based on intellectual property rights violations in fiscal year 2015. 
The top five categories for intellectual property rights seizures include consumer electronics and pharmaceuticals, each of which often involves shipments that pose threats to consumer safety. According to CBP, as part of the seizure process, the Office of Field Operations' Fines, Penalties and Forfeitures office will send the importer or another relevant party a notice of seizure. Any interested party may file a petition for relief from seizure within 30 days, according to CBP. CBP stores the seized goods until a final decision is made; if it does not return the goods to the importer, it generally destroys or sells the goods, as appropriate. CBP also has the authority to issue penalties and/or fines against importers, brokers, and other entities bringing goods into the United States that violate the law. Penalties may result, for example, if products are not properly marked with the country of origin, trademarks are violated with counterfeit goods, or illegal goods such as controlled substances are found. Penalties are monetary and are established by statute, according to CBP. In fiscal year 2015, CBP assessed over $237 million in penalties for violations related to its Priority Trade Issues and collected $3.5 million, according to CBP. CBP officials collaborate with officials from the HSI directorate within ICE on civil and criminal cases involving trade fraud, according to CBP and HSI officials. CBP and HSI coordinate at the field office level through regularly scheduled meetings, according to HSI officials. At these meetings, according to HSI and CBP officials, HSI special agents meet with CBP officials, such as import specialists, customs auditors, and international trade specialists, to discuss potential and active cases and upcoming operations.
While HSI officials have traditionally worked with local import specialists, they have had to adjust to working with import specialists located in other cities because of the virtual nature of the import specialists at the new Centers of Excellence and Expertise. For example, HSI officials in one city may be working on a case that involves a commodity assigned to a particular Center, but CBP no longer has a local import specialist that works for that Center—so the HSI officials must work with the import specialist located in another city who works for that Center, according to HSI officials. This has caused a decrease in cooperation and communication between CBP and HSI resulting in fewer investigations, according to HSI. CBP officials will share information on suspected activities or importers that are noteworthy based on their own trade enforcement efforts, which may lead to an HSI investigation. For example, while processing shipments entering the United States, import specialists at the Centers may identify unusual trade patterns or documentation discrepancies for shipments of significance, such as large volume or value, with the shipments having health and safety or national security concerns, according to CBP officials. In addition, according to CBP officials, customs auditors from Regulatory Audit may send referrals to HSI if they find potential fraud or other significant issues during their audits of importers. Officials from CBP’s Fines, Penalties and Forfeitures office might also work with HSI on cases that may include seizures of goods. According to Fines, Penalties and Forfeitures officials at one of the ports we visited, more than 50 percent of their time is spent working with HSI on cases involving seized property. According to CBP and HSI officials, other U.S. partner government agencies may also be involved in investigations involving trade violations. 
CBP also conducts investigations stemming from allegations brought forth by industry and pertaining to the Enforce and Protect Act of 2015. According to the Office of Trade, such investigations draw upon the expertise of resources that include the Centers, targeting groups, and Regulatory Audit to look into the allegations and share the outcomes of investigations with industry. CBP has created strategic or annual plans that contain strategic goals for its Priority Trade Issues and some performance measures but generally lack targets to measure achievements and effectiveness. Leading practices for managing for results note, among other things, that a plan should identify goals and measures covering each of its program activities and contain targets to assess progress toward performance goals. Numerical targets or other measurable values facilitate further assessments of whether overall goals and objectives were achieved because comparisons can be easily made between projected performance and actual results. The Act requires CBP to develop, by February 24, 2017, a joint strategic plan with ICE that incorporates enforcement activities and performance measures. However, the Act does not require that the strategic plan include targets. This plan is currently in draft form, according to CBP, and because the plan is still changing, CBP declined to share a copy and declined to discuss the extent to which the plan would identify performance measures and targets that would enable CBP to gauge the effectiveness of its activities by Priority Trade Issue. We asked CBP to provide us with strategic or annual plans for its seven Priority Trade Issues to determine how the agency measures its effectiveness in carrying out its trade enforcement efforts for each Priority Trade Issue area. CBP provided us with various plans for its Priority Trade Issues.
While one of these Priority Trade Issues had plans that were current, the other Priority Trade Issues either did not have a plan or were using plans that had not been updated in several years. According to officials from the Office of Trade, CBP is in the process of finalizing plans for those Priority Trade Issues with outdated plans or no plans (see app. IV for the status of strategic or annual plans by Priority Trade Issue, as of February 2017). Our analysis of all the Priority Trade Issue plans provided to us shows that they generally lacked performance targets that would enable CBP to assess the effectiveness of its enforcement activities. While we found that most plans identified strategic goals, some plans had performance measures but generally did not contain performance targets that would allow CBP to assess its actual performance against planned performance or compare to past performance. For example, the AD/CVD annual plan for fiscal year 2017 has a number of performance measures that pertain to enforcement, and these include reporting the number of AD/CVD-related audits, nonaudit services, surveys conducted, and importers removed from a targeting rule. However, CBP did not set target levels of performance for these measures in the plan, such as the target number of AD/CVD-related audits and surveys to be conducted. Without targets, CBP may not be able to determine the effectiveness of its trade enforcement activities, particularly whether its actual results met its projected performance for an activity. CBP also reports on some trade enforcement performance measures related to its Priority Trade Issues in various documents, including reports mandated by Congress and annual reports prepared by various CBP entities. For example, a textiles report to Congress describes the types and number of enforcement actions taken, such as the number and value of seizures and penalties and the number of cargo examinations conducted by fiscal year.
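The distinction GAO draws between a measure and a target is easy to make concrete: a measure yields a count, but only a target turns that count into an assessment of performance. A minimal sketch, with invented figures that are not CBP data:

```python
# Hypothetical figures: comparing actual results to performance targets.
# A measure with no target (None) yields a count but no assessment.
measures = {
    "AD/CVD-related audits completed": {"actual": 42, "target": 60},
    "Importer surveys conducted": {"actual": 118, "target": 100},
    "Importers removed from a targeting rule": {"actual": 7, "target": None},
}

def assess(measures):
    """Return actual/target ratios; None where no target was set."""
    results = {}
    for name, m in measures.items():
        if m["target"] is None:
            results[name] = None  # reportable, but not assessable
        else:
            results[name] = m["actual"] / m["target"]
    return results

for name, ratio in assess(measures).items():
    if ratio is None:
        print(f"{name}: {measures[name]['actual']} (no target; cannot assess)")
    else:
        print(f"{name}: {ratio:.0%} of target")
```

With a target, each count becomes a statement about whether projected performance was met; without one, the count can only be tracked year over year.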
Also, in its first annual report covering its operations and programs for fiscal year 2015, CTAC—which covers CBP's import safety Priority Trade Issue—reported on the number of seizures by operations that were conducted with U.S. partner government agencies, ineligible products that were prevented from entering the United States, and the number of enforcement operations conducted, among other things. However, while these metrics allow yearly changes to be identified and tracked, neither of the two reports we reviewed included targets that would help officials with oversight responsibilities assess performance and the effectiveness of CBP's enforcement activities. Over the past 5 fiscal years, CBP generally has not met the minimum staffing levels set by Congress for four of nine positions that perform customs revenue functions, and it generally has not met the optimal staffing level targets set by the agency for these positions. The Homeland Security Act set mandatory minimum staffing levels for nine mandated trade positions. CBP's Resource Optimization Model projected optimal staffing levels for the 15 identified trade positions, 9 mandated and 6 nonmandated, for the period of fiscal years 2015 through 2022. Staffing shortfalls can lead to decreased effectiveness of trade enforcement. CBP faces several challenges to hiring and filling staffing gaps, according to CBP officials. We found that CBP has not articulated how it plans to address challenges to filling staffing gaps for trade positions. Our analysis of CBP data from fiscal years 2012 through 2016 shows that the numbers of staff in four of the nine mandated trade positions – import specialist, customs auditor, national import specialist, and drawback specialist – were generally below the minimum mandated staffing and optimal staffing levels. In addition, staffing levels for these positions generally declined during this period. Staffing levels for import specialists provide an example.
The Homeland Security Act set the minimum mandated staffing level for import specialists at 984, and CBP’s Resource Optimization Model calculated an optimal staffing level range from 984 to 1,748 import specialists for fiscal years 2015 through 2022. However, the actual staffing levels were below the mandated levels for 4 of the 5 years, consistently declining in the last 3 years from 954 import specialists at the end of fiscal year 2014 to 917 import specialists at the end of fiscal year 2016. The actual staffing levels for customs auditor, national import specialist, and drawback specialist were also generally below the minimum mandated staffing levels from fiscal years 2012 through 2016 and below the optimal staffing level targets in fiscal years 2015 and 2016. In addition, the staff levels for the financial system specialist position were below the mandated levels for 2 out of the 5 fiscal years and below optimal staffing levels for both fiscal years 2015 and 2016. For the four other mandated trade positions, CBP met or exceeded the mandated staffing levels. See table 5 for a list of mandated trade positions and their mandated and optimal staffing levels compared to actual staffing levels as of the end of fiscal years 2012 through 2016. For the six nonmandated trade positions identified in CBP’s Resource Optimization Model, CBP reported that actual staffing levels for five of the positions, as of October 2014, were below the optimal staffing range. For example, the actual level of staff for the CBP officer position, as of October 4, 2014, was significantly below the optimal staffing range, with an actual staff level of 6,889, representing about 1,800-2,800 fewer CBP officers on board than the model’s optimal staffing level. See table 6 for the optimal and actual staffing levels for nonmandated trade positions as reported by CBP, but not mandated by the Homeland Security Act. 
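The comparison of actual staffing to the mandated floor and the Resource Optimization Model's range reduces to simple arithmetic. The sketch below uses the import specialist figures cited above (917 actual at the end of fiscal year 2016, a mandated minimum of 984, and an optimal range of 984 to 1,748); the helper function itself is illustrative, not CBP's model:

```python
def staffing_gap(actual, mandated_min, optimal_range):
    """Shortfall against the statutory floor and the optimal staffing range."""
    lo, hi = optimal_range
    return {
        "below_mandate": max(0, mandated_min - actual),
        "below_optimal_low": max(0, lo - actual),
        "below_optimal_high": max(0, hi - actual),
    }

# Import specialist figures from the text: FY2016 actual 917,
# mandated minimum 984, optimal range 984 to 1,748.
gap = staffing_gap(actual=917, mandated_min=984, optimal_range=(984, 1748))
print(gap)
# {'below_mandate': 67, 'below_optimal_low': 67, 'below_optimal_high': 831}
```

The same arithmetic applied to the CBP officer figure (6,889 actual against the model's range) yields the roughly 1,800 to 2,800 shortfall the text describes.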
The optimal and actual staffing levels for the nonmandated trade positions, according to CBP officials, are based on an estimated percentage of time that staff spend on trade functions. For example, while there were 22,000 CBP officers overall in fiscal year 2014, CBP reported that the full-time equivalent of 6,889 staff spent time on trade activities. All of the nonmandated trade positions are assigned to the Office of Field Operations, except the Chemist position, which is assigned to Operations Support. Staffing shortfalls in certain key trade positions can also impact CBP's ability to identify and address risk in trade operations. For example, according to officials from Regulatory Audit in headquarters, if CBP met the mandatory staffing levels for customs auditor, Regulatory Audit would be able to more effectively address risk in the multitude of trade areas. Specifically, they told us that Regulatory Audit would be able to increase the number of importers it audits and expand the scope of their work. In addition, according to Office of Field Operations officials, if CBP met the mandatory staffing levels for import specialists, Centers could conduct more enforcement operations and focus on specific trade issues. Staffing shortfalls in trade positions can impact CBP's trade processing and enforcement efforts, including CBP's ability to enforce trade effectively. For example, staffing shortfalls can lead to decreased cargo inspections, according to several CBP officers at three ports we visited. According to these CBP officers, CBP officers at ports respond to rules from targeting groups to inspect cargo for trade violations, particularly when the instructions are mandatory. CBP officers have discretion to inspect cargo that is characterized as nonmandatory, which may be helpful in gaining new information about potential trade violations according to CBP officers.
However, some CBP officers told us that, because of staffing shortages, they focus their efforts on addressing the mandatory inspections and may not be able to conduct any additional inspections, contributing to missed opportunities for assessing risk. Shortfalls in CBP officers may also lead to reassigning CBP officers during periods of high traffic volume, as we found in a 2013 report. In 2013, at three of the six land borders visited, CBP field and port officials reported to us that CBP had insufficient staff to process cargo arriving by commercial vehicles. As a result, CBP had to reduce the number of CBP officers assigned to secondary inspection to open up additional primary inspection lanes for commercial traffic. We reported that staffing shortages were caused in part by budget constraints and time needed to train and assign new CBP officers. In addition, during our port visits, some CBP officers told us that they were pulled from trade functions, such as examining air consignment cargo, to temporarily fill shortages of CBP officers needed to screen air passenger traffic, particularly during the holidays and summer. CBP faces a number of challenges to hiring staff for trade-related positions, such as other hiring priorities and limited numbers of staff focused on hiring for trade positions, according to CBP officials we met with. Some of the hiring challenges identified by CBP officials we met with include the following: Hiring priorities focused on security positions. CBP has focused on hiring staff for security positions, such as border patrol agents and CBP officers, and hiring for trade positions is not an agency-wide priority. Furthermore, trade positions with mandated staffing levels, such as import specialists, have not been hiring priorities. Limited numbers of staff focused on hiring trade positions. CBP's hiring centers have limited numbers of staff dedicated to hiring for trade positions.
As a result, they have a backlog in hiring for trade positions. Lengthy hiring process. Filling trade positions within the Office of Trade and the Office of Field Operations is a lengthy process. Potential candidates tend to drop out because of the time it takes to process an applicant. All positions require lengthy background investigations, and some positions, such as CBP officers, require a polygraph and additional clearances, which can take a long time. Location-based issues. Some positions, such as customs auditor, are harder to fill because other government agencies or the private sector are competing for the same pool of applicants with specialized knowledge, or because the positions are in locations that are less desirable for applicants. CBP does not offer incentives to recruit potential candidates for mandated trade positions, according to CBP officials. CBP has not articulated how it plans to address gaps in staffing for most of its trade positions. While CBP has established targets, it has not articulated a plan to attain those numbers or how budgetary constraints, if any, impact its ability to meet staffing levels. Leading practices in human capital management indicate that agencies, through strategic workforce planning, should address developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. Specifically, leading practices suggest that agencies have a plan to identify strategies for recruiting staff that includes customized strategies to recruit highly specialized and hard-to-fill positions. Such a plan would help CBP to ensure that it meets its staffing targets for trade positions, particularly those positions where it has not met its mandated staffing levels for a number of years. However, we found that CBP has not developed such a plan and, during the course of our audit, we found that it generally conducts hiring for trade positions on an ad hoc basis. 
For example, officials from the Office of Field Operations told us that CBP hired for import specialist positions, as well as most other trade positions, on an as-needed basis based on requests coming from specific port locations. Officials from the Office of Field Operations told us that they are planning to take some actions to meet CBP's staffing targets for trade positions. In December 2016, these officials said that CBP would post general import specialist job announcements starting in December 2016 and renew the announcements as necessary throughout fiscal year 2017. CBP officials indicated in March 2017 that several announcements were posted in late December and that there were over 4,000 applicants for the external announcement pertaining to general import specialist. These officials also stated that CBP plans to track and facilitate the Office of Field Operations' progress toward selecting a sufficient number of applicants by June 2017 to fill all import specialist vacancies, but made no mention of recruiting or retention strategies in general or in hard-to-fill locations. In addition, in February 2017, officials from the Office of Trade told us that they are planning to create a hiring plan for customs auditors and are seeking incentives, but did not provide any time frames. As an agency tasked with collecting revenue and identifying harmful and noncompliant imports, such as counterfeit products and goods that are misclassified to evade duties, CBP needs to ensure that it effectively enforces U.S. customs and trade laws while at the same time facilitating legitimate trade. In 2015, CBP officials processed more than $2.4 trillion in imports through more than 300 ports of entry and collected around $46 billion in revenue, making CBP the second-largest revenue collection agency in the United States.
In 2016, Congress passed the Trade Facilitation and Trade Enforcement Act, which codified the establishment of CBP and highlighted the numerous units within CBP’s Office of Trade and Office of Field Operations that play a critical role in CBP’s trade enforcement process. CBP’s strategic and annual plans for its Priority Trade Issues are intended to help focus the agency’s actions and resources on high-risk issues and direct its trade facilitation and enforcement approach. These plans identify goals and contain some performance measures. However, these plans generally lack performance targets, contrary to leading management practices. Without performance targets, CBP cannot assess its actual performance against planned performance or the effectiveness of its trade enforcement activities. Congress passed the Homeland Security Act of 2002, directing CBP to maintain, among other things, a minimum level of staff and associated support staff in certain customs revenue functions. In 2006, Congress directed CBP in the SAFE Port Act to prepare a resource model to determine the optimal staffing levels that are required to carry out commercial operations, including inspection and release of cargo and the revenue collection and trade functions described in section 412(b) of the Homeland Security Act. In its model, CBP outlined optimal staffing levels for 15 positions needed to perform trade functions and adequately staff Priority Trade Issues; 9 of the 15 are congressionally mandated trade positions. Our analysis of CBP staffing data over the past 5 fiscal years shows that CBP has generally not reached the optimal and mandated staffing levels for some of the 15 trade positions that carry out trade enforcement and protect revenue, such as import specialists, CBP officers, and customs auditors. CBP officials cited several challenges to filling staffing gaps, including that hiring for trade positions is not an agency-wide priority. 
Contrary to leading practices in human capital management, CBP has not articulated how it plans to reach its staffing targets for trade positions over the long term. Without adequate numbers of staff to carry out its numerous trade enforcement activities, CBP faces challenges to effectively carrying out its mandated mission to enforce U.S. trade laws. To strengthen CBP’s trade enforcement efforts, we recommend that the Commissioner of CBP direct relevant CBP units to take the following two actions: The Office of Trade should include performance targets, when applicable, in addition to performance measures in its Priority Trade Issue strategic and annual plans. The Office of Trade and the Office of Field Operations should develop a long-term hiring plan that articulates how CBP will reach its staffing targets for trade positions set in the Homeland Security Act and the agency’s resource optimization model. We provided a draft of this report for review and comment to CBP and ICE. CBP provided technical comments for the sensitive but unclassified version of the report, which we also incorporated in this report, as appropriate. ICE also provided technical comments on this report, which we incorporated, as appropriate. CBP provided formal agency comments, which are reproduced in appendix V. In its comments, CBP concurred with both of our recommendations and identified actions it intends to take in response to the recommendations. In response to our first recommendation, CBP indicated that it will work to identify applicable performance measures with performance targets to include in the fiscal year 2018 annual and strategic plans for its Priority Trade Issues. 
In response to our second recommendation, CBP indicated that the Office of Trade and the Office of Field Operations will partner with the Office of Human Resources Management to identify stakeholders and define challenges that have resulted in hiring gaps in trade-related positions and to develop a long-term hiring and resource plan. We are sending copies of this report to the appropriate congressional committees, the Acting Commissioner of CBP, and the Acting Director of ICE. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We identified a total of 24 reports issued between fiscal years 2011 and 2016 that are related to trade enforcement and contain recommendations to the Department of Homeland Security (DHS) or U.S. Customs and Border Protection (CBP). Nine of these reports were audits conducted by GAO, while 13 were audits and 2 were inspections conducted by the DHS Office of Inspector General (OIG). This appendix provides an overview of key findings and relevant recommendations from these reports, as well as the status of actions taken to implement the recommendations. We identified nine GAO audits, issued between fiscal years 2011 and 2016 and related to trade enforcement, that have recommendations to DHS or CBP. Of the nine audits, two reviewed antidumping and countervailing (AD/CV) duties; three reviewed targeting; two reviewed other topics; and two reviewed the agricultural quarantine inspection (AQI) program, a program that guards against harmful agricultural pests and diseases by inspecting imported agricultural goods, products, passenger baggage, and vehicles at ports of entry. 
Table 7 lists the nine GAO audits. These nine GAO audits contain a total of 46 recommendations, 33 of which concern trade enforcement and are addressed to DHS or CBP. Topics of these 33 recommendations fall into four general categories: AD/CV duties, agriculture (specifically the AQI program), targeting and cargo examination, and other (see fig. 10 for a breakdown of recommendations by topic). Some of these 33 GAO recommendations address weaknesses concerning data reliability, guidance, or performance measures as they relate to the four topics. Recommendations for addressing data reliability include, among other things: improving the reliability of AQI data on arrivals, inspections, and interceptions across ports; developing a better national estimate of compliance with maritime cargo targeting policies by calculating the compliance rate differently; and developing an enhanced methodology for selecting samples to check compliance with policies on examining high-risk shipments. Recommendations for addressing guidance include, among other things: revising guidance to include how targeters are to correctly enter data in targeting systems; updating guidance on how ports are to conduct work studies to determine the correct allocation of staff time; and updating guidance with requirements to establish time frames for issuing trade alerts for exclusion orders, which are notices that the International Trade Commission (ITC) issues to bar from the United States certain imports that infringe intellectual property rights or involve other unfair import practices. Recommendations for addressing performance measures include, among other things: reviewing performance against established time frames; and identifying performance measures for monitoring effectiveness in targeting, AD/CV duty collection, and protecting U.S. agriculture from the introduction of foreign pests and disease.
In the nine reports issued between fiscal years 2011 and 2016, GAO made 33 recommendations to DHS or CBP, of which 24 have been closed as implemented and 9 remain open (see table 8 for the status of GAO's open recommendations, as of October 2016). Most of the open recommendations come from two GAO reports. Three of the nine open recommendations are found in GAO's July 2016 report on $2.3 billion in unpaid AD/CV duty bills. In addition, another three open recommendations are found in GAO's July 2013 report on U.S.-Mexico border wait times. This report on U.S.-Mexico border wait times stated that CBP's Office of Field Operations (OFO) could help CBP better ensure that scarce staff resources are effectively allocated to fulfill mission needs across ports by improving transparency, that is, documenting the methodology and process OFO uses to allocate staff to land ports of entry on the southwest border. We identified 15 DHS OIG reports (13 audits and 2 inspections), issued between fiscal years 2011 and 2016, that relate to trade enforcement and contain recommendations to DHS or CBP. Six of the DHS OIG audits were independent auditors' reports on CBP's financial statements. Of the remaining nine reports, three reviewed targeting and cargo examinations, and the other six each reviewed different topics: the bonding process; the Office of Regulatory Audit; the penalty process; the workload staffing model; bonded facilities; and the Automated Commercial Environment, the central trade data collection system used for, among other things, receiving users' standard data and other relevant documentation required for the release of imported cargo. See table 9 for a list of these DHS OIG reports. The 15 DHS OIG audits and inspections contain a total of 142 recommendations, 103 of which pertain to trade enforcement and are addressed to DHS or CBP.
Eighty-five of these 103 recommendations originate from the six independent auditors' reports on CBP's financial statements and cover recurring topics. Topics of the recommendations fall into 11 general categories. For a breakdown of DHS OIG recommendations by topic, see figure 11. Some of these 103 DHS OIG recommendations addressed weaknesses concerning guidance, strategy, or procedures, as they relate to the 11 topics. Recommendations addressing guidance include, among other things, providing guidance to ports and field offices to ensure that employees resolve reports during the in-bond process, review bonded warehouses and foreign trade zones facilities, and use the correct targeting criteria for rail shipments. Recommendations addressing strategy and procedures include, among other things, establishing written procedures or conducting assessments for developing, changing, and using the Workload Staffing Model, an Excel spreadsheet-based model that CBP uses to identify staffing needs for CBP officers at ports of entry; and determining whether resources are appropriately allocated to ensure effective penalty case management at Fines, Penalties and Forfeitures field offices. Of the 103 DHS OIG recommendations pertaining to trade enforcement and addressed to DHS or CBP, 85 have been closed and 18 remain open (see table 10 for a list of DHS OIG reports with open recommendations and their status as of October 24, 2016). DHS OIG officials report that the agency concurred with these open and closed recommendations. The 18 open recommendations originate from five reports. In one particular report, the independent public accounting firm KPMG LLP (KPMG) recommended that CBP implement additional training at ports and/or additional oversight controls to ensure that risk assessments for bonded warehouses and foreign trade zones (FTZ) are consistently performed in accordance with required guidelines.
KPMG had found that CBP was unable to provide evidence of completing compliance reviews to support the assessed risk level for certain bonded warehouses and FTZ reviews, and that CBP improperly recorded the risk level of an FTZ based on the compliance review that was conducted. KPMG found that the problem was that CBP personnel did not consistently adhere to bonded warehouses' and FTZs' policies and procedures for completing compliance reviews. These conditions increased the risk that imported goods awaiting entry into commerce may not be secure, which could result in a loss of revenue. KPMG also recommended that CBP implement additional training at port locations on tracking in-bond entries for compliance. An in-bond entry allows for the movement of cargo through the United States without payment of duty or appraisement prior to entry into either domestic commerce or exportation to a foreign country, and CBP oversees in-bond compliance using the In-Bond Compliance Module. However, KPMG found that port personnel did not have a clear understanding of how to operate the compliance module, which, according to OIG-15-76, may result in missed opportunities for CBP to assess fines and penalties and collect the associated revenues.

This report examines (1) U.S. Customs and Border Protection's (CBP) structure for carrying out trade enforcement, (2) how CBP conducts trade enforcement across its high-risk issue areas and ensures that its enforcement activities are effective, and (3) the extent to which CBP meets its staffing needs for trade enforcement. We also provide information on audits related to CBP's trade enforcement and the implementation status of any recommendations made in such audits. To examine CBP's structure for carrying out trade enforcement, we reviewed organizational documents for CBP, specifically for the Office of Trade and the Office of Field Operations.
We also spoke with CBP officials in headquarters representing directorates and branches that have a trade enforcement component within CBP to better understand their role and organizational structure. We interviewed CBP officials in the field who carry out trade enforcement to learn about their roles and responsibilities. The trade enforcement staff we spoke to in the field include the following positions: Customs and Border Protection officer; entry specialist; import specialist; national account manager; drawback specialist; customs auditor; international trade specialist; agriculture specialist; paralegal specialist; Fines, Penalties and Forfeitures specialist; and seized property specialist. We also interviewed officials at all 10 Centers of Excellence and Expertise (Centers) and asked most Centers questions pertaining to the Center’s organizational structure and resources, the benefits of having Centers, and any challenges that they have faced since the Centers became operational. To examine how CBP conducts trade enforcement across its high-risk issue areas and ensures that its enforcement activities are effective, we reviewed CBP documents pertaining to trade enforcement and Priority Trade Issues, as well as planning documents. We interviewed CBP officials who set policy and conduct enforcement activities for Priority Trade Issues in headquarters as well as CBP officials in the field who carry out trade enforcement activities. Specifically, we met with officials representing units within the Office of Trade and the Office of Field Operations: National Targeting Center; Regulatory Audit; Fines, Penalties and Forfeitures Office; and Trade Operations Division. We also spoke with CBP officials representing every National Targeting and Analysis Group and the Commercial Targeting and Analysis Center to learn about their role in trade enforcement by Priority Trade Issue and to understand their coordination with other units within CBP. 
We visited CBP ports and field offices in Baltimore, Maryland; Los Angeles/Long Beach, California; and New York, New York to observe trade enforcement activities and interviewed CBP officials located at the sea and air ports. We visited an international mail facility at John F. Kennedy airport to learn about trade enforcement in the mail environment and observed cargo being inspected at a DHL facility at the Los Angeles airport. We selected the CBP ports and field offices to visit based on a number of factors, including the volume of imports coming through the ports, the number of relevant trade enforcement-related units at the port, the port environment (locations with cargo arriving by air, sea, and through international mail), and the proximity of the port to our location. We interviewed officials at all 10 Centers of Excellence and Expertise and asked questions pertaining to the Centers' coordination within and outside of CBP, strategy, and data collection. We did not visit and observe any land ports of entry because of limited time frames. However, during our interviews, CBP officials discussed trade enforcement processing and other activities that occur at the land borders. We spoke with Immigration and Customs Enforcement (ICE) Homeland Security Investigations (HSI) officials in Washington, D.C., and in New York to learn about ICE and HSI's role in trade enforcement as well as to understand how HSI coordinates with various CBP components. We requested copies of CBP's strategic or annual plans for each of the seven Priority Trade Issues. We initially received three plans, one of which was current and two of which were outdated but still being used by CBP. In addition, in February 2017 we received seven Priority Trade Issue plans, of which one was final and six were still in draft form. We reviewed and compared all plans we received against leading practices for managing results, specifically those focused on performance planning.
These leading practices noted that a plan should identify goals and measures covering each of its program activities and contain targets to assess progress toward performance goals. To examine the extent to which CBP meets its staffing needs for trade enforcement, we reviewed CBP documents and reports related to trade enforcement positions and staff levels, such as CBP’s Resource Optimization Model. We obtained information on trade enforcement staffing, budgeting, hiring process, and any challenges to hiring trade staff by interviewing officials from the Office of Trade’s Resource Management Division, the Office of Field Operations’ Human Capital and Budget offices, the Enterprise Services’ Finance division, and the Minneapolis Hiring Center. We also spoke to CBP officials in the field about the impact of staffing shortfalls and challenges to meeting optimal staffing levels. We requested copies of CBP staffing plans or strategies related to trade enforcement positions and discussed with CBP officials whether they had any hiring plans for trade positions. We reviewed leading practices in human capital management, which indicated that agencies, through strategic workforce planning, should address developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. We requested staffing data covering fiscal years 2012-2016 for the trade positions that carry out trade functions identified in CBP’s Resource Optimization Model. Nine of the 15 trade positions were mandated in the Homeland Security Act. To identify positions with staffing shortfalls, we compared actual staffing data for the nine mandated trade positions against the minimum staffing levels set in the Homeland Security Act. In addition, we compared actual staffing data for fiscal years 2015 and 2016 to the optimal staffing levels identified in CBP’s Resource Optimization Model. 
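The staffing comparison described above can be sketched as a short script. This is only an illustrative example of the logic (compare actual staffing levels against mandated minimums and model-derived optimal levels, and flag any shortfall), not CBP's or GAO's actual analysis; all position names and staffing figures are hypothetical placeholders.

```python
# Illustrative sketch of a staffing-gap comparison.
# Position names and figures are hypothetical placeholders; in practice the
# minimums would come from the Homeland Security Act and the optimal levels
# from CBP's Resource Optimization Model.

def staffing_gaps(actual, mandated_min, optimal):
    """Return, per position, the shortfall against the mandated minimum
    (if any) and against the optimal staffing level (if any)."""
    gaps = {}
    for position, level in actual.items():
        min_level = mandated_min.get(position)  # None for nonmandated positions
        opt_level = optimal.get(position)
        gaps[position] = {
            "below_mandate": max(min_level - level, 0) if min_level is not None else 0,
            "below_optimal": max(opt_level - level, 0) if opt_level is not None else 0,
        }
    return gaps

# Hypothetical fiscal-year snapshot.
actual = {"import specialist": 900, "customs auditor": 350, "drawback specialist": 40}
mandated_min = {"import specialist": 984, "customs auditor": 412}  # drawback specialist not mandated
optimal = {"import specialist": 1100, "customs auditor": 450, "drawback specialist": 45}

gaps = staffing_gaps(actual, mandated_min, optimal)
# e.g. gaps["import specialist"] == {"below_mandate": 84, "below_optimal": 200}
```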
Although the Homeland Security Act also required CBP to maintain minimum staffing levels for the associated support staff for the nine mandated trade positions, we did not assess CBP's ability to meet these staffing targets because CBP officials could not provide any information on these positions or the staffing levels in response to our request for data. To identify staffing shortfalls for the model's six nonmandated trade positions, we compared the actual staffing data CBP reported for these positions in October 2014 against optimal staffing levels identified in the Resource Optimization Model. While we asked for the percentage of time spent on trade functions performed by staff in the six nonmandated trade positions for fiscal years 2012-2016, CBP was able to report data only for the end of fiscal year 2014 because it does not track the actual staffing levels for these positions on an annual basis. We did not independently assess or validate the optimal staffing model's ranges. To assess the reliability of the actual staffing levels reported by CBP, as of October 2014, for the six nonmandated trade positions, we compared and corroborated staffing information provided in CBP reporting and spoke to CBP officials regarding the methodology used to determine the actual staffing levels for these positions. On the basis of the checks we performed, we determined these data to be sufficiently reliable for the purposes of indicating the staffing levels for the nonmandated positions. To assess the reliability of the staffing data for fiscal years 2012-2016 for the nine mandated trade positions, we compared and corroborated information provided by CBP with staffing information in the Congressional Budget Justifications for that time period and spoke to CBP officials regarding the processes they used to collect and verify the staffing data.
On the basis of the checks we performed, we determined these data to be sufficiently reliable for the purposes of comparing actual to mandated staffing levels. To provide information on audits related to CBP's trade enforcement and the implementation status of any recommendations made in such audits, we identified audits that (1) were published between fiscal years 2011 and 2016, (2) followed a professional auditing standard such as the generally accepted government auditing standards (GAGAS), (3) contained recommendations made to the Department of Homeland Security (DHS) or CBP, and (4) were related to trade enforcement. To identify these audits, we searched databases such as ProQuest, Lexis Market and Industry News, and the National Technical Information Service to include sources representing think tanks, academics, the trade industry, and government. We also searched the DHS Office of Inspector General's (DHS OIG) external website and GAO's internal database, the Engagement Results Phase. By systematically narrowing down the search results from the DHS OIG external website and GAO's Engagement Results Phase, we identified nine GAO audits and 15 DHS OIG reports that met our criteria. We corroborated our DHS report searches with the DHS Office of Inspector General. Based on our searches, we did not find that any other private or nonprofit entities or government agencies had published audits between fiscal years 2011 and 2016 that pertained to trade enforcement, adhered to a professional auditing or inspection standard, and contained recommendations addressed to CBP or DHS. We obtained the status of GAO audit recommendations from GAO's external website and GAO's Engagement Results Phase database, and the status of DHS OIG audit recommendations from DHS OIG officials.
For both GAO and OIG reports, we included only trade-relevant recommendations made to CBP or DHS—136 recommendations out of a total of 188 (33 out of 46 recommendations for GAO reports, 103 out of 142 recommendations for OIG reports). The sources of these audits and inspections varied; GAO and DHS OIG work may have been initiated under agency authority, congressional mandates, or congressional requests. Other DHS OIG audits were annually conducted independent auditors' reports on CBP's consolidated financial statements or were part of a series of audit, inspection, and special reports prepared as part of DHS OIG's oversight responsibilities to promote economy, efficiency, and effectiveness within the department. To analyze the content of the recommendations, we coded each recommendation made in GAO and OIG audits by a topic that reflected our reporting objectives, information in the Trade Facilitation and Trade Enforcement Act of 2015 (the Act) related to GAO's mandate, and/or recurring themes and pre-existing topics identified in the audit reports. We identified 13 topics in the recommendations made by the GAO and OIG audits. Recommendation topics covering both the GAO and OIG audits included agriculture as well as targeting and cargo examination. Recommendation topics unique to GAO audits included antidumping and countervailing duties and an "other" category that includes U.S.-Mexico border wait times. Recommendation topics unique to OIG audits included bonds, bonded warehouses and foreign trade zones, drawback, entry reports, in-bond program, obligations, staffing, and trade compliance measurement. To ensure the consistency and accuracy of coding the recommendations according to these topics, an independent verifier independently coded some recommendations, and a supervisor reviewed coding for other selected recommendations.
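The recommendation counts reported above lend themselves to a trivial consistency check. The sketch below uses only the figures stated in this appendix (33 GAO and 103 DHS OIG trade-related recommendations, with their closed/open splits); it is an illustrative cross-check, not the coding instrument described in the methodology.

```python
# Consistency check on the recommendation counts reported in this appendix.
# Counts are taken directly from the text; the per-topic coding is omitted.

recommendations = {
    "GAO": {"total": 33, "closed": 24, "open": 9},
    "DHS OIG": {"total": 103, "closed": 85, "open": 18},
}

def counts_consistent(recs):
    """True if closed + open equals the reported total for every source."""
    return all(r["closed"] + r["open"] == r["total"] for r in recs.values())

combined_total = sum(r["total"] for r in recommendations.values())  # 33 + 103 = 136
```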
We conducted the performance audit on which this report is based from May 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with CBP in May 2017 to prepare this public version of the original sensitive but unclassified report for public release. This public version was also prepared in accordance with these standards. The Office of Trade is composed of six directorates (see fig. 12). The Office of Field Operations is composed of seven directorates, each with a number of divisions (see fig. 13). Table 11 provides a status of U.S. Customs and Border Protection's (CBP) strategic or annual plans by Priority Trade Issue, as of February 2017. In addition to the contact named above, Christine Broderick (Assistant Director), Andrea Riba Miller (Analyst-in-Charge), Debbie Chung, Martin De Alteriis, Neil Doherty, Joyce Kang, Grace Lui, and Edith Yuh made significant contributions to this report.

In fiscal year 2015, CBP processed more than $2.4 trillion in imports through more than 300 ports of entry, collecting around $46 billion in revenue. CBP facilitates legitimate trade coming into the United States and enforces U.S. trade laws. CBP is tasked with collecting revenue and identifying harmful and noncompliant imports, such as counterfeit goods and goods that evade duties. In February 2016, Congress passed an Act that included a provision for GAO to review the effectiveness of CBP's trade enforcement activities.
In this report, GAO examines (1) CBP's structure for carrying out trade enforcement, (2) how CBP conducts trade enforcement across its high-risk issue areas and ensures that its enforcement activities are effective, and (3) the extent to which CBP meets its staffing needs for trade enforcement. GAO reviewed agency documents, interviewed agency officials, and conducted field work at ports in Baltimore, Maryland; Los Angeles/Long Beach, California; and New York, New York. GAO selected ports to visit based on factors including volume of imports and number of trade enforcement units at each port. Two offices within U.S. Customs and Border Protection (CBP) enforce U.S. trade laws and protect revenue. The Office of Trade develops policies to guide CBP's trade enforcement efforts, while the Office of Field Operations conducts a range of trade processing and enforcement activities at U.S. ports. CBP's previously port-centric approach to trade enforcement has shifted to a national-level, industry-focused approach with the establishment of the Office of Field Operations' 10 Centers of Excellence and Expertise. These Centers represent a shift in trade operations, centralizing the processing of certain imported goods on a national scale through a single Center rather than individual ports of entry. CBP conducts trade enforcement across seven high-risk issue areas using a risk-based approach, but its plans generally lack performance targets that would enable it to assess the effectiveness of its enforcement activities. Violations in the high-risk issue areas can cause significant revenue loss, harm the U.S. economy, or threaten the health and safety of the American people. CBP's trade enforcement activities reduce risk of noncompliance and focus efforts on high-risk imports, according to CBP. For example, CBP conducts targeting of goods, conducts audits and verifications of importers, seizes prohibited goods, collects duties, and assesses penalties.
However, CBP cannot assess the effectiveness of its activities without developing performance targets as suggested by leading practices for managing for results. Over the past 5 fiscal years, CBP generally has not met the minimum staffing levels set by Congress for four of nine positions that perform customs revenue functions, and it generally has not met the optimal staffing level targets identified by the agency for these positions. Staffing shortfalls can impact CBP's ability to enforce trade effectively, for example, by leading to reduced compliance audits and decreased cargo inspections, according to CBP officials. CBP cited several challenges to filling staffing gaps, including that hiring for trade positions is not an agency-wide priority. Contrary to leading practices in human capital management, CBP has not articulated how it plans to reach its staffing targets for trade positions over the long term, generally conducting its hiring on an ad hoc basis. This is a public version of a sensitive but unclassified report that GAO issued in April 2017. Information that CBP deemed sensitive has been redacted. To strengthen its trade enforcement efforts, CBP should (1) include performance targets in its plans covering high-risk issue areas, and (2) develop a long-term hiring plan specific to trade positions that articulates how it will reach its staffing targets. CBP concurred with both recommendations. |
Preference clauses have existed throughout the history of federal power legislation and have been directed to a variety of customers and regions of the nation. The Congress has mandated preference in the sale of electricity by federal agencies in a number of power-marketing and land reclamation statutes. The idea of establishing public priority or preference in the use of public water resources dates back to the 1800s, when the Congress decided to keep navigable inland waterways free from state taxes, duties, and the construction of private dams. The Reclamation Act of 1906, which is also referred to as the Town Sites and Power Development Act of 1906, is generally considered the federal government’s entry into the electric power field. The act grants preference in the disposition of surplus hydroelectric power from federal irrigation projects for “municipal purposes,” such as street lighting. As the availability and sources of electricity have changed over time, the types of preference clauses the Congress has included in legislation have evolved. For example, with the Federal Power Act of 1920, preference began to evolve from serving “municipal purposes” to serving particular classes of users, such as public bodies and cooperatives. The 1920 act required the federal government, when faced with breaking a tie between competing equal applications, to give preference to states and municipalities in awarding licenses for hydroelectric plants owned and operated by nonfederal entities. The act defined a municipality as a city, county, irrigation district, drainage district, or other political subdivision or agency of a state competent under law to develop, transmit, utilize, or distribute power. One primary benefit that the Congress sought in giving priority to public utilities and cooperatives, which distribute power directly to customers without a profit incentive, was to obtain lower electricity rates for consumers. 
At that time, competitive rate setting was not used to provide lower electricity rates for service from regulated monopolies with dedicated service territories. The Congress has also provided preference to specified regions of the nation. The notion of providing public bodies and cooperatives with preference for federal hydropower rests on the general philosophy that public resources belong to the nation and their benefits should be distributed directly to the public whenever possible. Under the various preference clauses, preference customers are given priority over nonpreference customers in the purchase of power. In many cases, the preference provisions of federal statutes give the electric cooperatives, many of which are rural, and public bodies priority in seeking to purchase federally produced and federally marketed power. However, the courts have held that preference customers do not have to be treated equally and that all potential preference customers do not have to receive an allotment of federal power. Preference provisions come into play only when a potential customer that does not have preference (such as an industrial user or a commercial power company) and a preference customer (such as a municipally owned utility or a rural electric cooperative) want to buy federal power and not enough is available for both. The Congress initially granted preference in the sale of federal electricity to public bodies and cooperatives for several reasons. First, it was a way to ensure that the benefits of this power were passed on to the public at the lowest possible cost, using cost-based rates, because the preference customers generally were entities that would not incorporate a profit in their rates. Second, it was also meant to extend the benefits of electricity to remote areas of the nation using publicly and cooperatively owned power systems. 
Additionally, the Congress gave preference to public bodies and cooperatives to prevent the monopolization of federal power by private interests. The rates charged by such nonprofit entities could then serve as a yardstick for comparison with the rates charged by public and private utilities. For example, the Boulder Canyon Project Act of 1928 encouraged public nonprofit distributors to begin marketing power by allowing them a reasonable amount of time to secure financing in order to construct generation and transmission facilities. According to the House Committee on Irrigation and Reclamation, one of the committees that drafted the 1928 act, the allocation of power rights between the preference and nonpreference customers was expected to create competition among various entities, ensuring reasonable rates and good service. These entities included states, political subdivisions, municipalities, domestic water-supply districts, and private companies. The committee viewed the preference clause as a bulwark against the monopolization of power by private companies. Another, more recent embodiment of the premise that public resources should be provided to the public without an effort to profit from their sale is the Hoover Power Plant Act of 1984. This act gives preference primarily to municipalities and others for power generated at the Hoover Dam. It also authorizes the renewal of a preference power contract with an investor-owned utility, originally entered into under the Boulder Canyon Project Act of 1928. At about the same time as the Congress was enacting the Rural Electrification Act of 1936 to encourage cooperatives and others to extend their electric systems into nearby rural areas, it enacted other statutes that affect how federally generated electricity is sold, especially to cooperatives. The Bonneville Project Act of 1937, along with the earlier (1933) Tennessee Valley Authority (TVA) Act, extended preference to include nonprofit cooperative organizations.
The acts also authorized the construction of federal transmission lines to carry the power, thus minimizing regional reliance on private power companies. The two laws established a statutory framework of energy allocation policies in an era of extensive federal hydroelectric development. The 1937 Bonneville Project Act authorized the construction of federal power lines in order to transmit the federal power as widely as practicable. The act states that preference was provided to public bodies and cooperatives to ensure that the hydropower projects were operated for the benefit of the general public, particularly domestic (residential) and rural customers. The preference clauses in the Bonneville and TVA acts were both viewed as yardsticks for evaluating the rates charged by private utilities. Preference for public entities and cooperatives is also found in the Reclamation Project Act of 1939 and the Flood Control Act of 1944. The Reclamation Project Act of 1939, which provides guidance for projects operated by the Bureau of Reclamation, gives preference to municipalities, other public corporations or agencies, and cooperatives and other nonprofit organizations. The Bureau is an agency within the Department of the Interior whose projects generate much of the electricity sold by Bonneville and Western. The 1939 act limited preference for cooperatives to those financed at least in part by loans made under the Rural Electrification Act of 1936, as amended. The Flood Control Act of 1944, which gives guidance for projects operated by the U.S. Army Corps of Engineers, gives preference to public bodies and cooperatives. The Corps’ projects generate electricity sold by all four PMAs. The act requires that electricity be sold to encourage the most widespread use of power at the lowest rates to consumers consistent with sound business practices. 
The federal government was authorized to construct or acquire transmission lines and related facilities to supply electricity to federal facilities, public bodies, cooperatives, and privately owned companies. The legislative history indicates that priority was given to public bodies and cooperatives to expand rural electrification and to avoid monopolistic domination by private utilities. Subsequent statutes, while building on preference provisions provided by other federal power marketing laws, granted regional, geographic preference. The Pacific Northwest Power Preference Act, enacted in 1964, authorizes Bonneville to sell outside its marketing area, the Pacific Northwest region, surplus federal hydropower if there is no current market in the region for the power at the rate established for its disposition in the Pacific Northwest. The 1980 Northwest Power Act requires Bonneville to provide power to meet all the contracted-for needs of its customers in the Northwest, extending the regional preference provisions of the 1964 act to include not only hydropower but also power from Bonneville’s and customers’ other resources—including coal-fired and nuclear plants. As a result of this regional preference, Bonneville’s customers in the Pacific Northwest—including private utility and direct service customers as well as public utilities—have priority over preference customers in the Pacific Southwest. The act also requires Bonneville to generally charge lower rates to preference customers than to nonpreference customers. Such rates are based upon the cost of the federal system resources used to supply electricity to those customers. In September 2000, the 1980 Northwest Power Act was amended to allow Bonneville to sell preference power to existing “joint operating entities” (public bodies or cooperatives formed by two or more public bodies or cooperatives that were Bonneville preference customers by Jan. 1, 1999). 
As indicated in the legislative history of the amendment, the new entities could pool their members’ or participating customers’ power purchases from Bonneville, which could result in operating efficiencies and reductions in overhead costs for them, without reducing Bonneville’s receipts from the sale of power. The Congress also granted regional preference in the sale of electricity from federal projects to other parts of the country, such as the Northeast, that are not served by the PMAs or TVA. The 1957 Niagara Redevelopment Act establishes (1) a division of all power from the project into preference and nonpreference power, (2) a preference for public bodies and cooperatives, with an emphasis on serving domestic and rural consumers, and (3) a geographic preference for preference customers in New York and in neighboring states. Other statutes give geographic preference to entire states or portions of states for purchases of electricity generated in those areas. For example, the 1928 Boulder Canyon Project Act gives preference to customers in Arizona, California, and Nevada for purchases of excess power from the Boulder Canyon Project. This preference language distinguishes among preference customers, giving the states (e.g., California) a priority over municipalities (e.g., Los Angeles). Although we found no instances in which the statutory preference provisions themselves were challenged, specific applications of these provisions by the PMAs have been challenged in the courts and in administrative proceedings. The cases have included disputes among preference customers and between preference and nonpreference customers of the various PMAs. In some instances, the courts have directed a PMA to provide power to preference customers, and in other instances, they have supported a PMA’s denial of power to such customers. 
General principles that may be drawn from the various court interpretations and rulings are that (1) PMAs must act in favor of customers specifically provided preference and priority in purchasing surplus power when nonpreference customers are competing for this power, (2) PMAs have discretion in deciding how and to which preference customers they will distribute electricity when the customers are in competition with each other for limited power, and (3) preference customers do not have to be treated equally, nor do individual preference customers have an entitlement to all or any of the power. The Federal Energy Regulatory Commission has affirmed the application of preference clauses in its rulings, as has the Attorney General in an opinion interpreting the preference provision of the 1944 Flood Control Act. A list of the court cases and administrative rulings we reviewed, with a brief description of each, is included in appendix I. The characteristics of the electricity industry on a national and regional basis have changed over time and continue to change. For example, the issues and problems of the 1930s, when rural America was largely without electricity and private utilities were not extensively regulated, were not those that confronted the Congress in later decades or that confront the Congress now. The issue of preference in power sales by the PMAs was of continuing interest during the 106th Congress. Not only was the Northwest Power Act amended in September 2000, but also a bill was introduced in the Senate in April 2000 to amend the Niagara Redevelopment Act. This bill would have eliminated the geographic preference allocating up to 20 percent of the power from the Niagara Power Project to states neighboring New York. The preference status of sales to selected cooperatives and public bodies in those states, however, would not have been affected. 
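The first of the general principles drawn from the court rulings (preference customers must be served before nonpreference customers when supply is short, yet no individual preference customer holds an entitlement, and the division among them is left to agency discretion) can be illustrated with a small sketch. This is a hypothetical model only, not any PMA's actual allocation procedure; the function name, data layout, and the pro-rata split among preference customers are all illustrative assumptions.

```python
# Illustrative sketch of the preference priority rule: preference
# customers are served before any nonpreference customer receives
# power. The pro-rata division among preference customers is an
# assumption for illustration -- in practice, allocation among them
# is discretionary and need not be equal.

def allocate(available_mw, requests):
    """requests: list of (customer, mw_requested, is_preference) tuples."""
    allocation = {}
    pref = [(c, mw) for c, mw, p in requests if p]
    nonpref = [(c, mw) for c, mw, p in requests if not p]

    pref_demand = sum(mw for _, mw in pref)
    if pref_demand <= available_mw:
        # Supply covers all preference demand; only the remainder
        # may be sold to nonpreference customers.
        for c, mw in pref:
            allocation[c] = mw
        remaining = available_mw - pref_demand
        for c, mw in nonpref:
            grant = min(mw, remaining)
            allocation[c] = grant
            remaining -= grant
    else:
        # Short supply: preference customers compete with each other
        # (here, pro rata); nonpreference customers receive nothing.
        for c, mw in pref:
            allocation[c] = available_mw * mw / pref_demand
        for c, _ in nonpref:
            allocation[c] = 0.0
    return allocation
```

For example, with 100 MW available and two preference customers each requesting 60 MW alongside a nonpreference customer requesting 40 MW, the preference customers split the full 100 MW and the nonpreference customer receives nothing; with ample supply, the remainder after preference demand is met may flow to nonpreference customers.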
In September 2000, a bill was introduced in the House of Representatives to eliminate all future sales of preference power; its provisions would have taken effect only as each existing power sale contract expired. In October 2000, another bill was introduced in the House of Representatives to authorize investor-owned electric utilities in California to purchase power directly from Bonneville at specified rates. The 106th Congress adjourned, however, without taking further action on these bills. As of January 30, 2001, no bills directly relating to preference power had been introduced in the 107th Congress, according to DOE officials. Over the last 20 years, competition has been replacing regulation in major sectors of the U.S. economy. New legislation at the federal and state levels and technological changes have created a climate for change in traditional electricity markets. The extent to which the federal government should participate in fostering retail competition has yet to be decided. Over the last several years, the Congress has deliberated on the restructuring of the electricity industry. As the Congress continues these deliberations, it is considering redefining existing federal roles, as well as how to more efficiently and equitably produce and distribute electricity to all customers. The way that the federal government generates, transmits, and markets federal preference power has not changed in the same manner as the industry surrounding it. In a March 1998 report, we noted that the Congress has options that, if adopted, would affect preference customers. Considering changes to the preference provisions would be consistent with the spirit of several of our testimonies before various Senate and House committees. Examining the legacy of existing federal programs in light of changing conditions can yield important benefits. 
At these hearings, we discussed the need to reexamine many federal programs in light of changing conditions and to redefine the beneficiaries of these programs, if necessary. In our testimony, we noted that as the restructuring of the electricity industry proceeds, the Congress has an opportunity to consider how the existing federal system of generating, transmitting, and marketing electricity is managed, including the role of preference in federal power sales. We provided DOE with copies of a draft of this report. We met with officials of DOE’s Bonneville Power Administration and DOE’s Power Marketing Liaison Office, which is responsible for the other three PMAs. The PMAs generally agreed with the information in our draft report. They also observed that the previous administration did not support the repeal of the “preference clause” as part of the restructuring of the electricity industry. That administration did not incorporate such provisions in its bill to restructure the industry because it believed that federal restructuring legislation should be designed to ensure that consumers in all states benefit and that those in certain parts of the nation not be adversely affected. They also stated that, consistent with applicable statutes and current contracts, they have continually evaluated their roles and policies in light of changes occurring in the electric utility industry. They agreed with us that the Congress has the latitude to reconsider all laws containing both customer and geographic preference in federal electricity sales. To examine the evolution of preference in the PMAs’ marketing, we reviewed statutes, federal court cases, rulings by the Federal Energy Regulatory Commission, and an Attorney General’s opinion on federally mandated preference in electricity licensing or sales by federal facilities. As requested, we performed detailed reviews of legislative histories for nine of these statutes. 
We also reviewed past GAO reports, testimonies, and other products that relate to preference in the PMAs’ electricity sales. We interviewed the staffs of the PMA liaison offices in Washington, D.C., as well as the General Counsels of each of the four PMAs. We reviewed various other preference-related documents, including relevant law review articles, issue briefs from trade associations, and the PMAs’ marketing plans. We performed our review from October 1999 through January 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 5 days after the date of this letter. At that time, we will send copies to appropriate House and Senate Committees and Subcommittees; interested Members of the Congress; Steve Wright, Acting Administrator and Chief Executive Officer, Bonneville Power Administration; Charles A. Borchardt, Administrator, Southeastern Power Administration; Michael A. Deihl, Administrator, Southwestern Power Administration; and Michael S. Hacskaylo, Administrator, Western Area Power Administration. We will also make copies available to others on request. If you or your staff have any questions or need additional information, please contact me or Peg Reese at (202) 512-3841. Key contributors to this report were Charles Hessler, Martha Vawter, Doreen Feldman, and Susan Irwin. Grants preference to certain classes of public users to surplus reclamation water from public lands. Act of April 16, 1906 (Reclamation Act of 1906 or Town Sites and Power Development Act of 1906) Establishes the first precedent for a municipality's preference to surplus hydropower generated at federal irrigation projects. Provides for the disposition of hydroelectric power from irrigation projects and requires the Secretary of the Interior to give a preference to power sales for municipal purposes. 
Provides the city and county of San Francisco with a right-of-way over public lands for the construction of aqueducts, tunnels, and canals for a waterway, power plants, and power lines for the use of San Francisco and other municipalities and water districts. Prohibits grantees of the right to develop and sell water and electric power from selling or leasing those rights to any corporation or individual other than another municipality, municipal water district, or irrigation district. Federal Water Power Act (1920) (Federal Power Act) Requires FERC (formerly the Federal Power Commission) to give preference to states and municipalities in issuing licenses for hydropower projects operated by nonfederal entities, if the competing applications are equally well adapted for water development. (Preference criteria in this act may be used in disposing of power to preference customers under the Boulder Canyon Act of 1928.) Gives preference to municipal purposes for surplus power from the Salt River Project in Arizona. Act of December 21, 1928 (Boulder Canyon Project Act) Requires the Secretary of the Interior to give preference within the policy of the Federal Water Power Act, i.e., to states and municipalities, when selling power from the project. Gives the states of Arizona, California, and Nevada initial priority over other preference customers. Requires TVA to give preference in power sales to states, counties, municipalities, and cooperative organizations of citizens or farmers that are organized or doing business not for profit but primarily for the purpose of supplying electricity to their own citizens or members. Authorizes TVA to construct its own transmission lines to serve farms and small villages not otherwise supplied with reasonably priced electricity and to acquire existing electric facilities used to provide power directly to these customers. 
Authorizes loans for rural electrification and grants preferences to states; municipalities; utility districts; and cooperative, nonprofit, or limited-dividend associations. Requires BPA to give preference and priority to public bodies (nonfederal government agencies) and cooperatives. Allows the people within economic transmission distance of the Bonneville project (Washington, Oregon, Idaho, and Montana) a reasonable amount of time to create public or cooperative agencies so as to qualify for the public power preference and secure financing. Act of May 18, 1938 (Fort Peck Project Act) Requires the Bureau of Reclamation to give preference and priority to public bodies and cooperatives. Reclamation Project Act of 1939 [53 Stat. 1187, 1194, 43 U.S.C. 485, 485h(c)] Requires the government, when selling surplus power from its reclamation projects, to give preference to municipalities and other government agencies, and to cooperatives and other nonprofit organizations financed in whole or in part by loans from the Rural Electrification Administration. Authorizes water projects in the Great Plains and arid and semiarid areas of the nation. Gives preference in sales or leases of surplus power to municipalities and other public corporations or agencies; and to cooperatives and other nonprofit organizations financed in whole or in part by loans under the Rural Electrification Act of 1936. Act of June 5, 1944 (Hungry Horse Dam Act) Authorizes the construction of the Hungry Horse Dam in western Montana for uses primarily in the state of Montana. This “Montana Reservation” has been interpreted as a geographic preference requiring a calculated quantity of power (221 average megawatts) from Hungry Horse Dam to be offered first for sale in Montana to preference and nonpreference customers before the calculated amount of power is offered to other BPA customers, including preference customers in other states. 
Act of December 22, 1944 (Flood Control Act of 1944) Gives preference to public bodies and cooperatives for power generated at Corps of Engineers projects and authorizes transmission to federal facilities and those owned by public entities, cooperatives, and private companies. Act of March 2, 1945 (Rivers and Harbors Act of 1945) Provides for the distribution of power from the Snake River Dams and the Umatilla Dam in accordance with the preference provisions of the Bonneville Project Act. Act of July 31, 1950 (Eklutna Act) Gave preference to public bodies and cooperatives and to federal agencies in sales of power from the Eklutna project near Anchorage, Alaska. Provides for the sale or lease of power from the Palisades Dam in southeastern Idaho to bodies entitled to preference under federal reclamation laws. Requires the Secretary of the Interior to give preference in the sale of power generated at the Falcon Dam on the Texas/Mexico border to public bodies and cooperatives. Authorizes BPA to purchase power generated at the Priest Rapids Dam in Washington. Requires BPA to sell the power according to the preference provisions applicable to other sales of BPA power. Provides for preference to public bodies and cooperatives in the sale of power from the Department of Energy's nuclear production facilities; also provides preference to private utilities serving high-cost areas not serviced by public bodies and cooperatives. Act of August 12, 1955 (Trinity River Division Act) Reserves 25 percent of the power from the Trinity power plants for preference customers in Trinity County, California. Act of April 11, 1956 (Colorado River Storage Project Act) Provides for the sale of power from the Colorado River Storage Project and participating projects to bodies entitled to preference under reclamation laws. 
Niagara Redevelopment Act (1957) Sets out preference and allocation provisions required to be included in FERC's license to the state of New York for the sale of power generated from the Niagara River. Contains several allocation mechanisms: (1) a division of all project power into preference and nonpreference power, (2) a preference clause for public bodies and cooperatives, particularly for the benefit of domestic and rural customers, (3) a provision that preference power sold initially to private utilities is subject to withdrawal to meet the needs of preference customers, (4) a geographic preference (80 percent of the preference power is reserved for New York preference customers and up to 20 percent for neighboring states), and (5) an allocation of a specific amount of power to an individual nonpreference customer for resale to specific industries. Provides that a reasonable amount of power, up to 50 percent, from dams subsequently constructed by the Corps of Engineers on the Missouri River, shall be reserved for preference customers within the state in which each dam is located. Atomic Energy Commission Authorization Act (1962) Authorizes the sale of by-product energy from the Hanford New Production Reactor to purchasers agreeing to offer 50 percent of the electricity generated to private organizations and 50 percent to public organizations. (DOE has terminated the operation of this reactor.) Requires “first preference” for customers in Tuolumne and Calaveras Counties in California for 25 percent of the additional power generated by the New Melones project. Required preference for federal agencies, public bodies, and cooperatives in power sales from the Snettisham project near Juneau, Alaska. Requires the Secretary of the Interior to give preference in the sale of power generated at Amistad Dam on the Texas/Mexico border to federal facilities, public bodies, cooperatives, and privately owned companies. 
Authorizes the sale outside the Pacific Northwest of federal hydroelectric power for which there is no current market in the region or that cannot be conserved for use in the region. Provides that sales outside the Pacific Northwest are subject to termination of power deliveries if a BPA customer in the Pacific Northwest needs the power. Grants reciprocal protection with respect to energy generated at, and the peaking capacity of, federal hydroelectric plants in the Pacific Southwest, or any other marketing area, for use in the Pacific Northwest. Explicitly provides that the Hungry Horse Dam Act's geographical preference for power users in Montana is not modified by this act. Authorizes the purchase of nonfederal thermal power for the Central Arizona irrigation project. Authorizes, subject to the preference provisions of the Reclamation Project Act, the disposal of power purchased, but not yet needed, for the project. Explicitly retains the preference provisions of the Bonneville Project Act of 1937 and other federal power marketing laws. Requires BPA to provide power to meet all the contracted-for needs of its customers in the Northwest. As a result of this regional preference, BPA's public as well as private utility and direct service industry customers in the Pacific Northwest have priority over preference customers in the Pacific Southwest. Requires BPA to charge lower rates to preference customers than to nonpreference customers. Also requires BPA to offer initial 20-year power sale contracts to specific nonpreference as well as preference customers throughout the Pacific Northwest: (1) publicly owned utilities, (2) federal agencies, (3) privately owned utilities, and (4) directly served industrial customers. Gives preference power to municipalities, an investor-owned utility, and others for power generated at the Hoover Power Plant. Amends the Federal Power Act to provide that preference does not apply to relicensing. 
(Retains preference for original licenses.) For a 10-year period, reserves power that becomes available because of military base closures for sale to preference entities in California that are served by the Central Valley Project and that agree to use such power for economic development on bases closed or selected for closure under the act. Authorizes BPA to sell excess power outside the Pacific Northwest on a firm basis for a contract term not to exceed 7 years, if the power is first offered to public bodies, cooperatives, investor-owned utilities, and direct service industrial customers identified in the Northwest Power Act. Amends the Northwest Power Act of 1980 to allow BPA to sell preference power to joint operating entities' members who were customers of BPA on or before January 1, 1999. Arizona Power Pooling Association v. Morton, 527 F.2d 721, (9th Cir. 1975), cert. denied, 425 U.S. 911 (1976) The court applied the Reclamation Project Act of 1939's preference clause to governmental sales of thermally generated electric power from the Central Arizona Project. The court held that under the act's preference clause, the Secretary of the Interior must give preference customers an opportunity to purchase excess power before offering it to a private customer. The court also held that preference customers do not have entitlement to federal power. Arizona Power Authority v. Morton, 549 F.2d 1231 (9th Cir. 1977), cert. denied, 434 U.S. 835 (1977) The court held that the implementation of geographic preferences in the allocation of federal hydroelectric power under the Colorado River Storage Project Act in a manner that discriminated among preference customers was within the discretion of the Secretary of the Interior and not reviewable by the court. City of Santa Clara v. Andrus, 572 F.2d 660 (9th Cir.), cert. denied, 439 U.S. 
859 (1978) The court held that the Secretary of the Interior could not sell federally marketed power to a private utility, even on a provisional basis, while denying power to a preference customer. Only if the available supply of power exceeds the demands of interested preference customers may power be sold to private entities. Preference means that preference customers are given priority over nonpreference customers in the purchase of power. However, preference customers do not have to be treated equally, nor do all potential preference customers have to receive an allotment. City of Anaheim v. Kleppe, 590 F.2d 285 (9th Cir. 1978); City of Anaheim v. Duncan, 658 F.2d 1326 (9th Cir. 1981) The court held that the preference clause of the Reclamation Project Act of 1939 was not violated by the sale of federal power to private utilities on an interim basis when preference customers lacked transmission capacity to accept such power within a reasonable time and did not offer to buy power when it was originally sold. As a result, there was no competing offer between a preference and a nonpreference customer. Aluminum Company of America v. Central Lincoln Peoples' Utility District, 467 U.S. 380 (1984), rev'g Central Lincoln Peoples' Utility District v. Johnson, 686 F.2d 708 (9th Cir. 1982) The Supreme Court held that terms of contracts, which the Pacific Northwest Electric Power Planning and Conservation Act required BPA to offer to certain nonpreference customers, did not conflict with the applicable preference provisions. The preference provisions determine the priority of different customers when there are competing applications for power that can be allocated administratively. Here, however, the contracts in question were not part of an administrative allocation of preference power, and the power covered by the initial contracts was allocated directly by the statute. 
Since BPA was not authorized to administratively allocate this power, there could be no competing applications for the power, and the preference provisions did not apply to the transactions. ElectriCities of North Carolina, Inc. v. Southeastern Power Administration, 774 F.2d 1262 (4th Cir. 1985) A challenge to SEPA's 1981 allocation policy for the Georgia-Alabama power system, changing the location and list of preference customers, was denied. The court held that the allocation of preference power is discretionary and that the preference provision of the Flood Control Act is too vague to provide a standard for the court to apply to SEPA's actions. Greenwood Utilities Commission v. Hodel, 764 F.2d 1459 (11th Cir. 1985), aff'g Greenwood Utilities Commission v. Schlesinger, 515 F. Supp. 653 (M.D. Ga. 1981) A challenge to sales of capacity without energy to investor-owned utilities was denied. The court held that the Flood Control Act's preference provision did not establish an entitlement to power or standards for eligibility for power. The statute is too vague to permit judicial review of sales and allocation decisions. Arvin-Edison Water Storage District v. Hodel, 610 F. Supp. 1206 (D.D.C. 1985) Irrigation districts' claim to an allocation of power ahead of other preference customers (super preference) for WAPA power was denied. The preference clause of the Reclamation Project Act does not provide a superpreference for irrigators; it only provides that public entities be given preference over private entities. The clause does not require that all preference customers be treated equally or that they even receive an allocation. The allocation decision is within an agency's discretion and cannot be reviewed by the court. Brazos Electric Power Cooperative, Inc. v. Southwestern Power Administration, 828 F.2d 1083 (5th Cir. 
1987) The court upheld the dismissal of a challenge by an electric cooperative to an exchange arrangement between a SWPA customer and an investor-owned utility. The investor-owned utility's arrangement with preference customers does not violate the preference provision of the Flood Control Act. Even though the investor-owned utility receives some economic benefits, this is not a sham sale of preference power. ElectriCities of North Carolina, Inc. v. Southeastern Power Administration, 621 F. Supp. 358 (W.D.N.C. 1985) SEPA's decision to create two divisions and sell some power to nonpreference customers in its Western Division while excluding preference customers in its Eastern Division is not subject to challenge by those excluded, who have no right or entitlement to allocations of SEPA power. Salt Lake City v. Western Area Power Administration, 926 F.2d 974 (10th Cir. 1991) The court held that WAPA reasonably interpreted the preference provisions of the Reclamation Project Act of 1939 in determining that preference applied only to municipalities that operated their own utility systems, and not to every city or town that fit the act's definition of “municipality.” Municipal Electric Utilities Association of the State of New York v. Power Authority of the State of New York (PASNY), 21 FERC ¶ 61,021 (Oct. 13, 1982); PASNY v. FERC, 743 F.2d 93 (2d Cir. 1984) FERC held that in the Niagara Redevelopment Act, the Congress defined the term “public bodies” as those governmental bodies that resell and distribute power to the people as consumers. The appellate court affirmed that preference rights under the act accrue to public bodies and nonprofit cooperatives that are engaged in the actual distribution of power. In determining the ultimate retail distribution of the power sold to them, public entities could resell the power to industrial and commercial users, not just to domestic and rural customers. 
The court also described “yardstick competition,” a theory that underlies preference. The court stated that the Congress, while concerned with meeting the needs of rural and domestic consumers, believed that all interests could best be served by giving municipal entities the right to decide on the ultimate retail distribution of the preference power sold to them. This belief was founded on the so-called “yardstick competition” principle, which assumes that if the municipal entities are supplied with cheap hydropower, their lower competitive rates will force the private utilities in turn to reduce their rates, with resulting benefits to all, including rural and domestic consumers. Massachusetts Municipal Wholesale Electric Company v. PASNY, 30 FERC ¶ 61,323 (Mar. 27, 1985) FERC reaffirmed parts of an earlier decision interpreting the Niagara Redevelopment Act as providing allocations of preference power for states neighboring New York and clarified which states were included. FERC held that any public body or nonprofit cooperative in a state neighboring New York within economic transmission distance of the Niagara Power Project is entitled to an allocation of preference power. FERC also held that only publicly owned entities that are capable of selling and distributing power directly to retail consumers are public bodies entitled to preference under the act. Disposition of Surplus Power Generated At Clark Hill Reservoir Project, 41 Op. Atty Gen. 
236 (1955) The Attorney General construed section 5 of the Flood Control Act of 1944, providing preference to public bodies and cooperatives, to mean that if there are two competing offers to purchase federal power, one by a preference customer and the other by a nonpreference customer, and the former does not have at the time the physical means to take and distribute the power, the Secretary of the Interior must contract with the preference customer on condition that within a reasonable time fixed by the Secretary, the customer will obtain the means for taking and distributing the power. If within that period the preference customer does not do so, the Secretary is authorized to contract with the nonpreference customer, subject to the condition that should the preference customer subsequently obtain the means to take and distribute the power, the Secretary will be enabled to deal with the preference customer. The Secretary's duty to provide preference power is not satisfied by the disposition of the power to a nonpreference customer under an arrangement whereby the nonpreference customer obligates itself to sell an equivalent amount of power to preference customers. Affected PMA(s) Water Conservation and Utilization Act (1940) Colorado River Storage Project Act (1956) Pacific Northwest Power Preference Act (1964) Pacific Northwest Electric Power Planning and Conservation Act (1980) The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. 
| Congress has enacted many statutes that designate types of customers or geographic areas for preference and priority in purchasing electricity from federal agencies. In general, the preference has been intended to (1) direct the benefits of public resources--relatively inexpensive hydropower--to portions of the public through nonprofit entities, (2) spread the benefits of federally generated hydropower widely and encourage the development of rural areas, (3) prevent private interests from exerting control over the full development of electric power on public lands, and (4) provide a yardstick against which the rates of investor-owned utilities can be measured. The applications of various preference provisions have been challenged several times in the courts, which have directed a power marketing administration (PMA) to provide power to preference customers. In other instances, they have supported the denial of power to such customers. The characteristics of the electricity industry have changed. During the last 20 years, competition has been replacing regulation in major sectors of the U.S. economy. Several proposals have come before Congress to restructure the electrical industry, including some that would encourage the states to allow retail customers a choice in selecting their electricity supplier. As it debates these proposals, Congress has continued to consider the role of preference in the PMA's sale of electricity. |
The Financial Recordkeeping and Currency and Foreign Transactions Reporting Act, commonly referred to as the Bank Secrecy Act, passed by Congress in 1970, requires that financial institutions file certain currency and monetary instrument reports and maintain certain records for possible use in criminal, tax, and regulatory proceedings. As a result, the BSA helps to provide a paper trail of the activities of money launderers for law enforcement officials in pursuit of criminal activities. Congress has amended the BSA a number of times to increase the effectiveness of the regulators’ efforts. For example, the initial BSA reporting system did not include provisions for separate money laundering charges against those who had not satisfied reporting requirements. Thus, Congress enacted the Money Laundering Control Act of 1986, which made money laundering a criminal offense separate from any BSA reporting violations. This act created criminal liability for individuals or entities that conduct monetary transactions knowing that the proceeds involved were obtained from unlawful activity and made it a criminal offense to knowingly structure transactions to avoid BSA reporting. The 1986 act also directed the regulators (1) to issue regulations that require the financial institutions subject to their respective jurisdiction “to establish and maintain procedures reasonably designed to assure and monitor the compliance of such institutions;” (2) to review such procedures during the course of each examination of such financial institutions; (3) to issue cease and desist orders to ensure compliance with the requirements; and (4) to assess civil money penalties for failure to maintain such compliance procedures. In 1992, Congress increased the penalties for institutions and their employees who violate the BSA and authorized the regulators to take additional supervisory actions for such violations. 
More specifically, the Annunzio-Wylie Anti-Money Laundering Act authorized the federal banking regulators to revoke an institution’s charter if it was convicted of money laundering and, in certain circumstances, to issue removal and prohibition orders against individuals charged with BSA offenses. As authorized by this act, in 1996, Treasury issued a rule requiring that banks and other depository institutions use a Suspicious Activity Report (SAR) form to report activities involving possible money laundering. Institutions file these forms with the Financial Crimes Enforcement Network (FinCEN) at Treasury. Congress amended the BSA again in 1994, with the Money Laundering Suppression Act, to require that financial regulators develop enhanced examination procedures and training to improve identification of money-laundering schemes at financial institutions under their supervision. Accordingly, the federal banking regulators adopted a core set of examination procedures to determine whether an institution has the necessary system of internal controls, policies, procedures, and auditing standards to assure compliance with the BSA and implementing regulations. The procedures also require examiners to review an institution’s internal audit function, procedures, selected workpapers, records, reports, and responses. Based on the results, examiners may conclude the examination or continue with expanded procedures, which might include transaction testing and review of related documentation. This act also directed the Secretary of the Treasury to delegate to appropriate federal banking regulatory agencies the authority to assess civil penalties for BSA violations. In May 1994, the Secretary delegated this authority to FinCEN but, to date, this delegation has not been made to the banking regulators. In October 2001, Congress again amended the BSA through passage of the USA PATRIOT Act, specifically through Title III of this act.
The passage of the USA PATRIOT Act was prompted, in part, by the September 11, 2001, terrorist attacks in Washington, D.C., and New York City, which in turn enhanced awareness of the importance of combating terrorist financing through the U.S. government’s AML efforts. Title III expanded the scope of the BSA to include organizations not previously covered, such as securities brokers, insurance companies, and credit card system operators. Among Title III’s provisions are requirements that financial institutions covered by the act:
- establish and maintain AML programs;
- identify and verify the identity of customers who open accounts;
- exercise due diligence and, in some cases, enhanced due diligence with respect to all private banking and correspondent accounts;
- conduct enhanced scrutiny with respect to accounts maintained by or on behalf of foreign political figures or their families; and
- share information relating to money laundering and terrorism with law enforcement authorities, regulatory authorities, and financial institutions.
Title III also added activities that can be prosecuted as money laundering crimes and increased penalties for activities that were money laundering crimes prior to enactment of the USA PATRIOT Act. Examination procedures of the federal banking regulators are expected to conform to PATRIOT Act amendments to the BSA and regulations issued by the Treasury. In the last few years and as recently as last month, the federal banking regulators and the courts have taken actions against a number of depository institutions for significant BSA violations. In addition to deficiencies at the institutions themselves, issues raised in these cases included the timeliness of the identification of BSA violations and enforcement actions taken by the regulators. To illustrate, I will discuss three different cases at three different types of depository institutions.
In the first case, a bank was charged with BSA violations of suspicious activity report requirements and received a deferred prosecution. In 2000, the U.S. Department of Justice (Justice) charged Banco Popular de Puerto Rico, a bank subsidiary of a diversified financial services company serving Puerto Rico, the United States, and Latin America, with failing to file SARs in a timely and complete manner—in violation of the BSA. According to Justice, from 1995 through 1998, an individual, who was later convicted of money laundering offenses, deposited approximately $21.6 million in cash into an account at Banco Popular. Justice indicated that a number of branch employees were aware of the suspicious activity, but that the bank failed to investigate the account for over 2 years from the date the account was opened, and also did not report the suspicious activity to FinCEN until 1998 as required by the BSA. Although the Federal Reserve Bank of New York (FRBNY) conducted four examinations of Banco Popular from 1995 through 1998, the examinations, based on procedures used at the time, did not contain any criticism of the bank’s BSA compliance policies or procedures. In 1999, 4 years after the individual first began laundering an undetermined amount of money through Banco Popular, FRBNY expanded the scope of the bank’s regularly scheduled safety and soundness examination as a result of information it received from a U.S. Customs Service drug investigation. Based on AML compliance problems identified during the examination, FRBNY developed a supervisory strategy that led to a written agreement containing numerous remedial actions. Banco Popular also entered into a deferred prosecution agreement with Justice, FinCEN, and the Federal Reserve; and agreed to a civil money penalty of over $20 million. In another instance, FinCEN assessed penalties against a credit union for currency transaction reporting violations. 
In January 2000, FinCEN assessed civil money penalties of $185,000 against the Polish and Slavic Federal Credit Union, located in Brooklyn, New York, for willful failure to file Currency Transaction Reports (CTR) and improperly granting an exemption from CTR filings in violation of the BSA. FinCEN determined that between 1989 and 1997, the Polish and Slavic Federal Credit Union willfully failed to file numerous CTRs for currency transactions in amounts greater than $10,000. FinCEN also reported that the credit union, through the actions of its former management and board of directors, improperly exempted one customer from CTR filings. The customer, the former chairman of the credit union’s board of directors and owner of a travel agency and money remitter business, did not qualify for the CTR filing exemption, according to FinCEN. The remitter made over 1,000 currency deposits in excess of $10,000 but no CTRs were filed. FinCEN further reported that the credit union, through its former general manager and former board, failed to establish and maintain (1) an adequate level of internal controls for BSA compliance, (2) an effective BSA compliance program, (3) BSA training for credit union employees, and (4) an effective internal audit function. NCUA, the regulator of the Polish and Slavic Federal Credit Union, took a series of enforcement actions against the credit union beginning in January 1997 to compel compliance with the BSA. However, FinCEN’s report also indicates that NCUA’s enforcement actions began about 8 years after the violations began. In April 1999, NCUA removed the credit union’s board of directors and imposed a conservatorship based on the credit union’s failure to establish adequate internal controls, including controls for BSA compliance. Last month, OCC and FinCEN assessed a $25 million civil money penalty against Riggs Bank, N.A. 
for numerous BSA violations, including failure to maintain an effective BSA compliance program and to monitor and report transactions involving millions of dollars by the embassies of Saudi Arabia and Equatorial Guinea in Washington, D.C. Since 1987, OCC has required each bank under its supervision to establish and maintain an AML compliance program and specified four elements that banks were required to satisfy. However, FinCEN reported that Riggs was deficient in all four elements required by the AML regulation. FinCEN found that Riggs willfully violated the suspicious activity and currency transaction reporting requirements and the AML program requirements of the BSA. Specifically, Riggs failed to establish and maintain an effective BSA compliance program because it did not provide (1) an adequate system of internal controls to ensure ongoing BSA compliance, (2) an adequate system of independent testing for BSA compliance, (3) effective training for monitoring and detecting suspicious activity, and (4) effective monitoring of BSA compliance by the BSA officer. In July 2003, OCC entered into a consent order with Riggs, which directed Riggs to, among other things, correct AML internal control deficiencies; OCC also referred the Riggs case to FinCEN. According to a Riggs filing with the Securities and Exchange Commission, in April 2004, OCC classified Riggs as being in a “troubled condition” for failing to fully comply with the July 2003 consent order. Due to additional BSA violations by Riggs National Corporation (the bank’s holding company), in May 2004, OCC and the Federal Reserve, respectively, issued a supplemental consent order and a cease and desist order, requiring additional corrective actions. OCC and FinCEN cited the corporation for deficiencies in risk management and internal controls.
Although OCC deemed Riggs to be systemically deficient in 2003 and the bank entered into a consent order with OCC, Riggs was not in full compliance with the consent order in 2004 and was subsequently assessed the penalty. In addition to the three cases discussed above, published reports of BSA violations at other banks have increased concerns about bank noncompliance with the BSA and timely oversight and enforcement by the federal banking regulators. For example, in 2003, the Department of Homeland Security’s Bureau of Immigration and Customs Enforcement (ICE) reported that the Delta National Bank & Trust Company pled guilty in U.S. District Court to charges that it failed to file a SAR in connection with a transaction made in 2000 between two accounts at the bank. As part of the plea agreement with the government, the bank agreed to forfeit $950,000. In 2002, Broadway National Bank pled guilty to three felony charges for failing to report suspicious banking activity in the 1990s, according to ICE. The prosecutors determined that more than $120 million was illegally moved through the bank. The bank was fined $4 million. Recent Treasury and FDIC IG reports assessing the regulators’ examination work and enforcement activities have raised questions about potential gaps in the consistency and timeliness of the regulators’ monitoring and follow-up on BSA violations. The Treasury’s IG issued a report in 2003 on BSA violations at depository institutions and has a number of related audits in its fiscal year 2004 work plan. In September 2003, the Treasury IG issued a report on its review of OTS enforcement actions taken against thrifts with substantive BSA violations. Among its findings, the report stated that examiners found substantive BSA violations at 180 of the 986 thrifts examined from January 2000 through October 2002. 
OTS had issued written enforcement actions against 11 of the 180 thrifts; however, in 5 of these actions, the IG reported that enforcement actions did not address all substantive violations found, were not timely, or were ineffective in correcting the thrifts’ BSA violations. The IG further reported that among 68 sampled cases, OTS relied on moral suasion and thrift management assurances to comply with the BSA. In 47 cases (69 percent), thrift management took the corrective actions, but in the other 21 cases (31 percent), thrift management was nonresponsive. BSA compliance worsened at some of the 21 thrifts, according to the IG. The IG made several recommendations including that OTS assess the need for additional clarification or guidance for examiners on when to initiate stronger supervisory action for substantive BSA violations and time frames for expecting corrective actions from thrifts. OTS concurred and stated that supplemental examiner guidance would be provided for the first quarter of 2004. The IG’s fiscal year 2004 annual plan lists several related audit projects including an assessment of OTS’ BSA examinations, including the new requirements under the USA PATRIOT Act. I am pleased to be on a panel with the FDIC Inspector General and would like to highlight some of his office’s work to illustrate issues recently raised regarding BSA examinations and enforcement. For example, in March 2001, the IG reported on its review of the FDIC Division of Supervision and Consumer Protection assessment of financial institutions’ compliance with the BSA. Among the IG’s findings were that FDIC did not adequately document its BSA examinations work; as a result, the IG was unable to determine the extent to which examiners reviewed regulated institutions’ compliance with the BSA during safety and soundness examinations. 
The IG made several recommendations, including that FDIC reemphasize to examiners and ensure that they follow (1) specific guidance related to the documentation requirements of scoping decisions, procedures, and conclusions reached during the pre-examination process when risk-focusing BSA examinations; and (2) policy and instructions on how to adequately document BSA examination decision factors and procedures. With regard to both recommendations, FDIC stated it would reemphasize its existing policies and guidance, specifically those policies requiring examiner responses to all of the BSA core decision factors at each examination. FDIC also stated that it had made revisions to its BSA examination module. In September 2003, the IG reported on its audit of FDIC’s implementation of examination procedures to address financial institutions’ compliance with provisions of Title III of the USA PATRIOT Act. The IG concluded that FDIC’s existing BSA examination procedures covered the AML subject areas required by the act to some degree and that its Division of Supervision and Consumer Protection had advised FDIC-regulated institutions of the new requirements. However, the IG reported that, for a number of reasons, the division had not issued guidance to its examiners on the act’s provisions that required new or revised examination procedures. One of the report’s recommendations was that the division issue interim examination procedures for those sections of the USA PATRIOT Act for which Treasury had issued final rules. The division agreed with the recommendation. In March 2004, the IG issued a report on its work to determine whether the FDIC adequately followed up on BSA violations reported in examinations of FDIC-supervised financial institutions to ensure that they take appropriate corrective action.
Among the IG’s findings was that, in some cases, BSA violations were repeatedly identified in multiple examination reports before bank management took corrective action or FDIC took regulatory action to address the repeat violations. The IG concluded that FDIC needs to strengthen its follow-up processes for BSA violations and recommended that FDIC’s Division of Supervision and Consumer Protection (1) reevaluate and update examination guidance to strengthen monitoring and follow-up processes for BSA violations and (2) review its implementation process for referring violations to Treasury. The IG noted that FDIC has initiatives underway to reassess and update its policies and procedures. Although it did not concur with all of the IG’s findings, in its response, FDIC concurred with the recommendations. In recent years, we have done work addressing money laundering issues within the context of different activities and financial institutions such as securities broker-dealers, Russian entities, and private banking. We have also reviewed FinCEN’s regulatory role. In 1998, we issued two reports regarding FinCEN’s role in administering the BSA. In both of these reports, we discussed the Secretary of the Treasury’s mandate to delegate the authority to assess civil penalties for BSA violations to federal banking regulatory agencies and noted that this delegation had not been made. One purpose of this work was to update information on civil penalties for BSA violations. We reported that one of the issues under discussion at the time was whether violations would be enforced under BSA provisions or under the banking regulators’ general examination powers granted by Title 12 of the U.S. Code. At that time, FinCEN officials told us that they were concerned that the banking regulators might be less inclined to assess BSA penalties and instead use their non-BSA authorities under their own statutes. 
Also in 1998, we reported on the activities of Raul Salinas, the brother of the former President of Mexico. Mr. Salinas was allegedly involved in laundering money from Mexico, through Citibank, to accounts in Citibank affiliates in Switzerland and the United Kingdom. We determined that Mr. Salinas was able to transfer $90-$100 million between 1992 and 1994 by using a private banking relationship structured through Citibank New York in 1992 and effectively disguise the funds’ source and destination, thus breaking the funds’ paper trail. The funds were transferred through Citibank Mexico and Citibank New York to private banking investment accounts at Citibank London and Citibank Switzerland. In October 2000, we reported on our work on suspicious banking activity indicating possible money laundering conducted by certain corporations that had been formed in the state of Delaware for unknown foreign individuals or entities. We first identified an agent that, together with a related company, created corporations for Russian brokers and established bank accounts for those corporations. We also reviewed SARs filed by three banks concerning transactions by corporations formed by this agent for Russian brokers. We then determined that from 1991 through early 2000, more than $1.4 billion in wire transfer transactions was deposited into over 230 accounts opened at two U.S. banks—Citibank and Commercial Bank. More than half of these funds were wired from foreign countries into accounts at Citibank and over 70 percent of the Citibank deposits for these accounts were wire-transferred to accounts in foreign countries. Further, both of these banks had violated BSA requirements regarding customer identification. We concluded that these transfers raised concerns that the U.S. banking system may have been used to launder money. In 2001, we issued a report on changes in BSA examination coverage for certain securities broker-dealers.
At the time, there was no requirement that all broker-dealers file SARs; however, broker-dealer subsidiaries of depository institutions and their holding companies were required to file SARs and were examined by banking regulators for compliance. We determined that with the passage of the 1999 Gramm-Leach-Bliley Act, these broker-dealers were no longer being examined to assess their compliance with SAR requirements, although they were being examined for compliance with currency transaction reporting and other requirements Treasury had specifically placed on broker-dealers. However, with the passage of the USA PATRIOT Act and the issuance of a final rule that became effective on July 31, 2002, all broker-dealers were required to report such activity.

In December 2003, the Chairman and Ranking Member of this Committee requested that we conduct a review of the regulators’ BSA examination procedures and enforcement actions. In requesting this work, you cited the Treasury and FDIC IG work that I discussed above. Among the major questions you raised were:
- How do the regulators design, target, and conduct BSA compliance examinations, including for the added provisions of the USA PATRIOT Act?
- How many BSA violations have federal banking regulators identified and taken action on over a several-year period?
- What consequences do the regulators’ risk-focused examinations have for identification and enforcement of BSA violations?
- What differences, if any, are there between enforcement of the BSA through the regulators’ general safety and soundness authorities and enforcement of the BSA under the terms of the BSA itself?
- Are BSA violations consistently interpreted among the regulators, Treasury, and depository institutions?
- How do BSA violations come to the attention of the regulators, and what other agencies are involved in resolving the violations?
- What is the relationship between Treasury and the banking regulators in shaping examination policy and subsequent enforcement actions?
- Do the regulators have adequate resources for conducting BSA compliance examinations, including the BSA provisions of the USA PATRIOT Act?

We have begun doing this work for the Committee. In general, the major objectives of our review are to determine:
1. How do the regulators’ risk-focused examinations of depository institutions assess BSA and AML program compliance?
2. To what extent do the banking regulators identify BSA and AML program violations and take supervisory actions for such violations?
3. How consistent are BSA examination procedures and interpretation of BSA violations across the banking regulators?
4. What resources do the federal banking regulators have for conducting examinations of BSA and PATRIOT Act compliance?

As part of our review, and considering the IGs’ findings, we are examining the relevant BSA amendments and banking statutes, regulations, and policies that address the authorities under which the regulators and Treasury take supervisory action for BSA violations and violations of their AML program rules. We are reviewing current examination guidance and procedures that the regulators use for determining compliance with the BSA, and related requirements used during their regular and targeted examinations. We will also try to ascertain the implications of “risk-focused” examinations for BSA compliance and to determine whether and to what extent the regulators curtail such compliance reviews in their examinations. We are reviewing the reliability of the data systems used by banking regulators to track bank examinations, including BSA compliance examinations.
We plan to obtain information on the bank examinations performed by each banking regulator over the past 4 years and then select a random sample to determine whether and the extent to which a BSA review was conducted or curtailed and the bases for these decisions. We also are obtaining information from the banking regulators on the number of BSA examinations done over the past 4 years and the number and nature of violations they identified. We plan to select and analyze samples of their BSA examinations and supporting workpapers to secure, in part, information on violations identified and the areas of operation covered during the examinations. Additionally, we plan to track supervisory actions taken by the regulators to correct the violations they identified. Our analyses in this area will include assessing the regulators’ examination procedures for BSA and AML compliance and the nature of violations and corresponding supervisory actions. We will also review the examinations in our sample to determine the extent to which the examinations reviewed policies and procedures and then tested transactions to see if the policies and procedures were implemented appropriately. We will also determine the extent to which banking regulators vary in the way they conduct their BSA examinations, cite banks for violations, and take enforcement actions. Key legal issues we will be examining are the ramifications, if any, of the lack of delegation of authority to assess BSA penalties by Treasury to the federal banking regulators, as mandated by statute in 1994. We will examine enforcement of the BSA through the regulators’ general safety and soundness authority and enforcement under the terms of the BSA itself to see whether there are differences, including circumstances under which the regulators make referrals to Treasury and law enforcement agencies. 
In addition, we will meet with government officials at the federal and state levels and from the banking and credit union industries to gain their perspectives on the risk-focused BSA examination process and post-examination follow-up activities. We have finished our initial meetings with the federal banking regulators and with officials at the Departments of Homeland Security, Justice, and Treasury, including FinCEN. We will have follow-on meetings with them as well as with state banking supervisors and representatives from depository institutions of various sizes to gain their views on the consistency of examiner interpretation of potential BSA-related deficiencies and the regulators’ BSA examination procedures, and their own internal control activities. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. For questions concerning this testimony, please call Davi M. D’Agostino at (202) 512-8678. Other key contributors to this statement were M’Baye Diagne, Toni Gillich, Barbara Keller, Kay Kuhlman, and Elizabeth Olivarez. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The U.S. government's framework for preventing, detecting, and prosecuting money laundering has been expanding through additional pieces of legislation since its inception in 1970 with the Bank Secrecy Act (BSA). The purpose of the BSA is to prevent financial institutions from being used as intermediaries for the transfer or deposit of money derived from criminal activity and to provide a paper trail for law enforcement agencies in their investigations of possible money laundering.
The most recent changes arose in October 2001 with the passage of the USA PATRIOT Act, which, among other things, extends anti-money laundering (AML) requirements to other financial service providers previously not covered under the BSA. GAO was asked to testify on its previous work and the ongoing work it is doing for the Senate Committee on Banking, Housing, and Urban Affairs on the depository institution regulators' BSA examination and enforcement process. In recent years, GAO has issued a number of reports dealing with regulatory oversight of anti-money laundering activities of financial institutions. In 1998, GAO issued a report regarding Treasury's Financial Crimes Enforcement Network's (FinCEN) role in administering the BSA, which updated information on civil penalties for BSA violations. One focus was the Secretary of the Treasury's 1994 mandate to delegate the authority to assess civil money penalties for BSA violations to federal banking regulatory agencies. GAO noted that this delegation had not been made and said that FinCEN was concerned that bank regulators may be less inclined to assess BSA penalties and may prefer to use their non-BSA authorities under their own statutes. Also in 1998, GAO reported on the activities of Raul Salinas, the brother of the former President of Mexico. Mr. Salinas was allegedly involved in laundering money from Mexico, through Citibank, to accounts in Citibank affiliates in Switzerland and the United Kingdom. GAO determined that Mr. Salinas was able to transfer $90-$100 million between 1992 and 1994 by using a private banking relationship structured through Citibank New York in 1992 and effectively disguise the funds' source and destination, thus breaking the funds' paper trail. In 2001, GAO issued a report on changes in BSA examination coverage for certain securities broker-dealers.
At the time, there was no requirement that all broker-dealers file Suspicious Activity Reports (SARs); however, broker-dealer subsidiaries of depository institutions and their holding companies were required to file SARs and were examined by banking regulators for compliance. GAO determined that with the passage of the 1999 Gramm-Leach-Bliley Act, these broker-dealers were no longer being examined to assess their compliance with SAR requirements. However, with the passage of the USA PATRIOT Act and the issuance of a final rule that was effective on July 31, 2002, all broker-dealers were required to report such activity. GAO is currently studying the depository institution regulators’ BSA examination and enforcement process for the Senate Committee on Banking, Housing, and Urban Affairs. The objectives include determining how the regulators’ risk-focused examinations assess BSA compliance, the extent to which the regulators identify BSA and AML violations and take supervisory actions, and the consistency of BSA compliance examination procedures and interpretation of violations across regulators. GAO plans to determine whether and to what extent regulators curtailed BSA compliance examinations and the bases for these decisions. GAO plans to track supervisory actions taken to correct violations identified. GAO will also examine the ramifications, if any, of the lack of delegation of authority to assess BSA compliance penalties by Treasury to the banking regulators, as mandated by statute. GAO will meet with government and industry officials to gain their perspective on the BSA compliance examination process.
In 1996, bonds, AIP, and passenger facility charges provided about $6.6 billion of the $7 billion in airport funding. State grants and airport revenue contributed the remaining funding for airports. Table 1 lists these sources of funding and their amounts in 1996. The amount and type of funding vary considerably by the type of airport. The nation’s 71 largest (large and medium hub) airports, which accounted for almost 90 percent of all passenger traffic, had more than $5.5 billion in funding in 1996, while the 3,233 other national system airports had about $1.5 billion. As shown in figure 1, large and medium hub airports rely most heavily on airport bonds, which account for roughly 62 percent of their total funding. By contrast, the other 3,233 smaller national system airports obtained just 14 percent of their funding from bonds. For these smaller airports, AIP funding constitutes a much larger portion of their overall funding—about half. Airports’ planned capital development over the next 5 years may total as much as $10 billion per year, or $3 billion more per year than their 1996 funding. Figure 2 compares airports’ total capital development funding in 1996 with their annual planned development over the next 5 years. Funding for 1996 is shown by source. Planned spending for future years is shown by the relative priority of the projects, as follows: FAA’s highest priorities (shown as reconstruction and mandates) total $1.4 billion per year and are for projects to meet safety, security, and environmental requirements, including noise mitigation, and for projects that maintain the existing infrastructure (reconstruction). Other high-priority projects—primarily, those adding capacity—add another $1.4 billion per year. Other projects of a relatively lower priority—such as those bringing airports up to FAA’s design standards—add another $3.3 billion per year, for a total of $6.1 billion per year. 
Finally, airports anticipate another $3.9 billion per year in projects that are not eligible for AIP—such as those expanding commercial space in terminals and constructing parking garages. Although a sizable difference may exist in total, 1996 funding and planned future development match much more closely when the comparison is restricted to AIP funding and planned spending on FAA’s highest-priority projects (reconstruction and mandates). In the aggregate, the $1.372 billion in AIP funding in 1996 roughly equates to the $1.414 billion in estimated development planned for the highest-priority projects. However, because about one-third of AIP funds are awarded to airports on the basis of the number of passengers enplaned and not necessarily on the basis of the project’s priority, the full amount of AIP funds may not be going to the highest-priority projects. In percentage terms, the difference between current funding and planned development is larger for smaller airports than for larger airports. Current funding at the 3,233 small nonhub, other commercial service, and general aviation airports is a little over half of the estimated cost of their planned development, thus producing a difference of about $1.4 billion. (See fig. 3.) The difference might actually be even greater if it were not for $250 million in special facility bonding for a single cargo/general aviation airport. For this group of airports, the $782 million in 1996 AIP funding surpasses the annual estimate of $750 million for reconstruction, noise, and federally mandated projects. As a portion of total funding, the potential funding difference for the 71 large and medium hub airports is comparatively less than it is for their smaller counterparts. (See fig. 4.)
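The per-year figures above can be cross-checked with simple arithmetic. The sketch below (a minimal illustration, not GAO’s methodology; the category names are shorthand added here) totals planned annual development by priority and compares it with 1996 funding:

```python
# Minimal sketch (not GAO's methodology): totaling airports' planned annual
# capital development by FAA priority, using the per-year figures cited in
# the testimony. All amounts are in billions of dollars; the category names
# are shorthand added here.
PLANNED = {
    "reconstruction and mandates": 1.4,         # FAA's highest priorities
    "other high priority (capacity)": 1.4,
    "lower priority (design standards)": 3.3,
    "AIP-ineligible (terminals, parking)": 3.9,
}
CURRENT_FUNDING_1996 = 7.0  # total 1996 airport funding, in billions

aip_eligible = sum(v for k, v in PLANNED.items() if "ineligible" not in k)
total_planned = sum(PLANNED.values())
gap = total_planned - CURRENT_FUNDING_1996

print(round(aip_eligible, 1))   # 6.1
print(round(total_planned, 1))  # 10.0
print(round(gap, 1))            # 3.0
```

The totals reproduce the figures in the text: $6.1 billion per year in AIP-eligible development, $10 billion per year in all, and a $3 billion gap relative to 1996 funding.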
However, because total expenditures for capital projects are so much greater for these airports, this potential dollar shortfall is $1.5 billion, or $87 million greater than other airports’ collective shortfall. Figure 4 also indicates that $590 million in AIP funding falls $74 million short of the estimated cost to meet FAA’s highest-priority development—meeting federal mandates and maintaining the current infrastructure. Evaluating the various proposals to provide additional funding for airport development involves the consideration of the trade-offs among the various funding types as well as the potential effect that each proposal would have on airports. Initiatives to increase funding for airport development include increasing AIP funding, raising the ceiling on PFCs, and other less conventional steps, such as FAA’s innovative finance and privatization pilot programs. In addition, we examined the potential benefits of state-administered revolving funds. Choosing to increase one source of airport funding instead of another involves making trade-offs because the current funding sources differ in several key characteristics. For example, increasing AIP funding increases the extent to which the government can specify the recipient, the project, and the amount of funds that will be awarded. However, because grant programs in general are relatively costly to administer, increasing funding in this manner would increase administrative costs more than some other funding mechanisms. Conversely, increasing PFCs reduces the extent to which the government or airlines can specify how funds are used. Finally, compelling airports to raise more funding through the bond markets limits governmental control over investments. The funding mechanisms also differ with respect to who bears the cost of airport financing. These differences affect the extent to which beneficiaries pay in proportion to the benefits they receive. 
For example, grants are funded through AIP, which is, in turn, funded primarily by the ticket tax. Thus, users pay for grants to airports. In contrast, part of the cost of tax-exempt bonds is borne by nonusers of airports because the interest earned by bondholders is exempt from federal income taxation. As a result, more of the cost of bond financing is borne by nonusers of airports than in the case of grants. However, it is uncertain whether using bonds to increase funding would improve or worsen the overall efficiency and equity of airport financing because nonusers may benefit from the local economy stimulated by airport development. Increasing total AIP funding would proportionately help smaller airports more than large and medium hub airports under the existing distribution formula. Increasing the level of AIP under the existing distribution formula appears to provide a slightly increasing share of AIP funds to the smaller airports and a concomitant decrease for the larger airports. AIP funding for fiscal year 1998 stands at $1.7 billion; large and medium hub airports get nearly 40 percent of this amount, and all other airports get about 60 percent. We calculated how this percentage split would be affected at funding levels of $2 billion and $2.347 billion. The National Civil Aviation Review Commission and the Air Transport Association (ATA), the commercial airline trade association, have recommended that future AIP funding levels be stabilized at a minimum of $2 billion annually. The level of $2.347 billion, which is the maximum amount authorized for fiscal year 1998, is supported by the airport trade groups—American Association of Airport Executives and Airports Council International-North America. Table 2 shows the results. Under existing funding formulas, the proportion of AIP funds going to smaller airports would rise. 
While the ATA has recommended a minimum $2 billion funding level for AIP, it has also recommended redefining airport categories and the AIP distribution formulas. ATA proposes that national system airports be grouped into four categories and that a specified portion of AIP funds be distributed to airports in each category. Under ATA’s proposal, a slightly higher portion of a $2 billion AIP would go to the larger airports and a slightly smaller portion to the smaller airports than under current categories and formulas. Increasing PFC-based funding would mainly help larger airports. Large and medium hub airports accounted for nearly 90 percent of all passengers in 1996 and are more likely to have an approved PFC in place. As of January 1, 1998, 264 commercial service airports—almost half of all such airports—imposed a PFC, but nearly three-quarters of the large and medium hub airports have a PFC. Finally, while the PFC program requires large and medium hub airports that impose a PFC to forgo a portion of their AIP funding so that these funds can be redirected to smaller airports, most of these larger airports are already returning their maximum amount, according to FAA officials, and, therefore, the amount returned would not appreciably increase if the PFC ceiling were raised or eliminated. If the airports currently charging PFCs were to increase them to $4, $5, or $6 per passenger instead of the current $3 limit, total collections would increase from the current $1.1 billion to $1.5 billion, $1.9 billion, and $2.2 billion, respectively, on the basis of 1996 enplanements and collection rates. The bulk of the increased collections would accrue to large and medium hub airports. Furthermore, if all 540 commercial service airports were to impose a PFC, collections could climb to as much as $2.9 billion, but again, most of this would accrue to large and medium airports.
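A back-of-the-envelope check helps put these collection estimates in context. The sketch below assumes collections scale roughly in proportion to the PFC ceiling; this linear scaling is an assumption, not GAO’s method, since the figures in the text were derived from actual 1996 enplanements and collection rates:

```python
# Back-of-the-envelope check (an assumed linear scaling, not GAO's actual
# estimation method): if the $1.1 billion collected under the current $3 PFC
# cap scaled in proportion to the cap, collections at higher caps would be:
BASE_CAP = 3.0          # current PFC ceiling, in dollars per passenger
BASE_COLLECTIONS = 1.1  # 1996 collections under the $3 cap, in billions

def scaled_collections(cap: float) -> float:
    """Estimate annual PFC collections (billions) at a higher per-passenger cap."""
    return BASE_COLLECTIONS * cap / BASE_CAP

for cap in (4, 5, 6):
    print(f"${cap} cap -> about ${scaled_collections(cap):.1f} billion")
```

Linear scaling reproduces the reported $1.5 billion and $2.2 billion at the $4 and $6 caps and gives about $1.8 billion at $5, slightly below the reported $1.9 billion, which is consistent with GAO’s estimates reflecting actual enplanement and collection patterns rather than a simple proportion.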
Increased PFC funding is likely to be applied differently than increased AIP funding. According to airport groups, airports require more PFC funding to reduce congestion at airports, especially for passengers trying to access the airport and moving through the terminal. For some airports, roadside and terminal congestion may be more severe than that on the airfield and harder to finance, according to airport groups, because airlines are not as supportive of nonairfield projects and because these projects are ineligible for or are a low priority for AIP funding. As a result, a majority of PFCs are dedicated to terminal and airport access projects and interest payments on debt. The outcome of two FAA experiments, while still uncertain, is not likely to be far reaching owing to the limited participation of airports. In recent years, FAA, with congressional urging and direction, has sought to expand airports’ available capital funding through more innovative methods, including more flexible application of AIP funding and attracting more private capital. The 1996 Federal Aviation Reauthorization Act authorized FAA to test three innovative uses for AIP funding—(1) permitting greater percentages of local matching for AIP funding, (2) paying interest costs on debt, and (3) purchasing bond insurance—for up to 10 projects. In addition, another innovative mechanism—using AIP funding to help fund state airport revolving funds—is not currently permitted but may hold some promise. Finally, the 1996 act authorized a pilot to test the benefits of airport privatization. Thus far, FAA has received 30 applications and approved 5 projects totaling $15.36 million for its innovative finance pilot. All five projects test the first innovative use of AIP funding—allowing local contributions in excess of standard grant match amounts, which for most airports and projects is otherwise fixed at 10 percent. 
FAA and state aviation representatives generally support the concept of flexible matching because it means that projects that otherwise might not get under way because of a lack of FAA funding can get started sooner; in addition, flexible funding may ultimately increase funding to airports. Applicants, however, have shown less interest in the other two options, which according to FAA and investment banking officials, do not offer new or substantial benefits for airports. Another innovative concept, not currently permitted, would be to use AIP funding to help capitalize states’ revolving loan funds. Currently, FAA cannot use AIP funds to capitalize a state’s loan fund because AIP construction grants can go only to a designated airport and project. However, some federal transportation, state aviation, and airport bond rating and underwriting officials believe that state revolving loan funds would help smaller airports obtain additional financing. State revolving loan funds have been successfully employed to finance other types of infrastructure projects, such as waste water projects and, more recently, drinking water and surface transportation projects. While loan funds can be structured in various ways, basically they use federal and state moneys to capitalize the fund, from which loans are then made. Interest and principal payments are recycled to provide additional loans. Once established, a loan fund can expand by issuing bonds using the fund’s capital and loan portfolio as collateral. These revolving funds do not create any contingent liability for the U.S. government because they would be under state control. Declining airport grants and broader government privatization efforts spurred interest in airport privatization as another innovative means to bring more capital to airport development, but thus far, efforts have shown only limited results. 
As we previously reported, the sale or lease of airports in the United States faces many hurdles, including legal and economic constraints. As a way to test privatization’s potential, the Congress directed FAA to establish a limited pilot program under which some of these constraints would be eased. Starting December 1, 1997, FAA began accepting applications from airports to participate in the pilot program on a first-come, first-served basis for up to five airports. Thus far, two airports have applied to be part of the program. In summary, Mr. Chairman, I would like to reiterate a point that bears on whether the federal government should take action to increase or reallocate funding for airports. We believe the difference between the $10 billion in planned development and the $7 billion in current funding for airports is not as important as the disparity between larger and smaller airports’ capacity to finance their development. As we have said, current funding for the 71 large and medium hub airports is more than three-fourths of their planned development. For the other 3,233 smaller national system airports, however, current funding is only about half of their planned development and even less for some categories of these airports. Moreover, these smaller airports have more limited access to bond financing and, therefore, mostly rely on federal and state grants. The Airport Improvement Program is a more significant source of funding for smaller airports than for larger ones. Therefore, a decision to increase PFCs to help finance the development of larger airports, by itself, does little to correct the imbalance between the financial capacity of larger and smaller airports. Such a move would need to be coupled with reallocating AIP funding in favor of smaller airports as well as considering other measures designed to help smaller airports, such as funding for state revolving funds. Mr. Chairman, this concludes our prepared statement. 
We would be happy to respond to any questions that you or the members of the Subcommittee may have.

GAO discussed airport funding issues, focusing on: (1) how much airports are spending on capital development and where the money is coming from; (2) whether current funding levels will be sufficient to meet airports’ planned development; and (3) what effect various proposals to increase airport funding will have on airports’ ability to fulfill capital development plans.
GAO noted that: (1) in 1996, the 3,304 airports that make up the national airport system obtained about $7 billion for capital development; (2) more than 90 percent of this funding came from three sources: (a) airport and special facility bonds; (b) the Airport Improvement Program (AIP); and (c) passenger facility charges paid on each airline ticket; (3) the magnitude and type of funding varies with each airport's size; (4) the nation's 71 largest airports accounted for nearly 80 percent of this funding; (5) as a group, these airports received only about 10 percent of their funding from AIP; (6) by contrast, the remaining 3,233 smaller airports that complete the national system rely on AIP for half of their funding; (7) airports planned as much as $10 billion per year in development for the years 1997 through 2001, or $3 billion per year more than they spent in 1996; (8) about $1.4 billion per year of that development is planned for safety, security, environmental, and reconstruction projects--the Federal Aviation Administration's highest priorities; (9) another $1.4 billion per year of that development is planned for other high-priority projects, primarily adding airport capacity; (10) other projects of a relatively lower priority, such as bringing airports up to FAA's design standards, add another $3.3 billion per year; (11) airports anticipate another $3.9 billion per year for projects that are not eligible for funding from AIP, such as expanding commercial space in terminals and constructing parking garages; (12) the difference between current funding and planned development is especially acute for smaller commercial and general aviation airports; (13) their 1996 funding would cover only about half of their total planned development; (14) several proposals to increase airport funding have emerged in recent years; (15) these include increasing the amount of funding for AIP, raising or eliminating the ceiling on passenger facility charges, and better leveraging 
of existing funding sources; and (16) these proposals vary in the degree to which they help specific types of airports.
The Army Guard is the oldest component of any of the uniformed services. It traces its roots to the colonial militia and claims a “birth” of 1636. Today, the Army Guard exists in 54 locations that include all 50 states, the District of Columbia, and three territories: Guam, the Virgin Islands, and Puerto Rico. There are Army Guard facilities in more than 2,800 communities and over 350,000 Army Guard members. During peacetime, each Army Guard unit reports to the adjutant general of its state or territory, or in the case of the District of Columbia, to the Commanding General. Each adjutant general reports to the governor of the state or territory, or in the case of the District of Columbia, to the mayor. At the state level, the governors have the ability, under the Constitution of the United States, to call up members of the Army Guard in times of domestic emergency or need. The Army Guard’s state mission is perhaps the most visible and well known. Army Guard units battle fires or help communities deal with floods, tornadoes, hurricanes, snowstorms, or other emergency situations. In times of civil unrest, the citizens of a state rely on the Army Guard to respond, if needed. During national emergencies, the President has the authority to activate the Army Guard, putting them in federal duty status. When ordered to federal active duty by the President in accordance with the provisions of Title 10, United States Code, the units answer to the Combatant Commander of the theatre in which they are operating and, ultimately, to the President. When called to perform duty in accordance with the provisions of Title 32, United States Code, units answer to the adjutant generals and ultimately to the governors. When Army Guard units are performing duty under Title 10 or Title 32, the federal government provides funds for reimbursement of authorized travel expenses. The Army Guard is a partner with the active Army and the Army Reserve in fulfilling the country’s military needs. 
The National Guard Bureau (NGB), which assists the Army Guard in the partnership, is a joint bureau of the Departments of the Army and the Air Force and is charged with overseeing the federal functions of the Army Guard and the Air National Guard (Air Guard). In this capacity, NGB helps the Army Guard and the Air Guard procure funding and administer policies. NGB also acts as a liaison between the Departments of the Army and Air Force and the states. All Army forces are integrated under DOD’s “total force” concept. DOD’s total force concept is based on the premise that it is not feasible to maintain active duty forces sufficient to meet all possible war contingencies. Consequently, DOD’s active and reserve components are to be blended into a cohesive total force to meet a given mission. DOD reported that over 186,500 Army Guard soldiers and 111,800 Army Reserve soldiers were mobilized from September 14, 2001, through September 30, 2004, for Operations Noble Eagle, Enduring Freedom, and Iraqi Freedom. As of September 30, 2004, Army Guard soldiers accounted for over 40 percent of the total reserve components mobilized in response to the terrorist attacks on September 11, 2001. The federal missions established in response to the September 2001 national emergency were categorized into three operations: Operation Enduring Freedom, Operation Noble Eagle, and Operation Iraqi Freedom. In general, missions to fight terrorism and direct combat outside the United States were categorized under Operation Enduring Freedom and Operation Iraqi Freedom, while missions to provide domestic defense were categorized as Operation Noble Eagle. For example, Army Guard soldiers participated in antiterrorist and direct combat activities in Afghanistan and Iraq under Operation Enduring Freedom and Operation Iraqi Freedom, respectively. Additionally, in support of Operation Enduring Freedom, Army Guard soldiers provided enhanced security in other countries. U.S.
homeland security missions, such as guarding the Pentagon, airports, nuclear power plants, domestic water supplies, bridges, tunnels, and other military assets, were conducted under Operation Noble Eagle. Army Guard soldiers called to active service are entitled to be reimbursed for authorized travel expenses incurred. DOD provides a soldier traveling on official business with transportation, lodging, and food, or reimburses the soldier for reasonable and necessary authorized expenses if the soldier purchases them. In October 2001, the Army issued personnel policy guidance (PPG) for Operation Noble Eagle. In September 2002, consolidated PPG was issued covering both Operations Noble Eagle and Enduring Freedom. This guidance, which is revised on an ongoing basis, ultimately was expanded to include Operation Iraqi Freedom and now applies generally to all active service personnel who are mobilized and/or deployed in support of contingency operations. The PPG guidance covers topics ranging from general mobilization guidance to specific travel entitlements. The two primary sources of guidance used by both Army Guard soldiers and travel computation office personnel for information on travel entitlements were the Army’s PPG and DOD’s Joint Federal Travel Regulation (JFTR). The term per diem allowance refers to a daily payment instead of reimbursement for actual expenses for (1) lodging, (2) meals, and (3) related incidental expenses. There are many factors that go into the per diem authorization and calculation, including the availability of government quarters and meal facilities. Generally, soldiers mobilized for Operations Noble Eagle, Enduring Freedom, and Iraqi Freedom must use government meal facilities to the maximum extent practicable when they are sent to government installations with dining facilities. 
Because they incur no actual expenses while living in government housing and eating in government facilities, they are not authorized the meal and lodging components of the per diem allowance. However, they are entitled to receive the incidental component of per diem. The daily government incidental expense allowance for fiscal year 2004 was $3.00 within the continental United States (CONUS) and $3.50 outside the continental United States (OCONUS). When the installation commander determines that government lodging and/or mess facilities are not available, the PPG directs that Army Guard soldiers be provided with SNAs to authorize the lodging and/or meal components of per diem in addition to the incidental expense component of per diem. DOD regulations further provide that when government lodging and mess facilities are generally available, but an authorizing official determines that soldiers must occasionally miss meals due to mission requirements, proportional per diem is authorized. Table 1 shows the various components of CONUS and OCONUS per diem and the fiscal year 2004 range of dollar amounts an Army Guard soldier may be entitled to receive under the PPG and the JFTR. Additionally, the PPG provides that regardless of whether an Army Guard soldier is authorized the meal component of per diem, basic allowance for subsistence (BAS) will not be reduced. BAS is included in the Army Guard soldier’s compensation and is not a travel entitlement. More specifically, BAS is a continuation of the military tradition of providing room and board (or rations) as part of a service member’s pay. The monthly BAS rate is based on the price of food and is readjusted yearly based upon the increase of the price of food as measured by the Department of Agriculture’s food cost index. As of January 2004, BAS ranged from $175.23 a month for officers to $262.50 a month for enlisted service members. 
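The per diem component rules just described can be sketched in a few lines. In the sketch below, the incidental rates are the fiscal year 2004 figures cited in the text ($3.00 CONUS, $3.50 OCONUS); the default lodging and meal amounts are hypothetical placeholders, since actual rates vary by locality under the JFTR:

```python
# Hedged sketch of the per diem component logic described above. The
# incidental rates are the fiscal year 2004 figures cited in the text;
# the default lodging and meal rates are hypothetical placeholders --
# actual amounts vary by locality under the JFTR.
def daily_per_diem(conus: bool, sna_lodging: bool, sna_meals: bool,
                   lodging_rate: float = 55.0,  # hypothetical locality rate
                   meal_rate: float = 30.0) -> float:  # hypothetical rate
    incidental = 3.00 if conus else 3.50  # always authorized, even when the
    total = incidental                    # soldier uses government facilities
    if sna_lodging:  # SNA issued because government lodging is unavailable
        total += lodging_rate
    if sna_meals:    # SNA issued because government mess is unavailable
        total += meal_rate
    return total

# A soldier housed and fed in government facilities (CONUS) receives only
# the incidental allowance:
print(daily_per_diem(conus=True, sna_lodging=False, sna_meals=False))  # 3.0
```

Proportional per diem for occasionally missed meals, noted above, would add only a fraction of the meal component and is omitted here for simplicity.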
The current DOD travel reimbursement process for Army Guard soldiers operates at travel computation offices around the country, including DFAS CTO, the travel computation office at DFAS Indianapolis, 54 United States Property and Fiscal Offices (USPFO) servicing each of the Army Guard locations, and several other DFAS sites. Travel voucher processing consumes the resources of hundreds of personnel, reviewing thousands of pieces of paper every day. As illustrated in figure 1, the travel and reimbursement process consists of three phases: (1) authorizations and travel; (2) travel voucher preparation, submission, review, and transmission; and (3) computation office review, reimbursement computation, and payment. In the first phase of the travel and reimbursement process, various travel orders and other authorizations are produced and are provided to and/or acquired by soldiers, and soldiers incur travel expenses. Following the President’s mobilization order, the Secretary of Defense, in consultation with members of the Joint Chiefs of Staff, (1) determines specific unit personnel requirements and (2) issues a unit mobilization order to various affected units and organizations within the Army. USPFO officials use the Automated Fund Control Order System (AFCOS) to produce individual mobilization orders for soldiers. Individual mobilization orders usually contain general travel information, such as authorized methods of transportation; directions regarding the use of government food and lodging facilities; and authorizations for the soldier to travel from the unit’s home station, a permanent duty station, to an active Army installation (the mobilization station) for further processing and training. After completion of mission-related training at the mobilization station, the unit is certified for deployment, and soldiers are assigned duty stations. 
Army commands use word processing applications to produce temporary change of station (TCS) orders to give soldiers authorization to travel from their mobilization stations to long-term temporary assignments at other locations. During the deployment period, the Army may also issue TDY orders and other authorization statements, such as SNAs. The Army issues a TDY order to authorize a soldier’s travel from one location to another, generally for less than 45 days. The Army issues SNAs when government lodging and/or meals are not available to the soldier. Following the completion of a tour, the Army issues each federal active duty soldier a Release from Active Duty (REFRAD) order from the transition processing system and a Certificate of Release or Discharge from Active Duty (DD Form 214). The second phase of the travel and reimbursement process begins with the soldier’s preparation and submission of a travel voucher, and the review of the voucher by the unit reviewer prior to the transmission of the voucher to the travel computation office, typically either a DFAS or USPFO location. According to Army Guard and DFAS guidance, all travel vouchers for Army Guard soldiers who are mobilized under Title 10 are to be sent to DFAS CTO for processing, while travel vouchers associated with Title 32 mobilizations and other nonmobilized travel are generally processed by USPFO and other travel computation offices. Final completion of the voucher occurs following the calculation of actual reimbursable amounts by a travel computation office. The soldier begins the reimbursement process by manually:

1. preparing a travel voucher (DD Form 1351-2), on which the soldier provides required information, such as name, rank, Social Security number, itinerary, and authorized reimbursable expenses;
2. attaching all supporting DOD-generated documentation (e.g., mobilization orders, TCS orders, TDY orders, SNAs, REFRAD orders, DD Form 214);
3. attaching original lodging receipts and all receipts for reimbursable expenses of $75.00 or more;
4. signing and dating the voucher; and
5. delivering the entire voucher package—the travel voucher and all supporting documentation—to a unit supervisor for review.

DOD’s Financial Management Regulation (FMR) requires that travel vouchers be submitted to the unit reviewer within 5 working days of the end of travel, or in the case of travel that extends beyond 30 days, within 5 days after the end of every 30-day travel period. In addition, according to The Citizen-Soldier’s Guide to Mobilization Finance, soldiers who have government quarters and meals provided to them may opt to file for the incidental portion of their per diem entitlement on a quarterly, semiannual, or annual basis since the amount due the soldier is nominal. The unit reviewer—required by DFAS policy to be the soldier’s supervisor/commander or designee—is responsible for ensuring that the voucher claim is complete and proper and that it complies with the intent of the order. On completion of the review, the unit reviewer signs and dates the voucher and forwards it and the supporting documentation to a travel computation office via regular mail, e-mail, or fax. The third phase of the travel and reimbursement process begins when the travel computation office receives the voucher package. The travel computation office reviews the voucher package, calculates the reimbursement amount, and processes the reimbursement to pay the soldier and/or the government travel card company, generally through direct deposit of the funds to their respective banks. The travel computation office is responsible for the accuracy and propriety of voucher payments. DFAS CTO and USPFO personnel perform an initial screening of voucher packages. If the basic information—signatures, dates, and orders—is present, a more detailed review of the voucher is performed.
Detailed travel voucher data are then manually entered into the Integrated Automated Travel System version 6.0 (WINIATS), which calculates the amount of the reimbursement. Attempts are made to contact the soldier if any problems are noted during the initial screening, the detailed review, or the data entry. Failing contact with the soldier, DFAS or USPFO personnel mail the voucher package to the address on the voucher for correction by the soldier. If DOD fails to reimburse soldiers for travel claims within 30 days of submission of proper travel vouchers, DOD must pay the soldiers late payment interest and fees pursuant to TTRA. The paper-intensive process used by DOD to reimburse Army Guard soldiers for their travel expenses was not designed to handle the dramatic increase in travel vouchers since the terrorist attacks of September 11, 2001, and the subsequent military activity. The increased operational tempo resulted in backlogs in travel voucher processing as DFAS CTO struggled to keep up with both the increased volume and complexity of the travel vouchers submitted. For example, the monthly volume of travel vouchers being submitted to DFAS CTO increased from less than 3,200 in October 2001 to over 50,000 in July 2003 and remained at levels over 30,000 through September 2004. To its credit, to address the large volume of vouchers received and the unprocessed backlog, DFAS increased its staffing by over 200 new personnel and reported an average processing time of 8 days for its part of the process in September 2004. However, our case studies of selected units and data mining of individual vouchers identified numerous soldiers who experienced significant problems getting accurate, timely, and consistent reimbursements for travel expenses. 
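The FMR submission-timing rule described earlier (a voucher is due within 5 working days of the end of travel, or after each 30-day period for longer travel) can be sketched as follows. The helper below is hypothetical; as a simplification it applies the 5-working-day window to both cases and treats only weekends, not federal holidays, as non-working days.

```python
from datetime import date, timedelta

def add_working_days(start: date, n: int) -> date:
    # Advance n working days; weekends are skipped, holidays ignored (assumption).
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday(0) through Friday(4)
            n -= 1
    return d

def voucher_due_dates(travel_start: date, travel_end: date) -> list[date]:
    """Hypothetical helper: voucher due dates implied by the FMR timing rule."""
    due = []
    period_end = travel_start + timedelta(days=30)
    while period_end < travel_end:  # interim vouchers for travel beyond 30 days
        due.append(add_working_days(period_end, 5))
        period_end += timedelta(days=30)
    due.append(add_working_days(travel_end, 5))  # final voucher after travel ends
    return due
```

Under these assumptions, a 10-day trip yields a single due date, while a 73-day trip yields two interim due dates plus the final one.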
Guard soldiers told us about a number of problems they and their families endured due to delayed or unpaid travel reimbursements, including debts on their personal credit cards, trouble paying their monthly bills, and inability to make child support payments. As discussed later, we found that these reimbursement problems were associated with process, human capital, and automated system deficiencies. Overall, as shown in figure 2, we found that a paper-intensive, manual, error-prone process exists to reimburse travel expenses to mobilized Army Guard soldiers. The primary responsibility for ensuring a travel voucher is properly prepared rests with the soldier. As illustrated in figure 2, the soldier is responsible for obtaining paper documents that include various authorizations and receipts for all expenses $75 and over, in addition to a manually prepared and signed paper travel voucher. Each time DFAS CTO receives a voucher and determines that it is not complete, either the soldier is contacted in an attempt to get the needed information or the entire voucher is rejected and returned to the soldier. The difficulty in assembling a complete and acceptable voucher package on the first try is demonstrated by the 11, 12, and 18 percent return rates reported by DFAS CTO for fiscal years 2002, 2003, and 2004, respectively. That is, of approximately 930,000 travel vouchers received during this period, DFAS CTO rejected and returned about 139,000. The soldier must then obtain the missing documentation or make the necessary corrections and return the voucher to DFAS for processing again. This repeated churning of vouchers further increases the volume of claims, which, as discussed in the next section, quickly overwhelmed DFAS CTO’s resources. In addition, returned vouchers contribute to delays in payment, increasing soldiers’ frustration. 
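A quick arithmetic check of the rejection figures above; every number below is taken from the text, and nothing else is assumed.

```python
# DFAS CTO reported 11, 12, and 18 percent return rates for fiscal years
# 2002-2004, and about 139,000 vouchers rejected and returned out of
# roughly 930,000 received over the same period.
total_received = 930_000
total_returned = 139_000

blended_rate = total_returned / total_received  # about 0.149, i.e., ~15 percent

# The ~15 percent blended rate falls between the yearly rates (11-18 percent),
# consistent with the later, higher-rate years carrying more of the volume.
assert 0.11 <= blended_rate <= 0.18

# Each rejected voucher is corrected and resubmitted, so the office
# effectively handles the incoming volume plus the returns.
effective_volume = total_received + total_returned  # 1,069,000
```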
While this inefficient process may have offered some capability to process travel vouchers during periods of low activity when relatively few Army Guard members were mobilized, the current increased operational tempo has strained the process beyond its limits. The volume of Army Guard and Reserve travel vouchers being submitted to DFAS CTO increased from less than 3,200 in October 2001 to over 50,000 in July 2003. As shown in figure 3, the monthly travel voucher volume has remained above 30,000 since the July 2003 peak. In addition to the rising volume, the increased complexity of the vouchers received further slowed down the process. As military activity increased for Operation Iraqi Freedom and Army Guard, Army Reserve, and active Army soldiers were preparing for duty, not all of the installations to which Army Guard soldiers were assigned had available government housing. As a result, the soldiers were housed off-post in commercial hotels or apartments. This created a number of novel situations that were not specifically addressed in regulations, as discussed later. For example, the Virginia 20th Special Forces Army National Guard unit was mobilized to Fort Bragg, North Carolina, in January 2002. The unit was initially housed in World War II era barracks—with free meals in the mess hall—that were in such poor condition that the company commander requested and received off-post housing. The hotel was over 10 miles from the nearest Fort Bragg dining facility, and with many of the soldiers assigned to duties that required odd or extended hours that precluded use of the dining facility, the soldiers found that they were paying for at least two meals per day out of their own pockets. When members of the Virginia 20th Special Forces eventually submitted their proportional meal per diem vouchers to DFAS CTO, some were paid over $2,000 for 4 months of meal expenses and some were not, due in part to confusion over the meal per diem entitlements in this situation. 
As a result, some soldiers had to obtain additional documentation and resubmit their vouchers, further adding to the volume of vouchers. As of May 2004, 14 soldiers still had not received the majority of their proportional meal per diem entitlements, ranging from about $1,600 to over $3,500 per soldier for a mobilization that occurred over 2 years ago. During our review, we brought this matter to the attention of the Virginia 20th Special Forces provisional finance officer and DFAS CTO, and in June 2004, DFAS CTO processed vouchers for 10 of the 14 soldiers and made final payments for meal expenses they incurred during their Fort Bragg duty. The remaining 4 soldiers had not been paid at the completion of our audit. During this time frame, DFAS CTO staffing levels were not keeping pace with the rising volume of vouchers. Although DFAS CTO employed fewer than 50 personnel in October 2001, staffing more than doubled by February 2003 and increased further to about 240 in June 2003, including 83 Army Guard and Army Reserve soldiers, as shown in figure 4. Even so, a DFAS CTO official told us that the office was not properly staffed to process travel vouchers at the beginning of 2003 when the volume started to increase. Inadequate staffing and the time necessary to train new staff created a backlog of travel vouchers at DFAS CTO that ballooned to over 18,000 vouchers in March 2003. In one case, an Army Guard specialist prepared a voucher on December 8, 2002, and his supervisor approved it the same day. It took 124 days before the voucher was stamped as received by DFAS CTO and another 66 days for DFAS to pay the soldier. In addition, although this payment should have included late payment interest, it did not because, as discussed later in this report, DFAS did not have the means to automatically identify those soldiers who should have received interest and other fees on their late payments.
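The kind of automated identification the report says DFAS lacked could, in principle, look like the sketch below. The Voucher record, the flat 4 percent annual rate, and the simple-interest formula are all illustrative assumptions; actual TTRA interest is tied to the statutory rate in effect at the time. Only the 30-day trigger comes from the report.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Voucher:
    submitted: date  # date a proper voucher was received
    paid: date
    amount: float    # reimbursement amount in dollars

def late_interest(v: Voucher, annual_rate: float = 0.04) -> float:
    """Interest owed when payment takes more than 30 days (sketch only).

    The 30-day trigger comes from TTRA as described in the report; the
    rate and simple-interest formula are assumptions for illustration.
    """
    days_late = (v.paid - v.submitted).days - 30
    if days_late <= 0:
        return 0.0
    return round(v.amount * annual_rate * days_late / 365, 2)

def flag_late(vouchers: list[Voucher]) -> list[Voucher]:
    # The automated screen DFAS lacked: which payments owe interest?
    return [v for v in vouchers if late_interest(v) > 0]
```

Applied to the specialist’s voucher above (approved December 8, 2002, and paid roughly 190 days later), a hypothetical $1,000 reimbursement would have accrued interest on about 160 days beyond the 30-day window.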
To its credit, with its increased staffing levels in place, DFAS CTO reported an average processing time of 8 days for its part of the process as of September 2004. Our case studies of selected units and data mining of individual vouchers identified numerous soldiers who experienced significant problems getting accurate, timely, and consistent reimbursements for travel expenses. As discussed in this report, these problems related to process, human capital, and systems deficiencies. Major factors contributing to inconsistent, inaccurate, or late reimbursements experienced by these soldiers were that requirements for authorizing and supporting per diem reimbursements for meal expenses were not always known to the mobilized soldiers or well understood by local base personnel, and that the authorizations were not documented on the soldiers’ mobilization orders or travel orders. While our work was not intended to, and we did not attempt to, quantify the financial impact of inaccurate and late reimbursements on individual soldiers, we found a number of soldiers who were frustrated and concerned with the process and the amount of time they spent attempting to navigate it. For example, one individual responsible for submitting his unit’s vouchers to DFAS CTO told us that he called the process “the travel voucher lottery” because “you never knew whether, or how much, you might get paid.” Frustrated soldiers sought help from their United States senators or representatives in obtaining what they believed they were owed for out-of-pocket travel expenses. The following are excerpts from three of those letters. Sergeant First Class (NY)-- “Since being released I have submitted for payment of travel pay and storage authorized by my orders. I have yet to receive the pay due. The forms have had to be resubmitted two other times, without changes. The previous submissions have either been misplaced or lost after arriving at defense finance. ... For me, it has become a hardship.
I was laid off from in May of 2002 just before our activation. This was due to downsizing. I am currently on unemployment, attending BOCES for welding. I had counted on this money to cover my medical insurance and vehicle payment. At this time I am 2 months behind on the medical premiums and vehicle payment. Chase Bank has said that after next week they will submit the vehicle for repossession. ...These types of problems are a very good reason to leave the National Guard.” Sergeant First Class (NC)-- “Trying to get my final travel pay for active duty for Operation Enduring Freedom. I submitted my travel voucher in December 02 and it was sent to DFAS Indianapolis in January 03 and I still have not received payment [as of April 29, 2003]. ... I understand the workloads due to the war on terrorism, but over 5 months is extreme.” Staff Sergeant (KS)-- “Below please find an e-mail that I received last night from DFAS informing me that they have now deleted my voucher from February and I must start all over again. In the last year and a half when this has happened, although they say it is expedited, in practice and reality it goes to the bottom of the pile and takes 3-6 weeks. I’m at my wits and financial end. I have already placed approx $3500 of money owed to me by the Army on my personal credit cards and cannot afford to do it anymore.” The majority of soldiers in our 10 case study units reported problems related to reimbursements for meal expenses that included late payments, underpayments, and overpayments resulting in debts to some soldiers in excess of $10,000. For example, we estimated that about $324,000 was paid more than a year late to 120 soldiers for meal expenses based on the proportional meal rate for their locality. 
As discussed in detail later in this report, these issues were caused by weaknesses in the process used to pay Army Guard travel reimbursements; the human capital practices in this area, including the lack of adequate training; and nonintegrated automated systems. Table 2 summarizes the experiences of Army Guard soldiers in 10 units. Eight of these units included soldiers who, at the end of our audit, remained unpaid, partially paid, or in debt; we referred those units to appropriate DOD officials to resolve any amounts owed to the Army Guard soldiers or to the government. The following provides more details on the experiences of several of these units. The 114th Military Police Company, Clinton, Mississippi, mobilized for the first time in January 2002 and performed around-the-clock shift work at Fort Campbell, Kentucky, for approximately 5 months before being sent to Cuba. While at Fort Campbell, the soldiers could not always avail themselves of the base dining facilities and therefore had to pay out-of-pocket for some of their meals. The soldiers were not informed that they were eligible for a proportional meal rate until they returned from Cuba in November 2002. Due to various other delays, it took over 14 months for the soldiers to be reimbursed about $2,700 each based upon the proportional meal rate for Fort Campbell. The unit commander informed us that the delays put considerable strain on the finances of some of his lower-graded soldiers. This unit’s problems were compounded when its soldiers were mobilized a second time in February 2003 and went to Fort Hood, Texas. Even though they experienced the same conditions, they were denied compensation for their out-of-pocket expenses. Soldiers told us their duty hours were similar to those they worked at Fort Campbell.
Drawing on the unit’s experience with the reimbursement process at Fort Campbell, the unit commander contacted Fort Hood officials to obtain authorization for reimbursement of costs his soldiers were incurring because they could not use the dining facilities at Fort Hood for all meals. The unit commander’s attempts to get authorization for the proportional meal rate were unsuccessful. At the time of our audit, none of the 76 soldiers in the unit had been reimbursed; we estimated their unpaid meal expenses at approximately $6,000 per soldier, covering about 10 months. The Pennsylvania 876th Engineer Battalion was mobilized in support of Operation Enduring Freedom to perform installation security and force protection duties at Bad Aibling Station, Germany, from July 2002 to February 2003. All deployed members were entitled to the identical per diem for meals and incidental expenses applicable to their location. Although the unit’s administrative officer submitted identical vouchers for each soldier at the end of each month as required, the soldiers received varying reimbursement amounts each month. For example, following the August 2002 submission, 4 soldiers in the unit received what they believed to be the correct reimbursement of $1,718. The remaining 33 soldiers received payments ranging from $371.20 to $1,485.00. These types of inconsistencies occurred month after month. The commanding officer indicated in a memorandum that “This is a hugely demoralizing and frustrating action.” The administrative officer, who sent detailed spreadsheets to DFAS CTO, wrote in one e-mail to DFAS officials, “I have E3’s who are owed over $2,000. These soldiers deserve better....
If I call your department three times and ask three different people the same question I will receive three differing answers.… Is there or is there not a single standard for paying soldiers travel pay?” During our audit of selected travel vouchers, some that were paid as much as 500 days after travel ended, we found that significant delays frequently occurred when soldiers had to submit travel vouchers multiple times to travel computation offices. Travel computation offices routinely returned improperly prepared and inadequately reviewed vouchers that did not contain basic required signatures, dates, and travel orders. Further, DFAS staffing shortfalls contributed to some of the delays that we noted. Table 3 shows examples of the extent of delays experienced by soldiers in obtaining payment for travel expenses. The following provides more details for the experiences of some of the soldiers with payment delays. A corporal with the California 49th Military Police Company was frustrated by the repeated recycling of his voucher eight times through the travel reimbursement process, which caused his reimbursement of travel expenses to be delayed for about 17 months. His story exemplifies process and human capital flaws. For example, (1) the reviewing official approved the voucher even though it lacked supporting documentation, (2) DFAS CTO did not know that faxed vouchers were not being printed, and (3) customer service was weak as evidenced by piecemeal requests for information. According to the California unit’s reviewing official, the voucher, along with others, was initially faxed to DFAS CTO in August 2002. When not all soldiers received notification that DFAS CTO had received the vouchers, the unit official again faxed the vouchers. The corporal told us that he later received an e-mail from DFAS CTO requesting his DD Form 214, Certificate of Release or Discharge from Active Duty. 
He submitted the DD Form 214, but he then received his whole travel voucher package back from DFAS CTO with a note saying that the DD Form 214 was missing. He checked the package and found that the DD Form 214 he had previously sent was in the materials returned by DFAS CTO. DFAS CTO returned the voucher in February 2003 because it was incorrectly completed and again in October 2003 because it lacked mobilization orders. DFAS eventually paid the corporal $779 in December 2003. A sergeant with the Utah 142nd Military Intelligence Battalion experienced an approximate 22-month delay in receiving full reimbursement for his travel expenses. Delays for this voucher were caused by (1) fax problems, (2) missing documents, and (3) DFAS CTO errors in reimbursing the sergeant for properly supported expenditures. The sergeant told us that he faxed his voucher to DFAS CTO soon after his travel ended in October 2002. When DFAS CTO claimed it had not received the voucher, he refaxed it in January 2003. Because he remobilized in January 2003, he did not learn until he returned in May 2003 that DFAS CTO did not have a record of his January resubmission. He resubmitted his voucher in May 2003, and DFAS CTO returned it because he had not attached his DD Form 214, documenting his discharge from active duty. After he resubmitted the paperwork in August 2003, DFAS paid him only $1,269, which did not include all of his lodging costs or any of his meal expenses while at the TDY location. He told us that DFAS CTO could not explain why his meal expenses were not paid. Following his resubmission in March 2004, DFAS paid him $189.78, which was the outstanding balance on his lodging receipts. DFAS did not pay the remaining balance of $572.00 for his meals until after GAO inquired about payment of the voucher in August 2004. 
A sergeant with the Texas 141st Infantry Company had to wait 6 ½ months for reimbursement of his travel expenses because of (1) miscommunication about his unit’s responsibilities and (2) subsequent inadequate unit supervisory review. The sergeant told us he had been informed that his unit in Guantanamo Bay, Cuba, would prepare and submit his voucher when his tour of duty ended in December 2002. About 113 days elapsed before he discovered that his unit in Cuba had not prepared a voucher on his behalf. At that point, he asked his home unit administrator in Texas to help him prepare and submit his voucher to DFAS CTO. However, DFAS CTO returned that voucher because it lacked a supervisory signature. The sergeant believed he needed supervisory approval from his unit and sent the voucher back to Cuba for approval. After it was returned from Cuba, he resubmitted it to DFAS CTO, but, for reasons unknown to him, he still did not get paid. He resubmitted his voucher to DFAS CTO in late June 2003, and DFAS paid him $682 in July 2003, approximately 82 days after supervisory approval. Policies and guidance, the foundation of the process for authorizing travel entitlements and reimbursements, were sometimes unclear to the Army Guard soldiers who were called from their civilian lives to military service after the September 11, 2001, terrorist attacks. Not since World War II had so many Army Guard soldiers been mobilized for extended periods and essentially placed in travel status for as long as 2 years. Prior to September 11, 2001, most travel guidance addressed relatively routine travel for brief periods and was not always clearly applicable to situations Army Guard soldiers encountered, particularly when they could not avail themselves of government-provided meals due to the nature of their duty assignments. In October 2001, the Army issued new guidance that was intended to address travel entitlements unique to Army and Army Guard soldiers mobilized for the war on terrorism.
However, the lack of clarity in this guidance created problems not only for Army Guard soldiers but for numerous other personnel involved with authorizing travel entitlements and contributed to inaccurate, delayed, and denied travel reimbursements. Furthermore, inappropriate policy and guidance on how to identify and pay soldiers entitled to late payment interest and fees because of late travel reimbursement meant that DOD continued to be noncompliant with TTRA. As a result, as discussed in the next section, although DOD paid no late payment interest or fees to Army Guard soldiers through April 2004, we found a number of cases in which soldiers should have been paid late payment interest and indications that thousands more may be entitled to late payment interest. GAO’s Standards for Internal Control in the Federal Government state that internal control is an integral component of an organization’s management that provides reasonable assurance that objectives of the agency are being achieved, including effectiveness and efficiency of operations and compliance with laws and regulations. We found that a key factor contributing to delays and denials of Army Guard reimbursements for out-of-pocket meal expenses was a lack of clearly defined guidance. We noted that the existing guidance (1) provided unclear eligibility criteria for reimbursement of out-of-pocket meal expenses, (2) lacked instructions for including meal entitlements on mobilization orders, and (3) contained inadequate instructions for preparing and issuing SNAs. Two primary sources of guidance used by both Army Guard soldiers and travel computation office personnel for information on travel entitlements were the Army’s personnel policy guidance (PPG) for military personnel mobilized for Operations Iraqi Freedom, Enduring Freedom and Noble Eagle and DOD’s Joint Federal Travel Regulation (JFTR). 
We found that both Army Guard soldiers and travel computation personnel had difficulty using these sources to find the information necessary about the rules regarding travel-related entitlements. A DFAS CTO official and users told us that the guidance was legalistic and not user friendly. Army Guard soldiers and DFAS CTO examiners had trouble at times interpreting the guidance, and as a result, soldiers experienced travel reimbursement problems. Table 4 shows the sources of common problems related to meal expense reimbursements experienced by soldiers in our case studies. Unclear eligibility criteria. We found that guidance did not adequately address some significant conditions that entitled a soldier to reimbursement of authorized meal expenses. For example, although the JFTR entitled soldiers to reimbursement for meal expenses when transportation was not reasonably available between government meal facilities and place of lodging, the term “reasonably available” was not defined. The PPG directed the maximum use of installation facilities, and if not feasible, then “multi-passenger vehicles should be used” to transport soldiers to installation facilities. However, the PPG is silent regarding what constitutes adequate transportation, particularly when transportation to government meal facilities is necessary for Army Guard soldiers who cannot be housed in government facilities. As discussed in one of our case studies, we found disagreements between the soldiers and their command officials about the adequacy of transportation to government meal facilities and their entitlement to get reimbursed for eating at commercial facilities closer to their lodgings. Without clear guidance on these issues, Army decisions will continue to appear arbitrary and unfair to soldiers. 
The following illustrates the experiences of the Army Guard soldiers with the Maryland 115th Military Police Headquarters/Headquarters Company, their perceptions of unfair and inconsistent treatment, and apparent confusion between basic allowance for subsistence (BAS) compensation entitlements and meal entitlements while in TCS status. Notably, BAS is included in the Army Guard soldier’s compensation and is not a travel entitlement.

Case Study Illustration: Soldiers Claim Inadequate Transportation Should Have Justified Reimbursement of Out-of-Pocket Meal Expenses

Soldiers with the Maryland 115th Military Police Headquarters/Headquarters Company were mobilized to Fort Stewart, Georgia, in October 2001, to perform force protection duties. As shown in figure 5, the 107 soldiers in the unit were housed in a government-contracted hotel approximately 3 to 4 miles from the base because the on-base housing was overcrowded. Soldiers told us that transportation from the hotels to the government dining facilities was inadequate. They explained that while military buses and vans took soldiers to and from the base for their shift work duties, the soldiers not on shift, including those on their days off, had to find their own transportation to the government dining facilities. Approximately 3 weeks into the unit’s mobilization, the battalion commander allowed some soldiers (E-7 and above) to use their privately owned vehicles, but many other soldiers were still without vehicles or other means to get to the dining facility. Two soldiers told us that they purchased bicycles to get to the base. Several soldiers claimed that because they could not get to the dining facilities, they either walked to local restaurants or bought groceries and cookware and cooked meals in their rooms using hot plates.
The soldiers told us they were also aware that other soldiers at Fort Stewart were similarly housed in hotels, but were paid per diem for meals based on the locality rate for the area. We confirmed that Florida’s 3220th U.S. Army Reserve Garrison Support Unit was housed in hotels and was paid per diem for meals. Several soldiers told us they raised a number of issues with their chain of command and company commanders, including inadequate transportation, having to eat on the economy, the inconsistencies in their treatment compared to other soldiers, and their eligibility for per diem. The company commanders discussed these issues with the battalion commander. The battalion commander told us, “The soldiers had a contracted hotel and laundry services and didn’t need per diem. In addition to that, they had access to the mess hall and were getting BAS.” When a battalion personnel officer incorrectly told one soldier that he was not entitled to per diem because he was receiving BAS, he stated, “Then take away my $8 in BAS and give me per diem because I can’t live on $8 a day.” The garrison commander told us that he was unaware of the unit’s transportation problems, and had he known of the problems, he would have issued more vehicles to the unit. Using the proportional meal rate, we estimated that the soldiers could be due approximately $1,260 each for a total of approximately $135,000 for the period October 2001 to January 2002. As of September 2004, the soldiers had not received any reimbursements for meal expenditures.

Case Study Illustration: Mississippi Army Guard Soldiers Question Army’s Decision to Deny Reimbursement for Out-of-Pocket Meal Expenditures

Army Guard soldiers of the Mississippi 114th Military Police Company were called up in January 2002 for their first mobilization and reported to Fort Campbell for military police guard duty.
While at Fort Campbell, the 114th performed 24-hour, 7-day shift work providing force protection services for the 101st Airborne Division and Fort Campbell and could not always avail themselves of the free meals at the mess hall. Consequently, these Army Guard soldiers purchased their meals from commercial sources. After soldiers learned of their potential eligibility for reimbursement of meal costs, they requested and received authorization from the division commander for reimbursement and, although considerably late, were eventually reimbursed. However, while at Fort Hood, the location of their second active duty tour beginning in February 2003, installation command officials did not authorize reimbursement of their out-of-pocket costs for meals. According to the Army Guard soldiers, the conditions at Fort Hood—24-hour shift work and the lack of 24-hour mess halls—were similar to what they encountered at Fort Campbell. Fort Hood officials told us that they justified their decision by stating that free meals were available from the mess hall, that noncommissioned officers could bring this food to soldiers who were having problems, and that the basic allowance for subsistence was adequate compensation for any of the soldiers’ out-of-pocket expenditures for meals. Fort Hood officials did not document their unfavorable decision or justification for that decision. Because an official from the Mississippi Guard Finance Office told them it was a “dead issue,” the unit chose not to contact the Inspector General’s office. The official told us that he informed the soldiers that they could not get reimbursed without the approval of the Fort Hood officials. As a result, we estimated, based on the proportional meal rate for Fort Hood, that these 76 Army Guard soldiers were not reimbursed for approximately $6,000 each, totaling about $456,000, for their meals from February 2003 to January 2004, when they were demobilized.
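The estimate for this unit can be reproduced with simple arithmetic. The daily proportional meal rate below is an assumed figure chosen to match the reported per-soldier total, not the actual published rate for Fort Hood in 2003; the soldier count and time period come from the text.

```python
soldiers = 76
days = 364          # February 2003 through January 2004
daily_rate = 16.50  # assumed proportional meal rate, dollars per day

per_soldier = daily_rate * days  # about $6,006, i.e., "approximately $6,000"
total = per_soldier * soldiers   # about $456,456, i.e., "about $456,000"
```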
In another case, Georgia Army Guard soldiers were frustrated by large debts when DFAS CTO retroactively disallowed the locality meal rate authorized by command officials.

Case Study Illustration: Decisions on Meals Eligibility Result in Overpayments and Large Debts

The soldiers from Georgia’s 190th Military Police Company reported to Fort Benning, Georgia, in October 2001 for mobilization processing. The unit of 101 soldiers experienced problems regarding per diem for meals when they were subsequently deployed from October 2001 through March 2002 to Fort McPherson, Georgia, which was near the homes of many of the soldiers. Soldiers told us there was confusion over who would be entitled to per diem for meals because Fort McPherson had not established a local commuting area. Certain soldiers were granted SNAs, signifying their eligibility for per diem for meals, based on the locality meal rate for the area. Relying on these determinations, soldiers requested and received per diem for meals, some in excess of $10,000, for the period they were at Fort McPherson. In August 2002, Fort McPherson established a local commuting area. When DFAS CTO processed the soldiers’ final settlement vouchers at the end of the deployment, DFAS CTO used the newly established commuting area to determine eligibility for per diem for the whole period. This resulted in DFAS CTO retroactively disallowing per diem and creating debts totaling approximately $200,000 for 32 soldiers, several in excess of $10,000. Debt collections were initiated against the soldiers’ pay while they were in Iraq on a second mobilization in 2003, creating adverse financial impacts on the soldiers and their families. The 190th unit commander was able to get the debt collections suspended until they returned from Iraq. Following demobilization, debt collections were reinstituted and are currently being made against the soldiers’ monthly drill pay.
During our audit, many of these soldiers still had unresolved and unpaid debts on their records, two in excess of $10,000. We requested that DFAS CTO review all of the records to determine whether the soldiers were properly reimbursed for travel entitlements, and whether the debt amounts were correct. DFAS CTO officials agreed and at the conclusion of our audit were in the process of determining who was entitled to per diem while at Fort McPherson. These cases illustrate the effects of guidance that does not clearly identify eligibility criteria and leaves meal eligibility determinations to the interpretation of individual commands. Although it would not be practical to develop guidance for every possible travel scenario, we noted that the JFTR included useful situational examples to assist decision makers in determining nonavailability related to lodging and meals, while the PPG lacked similar specific contingency travel examples. Lack of specific entitlements on orders. Army and Army Guard policies and procedures do not provide for mobilization orders issued to Army Guard soldiers to clearly state that these soldiers should not be required to pay for meals provided to them at government dining facilities. In the case study units we reviewed, we found several instances in which mobilization orders either stated nothing about meal entitlements or stated, “Government mess will be used,” or “Government quarters and dining facilities are available and directed.” As a result, we noted instances in which mobilized soldiers arrived at government mess halls carrying mobilization orders that did not specifically state that the soldiers could eat free of charge and were inappropriately required to pay for their meals. The PPG states, “TCS soldiers who are on government installations with dining facilities are directed to use mess facilities. 
These soldiers are not required to pay for their meals.” In addition, the PPG states, “Basic Allowance for Subsistence will not be reduced when government mess is used for soldiers in a contingency operation.” However, the PPG does not provide guidance addressing the content of mobilization orders for Army Guard soldiers. As a result, unless the orders contain the appropriate statements about meal entitlements, installations sometimes inappropriately charge Army Guard soldiers for their meals. When we asked officials representing the Mississippi Adjutant General’s office why mobilization orders did not include adequate provisions about meal entitlements, they explained that the individual mobilization orders prepared by the Adjutant General’s staff are very basic and include only the travel allowances and actions necessary to get the individual from the home station to the mobilization station. The Adjutant General’s office received no guidance on what should be stated in the orders with respect to soldiers eating free of charge at government installations or any other conditions that may entitle Army Guard soldiers to per diem to compensate them for their out-of-pocket meal costs. We discussed the problem of unclear mobilization orders with Army officials during our audit. In response to our concerns, Army officials agreed to modify the guidance on what to include in mobilization orders with respect to meals and lodging entitlements. As a result of the unclear orders, many Guard soldiers had to inappropriately pay for meals and were unable to obtain reimbursement for their out-of-pocket costs in a timely manner. The following example shows the effects of that problem.
Case Study Illustration: Guard Unit Required to Pay for Meals at Army Mess Hall

When the Mississippi 20th Special Forces Group (SFG) reported to its mobilization station at Fort Carson, Colorado, on January 10, 2002, the soldiers were assigned to on-post housing at the Fort Carson Colorado Inn. They were told that the government dining facility nearest to their lodging was approximately 1 mile away. When the soldiers went to eat at the dining facility, they were told that they had to pay for their meals because their orders did not indicate they could eat free. Consequently, the soldiers either (1) continued to pay for their meals at the government dining facility, (2) purchased groceries and cooked in their rooms, or (3) ate at local restaurants. The dining facility manager told us that if the soldier’s orders did not specifically indicate government meals at no cost, then his staff was instructed to charge the soldier for meals. The administrative noncommissioned officer for the Mississippi 20th SFG learned from a soldier in a different group that he was being reimbursed the locality rate (which at that time was $36 per day) for meals purchased at his own expense. In July 2002, the unit’s administrative noncommissioned officer raised the meals issue with the unit’s chain of command, but it was not until October 2002 that Headquarters, 10th SFG, Fort Carson issued amended orders for the 20th SFG. The orders retroactively authorized meal per diem of $11 per day to the soldiers of the 20th SFG, allowing them to be partially reimbursed for their out-of-pocket expenses from January 2002 to October 2002. The orders also authorized the soldiers to eat in the dining facility at no charge beginning in October 2002. As of the end of our audit, the administrative noncommissioned officer estimated that $150,000 was still to be paid to 75 soldiers.
In another instance, Army Guard soldiers called to federal duty under the authority of Title 32 for security missions in late 2001 and early 2002 experienced significant delays in getting reimbursed for travel expenditures. The soldiers were provided lodging but not meals and were not authorized per diem for meals on their orders. Many months elapsed during which the Army Guard Adjutant General for each state command with authority over the respective soldiers and Army Guard officials worked to obtain and provide the proper authorization to reimburse all the soldiers’ travel expenses. In the interim, Army Guard soldiers experienced financial hardships. The following case study chronicles one story about these soldiers’ experiences. In March 2002, the Colorado National Guard HQ 140th Signal Company received orders to provide security at the Denver International Airport, Denver, Colorado. For mission related reasons, the soldiers were required to remain overnight at their duty station for an extended period in government-provided housing, but their orders did not authorize per diem for meals. Although the government provided housing, meals were not included, and the soldiers had to obtain meals from commercial establishments. In July 2002, the Adjutant General of the Colorado Army National Guard sent a letter to the Director of the Army National Guard Financial Services Center, requesting that actions be taken to resolve the per diem and other related issues. 
In December 2002, the letter was forwarded by the Army National Guard Financial Services Center to the Army National Guard Readiness Center for “consideration and action.” In June 2003, the Army National Guard Readiness Center issued a memorandum for the financial managers of all states, Guam, Puerto Rico, the Virgin Islands, and the District of Columbia, which communicated the receipt of approval from the Assistant Secretary of the Army – Manpower and Reserve Affairs to provide retroactive redress of the per diem and other issues affecting the Colorado and other Guard soldiers. In September 2003, 18 months after the soldiers incurred out-of-pocket expenses averaging over $1,400 each, the Colorado USPFO began paying reimbursements to the soldiers. Some of these soldiers suffered adverse financial impact resulting from the delays in reimbursement. For example, one soldier told us his government travel card was canceled due to nonpayment, another soldier’s family had to rely on the spouse’s salary to pay bills, and another’s child support payments were late or less than the minimum required payments. Confusing, nonstandard SNAs. Lack of standardization and changing guidance has resulted in SNAs of various form and content, signed by officials at different levels of authority. Consequently, travel computation office reviewers were unable to consistently determine the validity of SNAs. Our case studies identified travel computation reviewers who have rejected soldiers’ requests for reimbursements even though they were supported by valid SNAs. The most recent PPG guidance authorizes the installation commander to determine whether to issue an SNA based on each unit’s situation and the availability of government housing. The guidance states that when government or government-contracted quarters are not available, soldiers will be provided certificates or statements of non-availability for both lodging and meals to authorize per diem. 
However, the guidance does not specify the form and content of the SNAs. Consequently, at several case study units, we found that the form of the SNA and the content of the information on the form varied at the discretion of the issuing command. For example, one installation stamped the soldiers’ orders and handwrote an SNA identification number in a block provided by the stamp. Another location provided a written memo that stated that the meal component of per diem was authorized because there were no food facilities at the government installation. Another provided a single SNA with a roster attached that listed the names of the soldiers who were authorized per diem. The variety of SNA formats can cause confusion for the soldier, who does not know what documentation is needed for reimbursement and whether the travel computation office will accept it. The travel computation office personnel can also be confused about the criteria for a valid SNA, as illustrated by the following case study.

Case Study Illustration: SNAs for Meal Reimbursement Not Consistently Accepted

Sixty-five soldiers in B Company, 20th Special Forces of the Virginia Army National Guard received orders to mobilize to Fort Bragg, North Carolina, in early 2002. After about 3 weeks, the unit moved to government-contracted quarters off-post and soldiers were authorized proportional per diem for two meals a day during their Fort Bragg duty period. After returning from overseas duty, 51 soldiers prepared and submitted their final travel vouchers, with identical SNA documents attached, to DFAS CTO in November and December 2002. Some of the soldiers’ meal component per diem claims were approved and paid by DFAS; others were not. Inconsistent recognition and acceptance of identical SNAs resulted in 24 soldiers receiving timely reimbursements and 12 soldiers receiving late reimbursements after having to resubmit their vouchers with additional documentation to receive their proportional per diem.
Furthermore, at the time we completed our audit of B Company’s travel vouchers in May 2004, approximately 22 percent of the soldiers (14 of 65) still had not received the majority of their proportional per diem entitlement. Travel reimbursements, ranging from about $1,600 to over $3,500, had not been made to these 14 soldiers. In June 2004, DFAS CTO processed vouchers for 10 of the 14 soldiers and made final payments for meal expenses they incurred during their Fort Bragg duty. The remaining 4 soldiers had not been paid at the completion of our audit. Our work found instances in which installation commands denied soldiers’ requests for SNAs. In response to our inquiries, we found that commands do not generally document their rationale for denying SNAs and there is no requirement for them to do so. This lack of documentation can leave soldiers even more confused and frustrated when seeking answers as to why their requests for per diem were denied. GAO’s Standards for Internal Control in the Federal Government require the maintenance of related records and appropriate documentation that provides evidence of execution of control activities. Inappropriate policy and guidance, issued by DFAS Indianapolis, combined with the lack of systems or processes designed to identify and pay late payment interest and fees, leave DOD in continued noncompliance with TTRA. As a result, through at least April 2004, DFAS Indianapolis had made no required payments of late payment interest and/or late payment fees to soldiers for travel reimbursements paid later than 30 days after the submission of a proper voucher. For example, of 139 individual vouchers we selected to determine why these took a long time to process, we identified 75 vouchers that were properly submitted by Army Guard soldiers that should have received late payment interest totaling about $1,400. Some of these vouchers may also have warranted a late payment fee in addition to the late payment interest. 
In addition, DFAS data showed indications that thousands of other soldiers may be due late payment interest. For example, during the period October 1, 2001, through November 30, 2003, dates in the DFAS Operational Data Store showed that about 85,000 vouchers filed by mobilized Army Guard soldiers were paid more than 60 days after the date travel ended. If the dates on these vouchers were correct, the soldiers who submitted proper vouchers within 5 days of the date travel ended would be entitled to late payment interest if they were not paid within the 30-day limit. TTRA and federal travel regulations require the payment of a late payment fee consisting of (1) late payment interest, generally equivalent to the Prompt Payment Act Interest Rate, plus (2) a late payment fee equivalent to the late payment charge, which could have been charged by the government travel card contractor. Late payment interest and fees are to be paid to soldiers if their reimbursements are not paid within 30 days of the submission of a proper voucher. In our 2002 report on Army travel cards we reported DFAS noncompliance with TTRA due to the lack of procedures and necessary systems and data to make the required computations. In response to our recommendations in that report, DFAS revised its procedures in April 2003. Until that time, DFAS required individual soldiers to submit requests for late payment interest and fees if they believed their vouchers were paid late. According to DOD’s FMR, the traveler was required to submit a supplemental voucher through his or her supervisor/approving official requesting the payment. The 2003 guidance issued by DFAS Indianapolis stated that Army travel computation offices would identify vouchers for late payment interest and fees rather than require individual soldiers to take the initiative to file claims for late payment interest and fees. 
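The two-part computation that TTRA and the federal travel regulations require can be sketched roughly as follows. This is an illustrative sketch only: the interest rate and the travel card contractor's late charge below are hypothetical placeholders (the actual Prompt Payment Act rate is set periodically by Treasury), and the simple daily-interest formula is an assumption, not the regulation's exact method.

```python
from datetime import date

# Hypothetical placeholder rates -- not the actual published figures.
PROMPT_PAYMENT_RATE = 0.0525   # assumed annual Prompt Payment Act rate
CARD_LATE_CHARGE_RATE = 0.01   # assumed contractor late charge per 30 days

def late_payment_amounts(voucher_amount, submitted, paid):
    """Return (interest, fee) owed when payment exceeds the 30-day limit.

    TTRA's two components: (1) late payment interest at roughly the
    Prompt Payment Act rate, and (2) a fee equivalent to the charge the
    government travel card contractor could have assessed.
    """
    days_late = (paid - submitted).days - 30
    if days_late <= 0:
        return 0.0, 0.0  # paid within the 30-day limit; nothing owed
    # Simple (non-compounded) daily interest for the days past the limit.
    interest = voucher_amount * PROMPT_PAYMENT_RATE * days_late / 365
    # One late charge per 30-day period (or part thereof) past the limit.
    fee = voucher_amount * CARD_LATE_CHARGE_RATE * (days_late // 30 + 1)
    return round(interest, 2), round(fee, 2)

# A voucher signed February 6, 2004, and paid April 27, 2004, is 81 days
# out -- 51 days over the limit -- so both components would be owed.
interest, fee = late_payment_amounts(1500.00, date(2004, 2, 6), date(2004, 4, 27))
```

The sketch assumes the reviewer's signature date counts as the submission date of a proper voucher, consistent with the May 2004 DFAS guidance described above.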
However, DFAS’s interpretation of the guidance limited the payment of late payment interest and fees to only the final settlement travel voucher for all travel under a particular travel order. This practice contributed to continued noncompliance with the law because it effectively excluded large numbers of monthly or accrual vouchers from consideration of late payment interest and fees. In response to our inquiries, DFAS officials told us that as of April 2004, they had not paid any late payment interest or fees to soldiers because no final settlement vouchers were paid late. We questioned DFAS officials about their decision to exclude accrual vouchers from potential payment of late payment interest and fees. As a result, DFAS issued new guidance dated May 2004 to clarify that all travel voucher reimbursements are subject to late payment interest and fees. However, the provision in DOD’s FMR pertaining to this issue continues to require that individual soldiers request the late payment interest and fees. Furthermore, due to automated systems issues discussed later, DFAS does not have the capability to automatically identify late vouchers or calculate the late payment interest and fees. Consequently, travel computation offices have to use manual procedures to identify late vouchers and make manual calculations. Additionally, the new guidance directs reviewers to sign travel vouchers on the same day that they are submitted and then establishes the reviewers’ signature date as the date of submission of a proper voucher. We are unaware of any control procedure to monitor that reviewers are complying with the requirement. Subsequent to DFAS’s dissemination of the new guidance, we found numerous late vouchers for which DFAS did not pay late payment interest and fees. 
For example, the final vouchers for 63 soldiers with the Georgia Army National Guard’s 190th Military Police Company were processed late in April 2004 without payment of late payment interest or fees, even though they were covered by DFAS guidance issued in 2003. The vouchers were approved by unit reviewers on February 6, 2004, and were submitted to the Georgia USPFO on February 10, 2004, for additional review to identify any deficiencies that may cause the vouchers to be rejected. Due to the USPFO’s workload and the unavailability of appropriate personnel to review the vouchers, the vouchers remained at the USPFO from February 10, 2004 until April 2, 2004. DFAS CTO eventually received the vouchers on April 9, 2004, and paid them on April 27, 2004. The payments were made a total of 81 days after the supervisory signatures, thus making the payments 51 days over the 30 days allowed for payment. According to a DFAS official, DFAS’s manual procedures did not detect the vouchers as needing late payment interest and fees. Travel clerks were supposed to review dates of supervisory signatures to determine if the 30-day limit was exceeded and thus require the payment of late payment interest and fees. We notified DFAS officials of the oversight and they subsequently made the interest payments. A DFAS official also informed us that additional changes to DFAS’s manual procedures were being made to ensure that late vouchers are properly identified and late payment interest and fees paid. Because these changes in procedure were so recent, we could not evaluate their effectiveness. We found that weaknesses related to human capital contributed to travel reimbursement problems. These weaknesses include (1) a lack of leadership and oversight and (2) a lack of adequate training provided to Army Guard soldiers and travel computation office examiners. 
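The clerk's check described above, comparing the supervisory signature date against the payment date to flag vouchers over the 30-day limit, amounts to a simple date comparison that could be automated. The sketch below shows the idea; the voucher records are hypothetical.

```python
from datetime import date

LIMIT_DAYS = 30  # payment due within 30 days of a proper voucher's submission

# Hypothetical voucher records; "signed" is the reviewer's signature date,
# which recent DFAS guidance treats as the submission date.
vouchers = [
    {"id": "GA-190-001", "signed": date(2004, 2, 6), "paid": date(2004, 4, 27)},
    {"id": "GA-190-002", "signed": date(2004, 3, 15), "paid": date(2004, 4, 5)},
]

def flag_late(records):
    """Return (voucher id, days over the limit) for each late-paid voucher."""
    late = []
    for v in records:
        elapsed = (v["paid"] - v["signed"]).days
        if elapsed > LIMIT_DAYS:
            late.append((v["id"], elapsed - LIMIT_DAYS))
    return late

# The first record mirrors the Georgia case: 81 days from signature to
# payment, i.e., 51 days over the 30-day limit.
late_vouchers = flag_late(vouchers)
```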
GAO’s Standards for Internal Control in the Federal Government state that effective human capital practices are critical to establishing and maintaining a strong internal control environment. Specifically, management should take steps to ensure that its organization can promptly identify problems and respond to changing needs, and that appropriate human capital practices are in place and operating effectively. Without an overall leadership structure in place, neither the Army nor the Army Guard had developed and implemented processwide monitoring and performance metrics necessary to promptly identify and resolve problems causing late-paid travel vouchers. We also found that lack of adequate soldier training was a contributing factor to some travel voucher processing deficiencies. For example, several Army Guard soldiers with whom we spoke told us that they had received either inadequate or no training on travel voucher preparation and review. In addition, a DFAS CTO official told us that the on-the-job training provided to its new personnel in early 2003 initially proved to be inadequate when hundreds of thousands of travel vouchers flooded its offices following the mobilization surge during this period. To its credit, during fiscal year 2004, DFAS CTO enhanced its training program for voucher examiners. No one office or individual was responsible for the end-to-end Army Guard travel reimbursement process. The lack of overall leadership and fragmented accountability precluded the development of strong overarching internal controls, particularly in the area of program monitoring. Neither the Army nor the Army Guard was systematically using performance metrics to gain agencywide insight into the nature and extent of the delays, to measure performance, and to identify and correct systemic problems.
Our Standards for Internal Control in the Federal Government require agencies to have internal control procedures that include top-level reviews by management that compare actual performance to expected results and analyze significant differences. As shown in figure 6, internal reports prepared by DFAS CTO show that missing travel orders were the primary reason that it did not accept vouchers for payment. DFAS CTO reported that it rejected 104,000, or approximately 17 percent, of 609,000 vouchers during the period July 2003 through September 2004, with missing travel authorizations accounting for over half of the rejected vouchers. While this churning process appeared to be a primary factor in payment delays and soldier frustration, neither DFAS CTO, the Army, nor Army Guard offices had performed additional research to determine the root cause of this and other voucher deficiencies. Similarly, our analysis of a selection of individual travel vouchers also disclosed that some vouchers were returned to soldiers because of missing documentation or the lack of required signatures. However, neither DOD management officials nor we could determine the root cause of all instances of missing information. Some soldiers told us that DFAS CTO lost documentation that they had submitted. DFAS CTO also experienced problems with faxed vouchers, which caused vouchers and supporting documentation not to be printed and processed in some cases. According to a DFAS CTO official, DFAS was unaware that faxed vouchers were not printing until a soldier complained that DFAS was not receiving his faxes. DFAS did not monitor incoming faxes, even though it reported that faxed travel vouchers account for approximately 60 percent of the total mobilized Army Guard and Reserve travel vouchers it received. These problems obstructed the normal handling of a number of those vouchers.
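The kind of top-level monitoring metric the report finds missing, a running tally of rejection reasons and an overall rejection rate, is straightforward to produce once reasons are recorded. The sketch below uses hypothetical reason codes and counts chosen only to mirror the reported proportions (roughly 17 percent rejected, over half for missing travel orders); none of the data is real.

```python
from collections import Counter

# Hypothetical rejection log: one reason code per rejected voucher.
rejections = (
    ["missing travel order"] * 6
    + ["missing signature"] * 3
    + ["missing SNA"] * 1
)

def rejection_summary(reasons, total_received):
    """Tally rejection reasons and compute the overall rejection rate."""
    counts = Counter(reasons)
    rate = len(reasons) / total_received
    return counts.most_common(), round(rate, 2)

top_reasons, reject_rate = rejection_summary(rejections, total_received=60)
# With these sample figures, 10 of 60 vouchers are rejected (a 0.17 rate)
# and missing travel orders account for over half of the rejections.
```

A recurring summary of this sort would let management compare actual performance to expected results and target the dominant root cause, as the internal control standards cited above contemplate.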
In an effort to resolve this problem, DFAS CTO, in March 2004, ceased relying on an automatic print function of the fax system software and began manually printing vouchers. As shown in figure 7, our audit of a nonrepresentative selection of 139 travel vouchers (69 computed by DFAS CTO and 70 by USPFOs) found significant delays occurred between the date of the reviewer’s signature and the date that the travel computation office accepted the voucher. Some of these delays were caused by the time needed to correct vouchers that were deficient and resubmit them to DFAS CTO or another USPFO travel computation office. We determined that the travel computation office rejected 32 of the 72 travel vouchers delayed for more than 3 days because of missing documentation or the lack of required signatures and sent them back to the soldiers for corrections. A lack of documentation or other information prevented us from determining the reason for delays of more than 3 days for the remaining travel vouchers. In one case, an Army Guard soldier from Texas waited over 9 months to be paid. The soldier prepared and submitted a travel voucher for $765 and signed it on August 28, 2002. His unit supervisor signed the voucher as reviewed on the same day. The travel computation office rejected the voucher and sent it back to the soldier because the proper documentation was not attached. The travel computation office returned the voucher to the soldier a second time because it did not have the necessary signatures. A complete travel voucher was finally received and accepted by the travel computation office on April 25, 2003, 240 days after the unit’s initial review and was not paid until mid-June 2003. 
The Army’s lack of processwide oversight, including monitoring of the rejection and return of vouchers by DFAS CTO and other travel computation offices, resulted in undetected delays in reimbursement, leading to unnecessary frustration with the Army’s travel and reimbursement process and potential financial difficulty for the soldier. Further, without establishing and monitoring program metrics, management had no assurance that it had identified where the breakdowns were occurring and could not take the appropriate steps to resolve any identified problems. For example, although the Army relied on the individual unit reviewer for assurance that travel vouchers were properly reviewed and transmitted promptly to the travel computation offices, the Army did not establish and monitor performance metrics to hold these reviewers accountable for their critical role in the process. DFAS CTO officials told us that they have taken several steps to reduce the number of vouchers being returned to the soldiers due to missing signatures and missing mobilization orders. DFAS and the National Guard Financial Services Center—a field operating agency of the Chief, National Guard Bureau, that performs selected financial services—entered into a Memorandum of Agreement effective February 2004 whereby DFAS will obtain the assistance of the National Guard to address problems with certain vouchers that would otherwise be returned to soldiers. According to DFAS CTO data, since the implementation of the agreement through the end of fiscal year 2004, 13,523 travel vouchers were coordinated with the National Guard in this manner rather than initially being sent back to the soldiers for correction. However, we did not assess the effectiveness of these changes in reducing the number of vouchers that ultimately are returned to soldiers or in reducing the time necessary to process and pay vouchers. 
Although metrics were available on the average time DFAS CTO took to pay travel vouchers after receipt, the Army did not have statistical data on supplemental vouchers that could help provide additional insight into the extent and cause of processing errors or omissions by voucher examiners, unit reviewers, or Army Guard soldiers. Several of our case studies indicate that accuracy may be an important issue. For example, one method DFAS CTO uses to correct a voucher error or omission is to process a supplemental voucher. According to DFAS data, DFAS CTO processed about 251,000 vouchers related to Army Guard soldiers mobilized during the period October 1, 2001, through November 30, 2003, of which over 10,600 were supplemental vouchers. However, DFAS CTO officials could not tell us how many of these were due to errors or omissions by DFAS examiners or other factors. Our audit of 69 supplemental vouchers for the California 185th case study unit showed that 41 were due to DFAS CTO errors and the remaining 28 were due to errors or omissions on the part of the soldiers. Because DFAS CTO has not analyzed or tracked the extent or cause of supplemental vouchers to establish performance benchmarks, it has missed an opportunity to help identify recurring problems and solutions as well as measure improvements or deterioration in the effectiveness of the travel reimbursement program over time. Finally, we noted that although DFAS CTO established a toll-free number (1-888-332-7366) for questions related to Army Guard and Reserve contingency travel, DFAS did not have performance metrics to identify problem areas or gauge the effectiveness of this customer service effort. For example, DFAS did not systematically record the nature of the calls to the toll-free number. According to DFAS data, this number, staffed by 30 DFAS employees, received over 15,000 calls in June 2004. 
By monitoring the types of calls and the nature of the problems reported, DFAS could have developed important information to help target areas where training or improved guidance may be warranted. Further, DFAS had not established performance metrics for its call takers in terms of the effectiveness of resolved cases or overall customer service. Although Army regulations specify the responsibilities of soldiers, they do not require that soldiers be trained on travel entitlements and their role in the travel reimbursement process. Some of the Army Guard soldiers with whom we spoke told us that they had received either inadequate or no training on travel voucher preparation and review. In addition, a DFAS CTO official told us that the on-the-job training provided to its new personnel in early 2003 initially proved to be inadequate. To its credit, during fiscal year 2004, DFAS CTO enhanced its training program for voucher examiners. Army Guard soldiers in our case studies told us that they asked DFAS representatives or used the Internet in attempts to find, interpret, and apply DFAS guidance, an approach that by itself proved insufficient and required many trial-and-error attempts to properly prepare travel vouchers. As a result, many soldiers did not receive their travel payments on time. The lack of well-trained personnel can undermine the effectiveness of any system of travel expense reimbursement. Well-trained and informed personnel, conscientiously performing their assigned duties, are especially essential in the paper-driven, labor-intensive, manual, error-prone environment of the Army’s current travel authorization and reimbursement process. Army Guard soldiers. Army Guard soldiers in our case studies told us that they were confused about their responsibilities in the travel voucher reimbursement process because they had not been sufficiently trained in travel voucher processes related to mobilization.
For example, prior to September 11, 2001, most travel guidance addressed the criteria for single trips or sequential trips and was not always clearly applicable to situations in which Army Guard soldiers could be authorized short intervals of travel for temporary duty at different locations within their longer term mobilization. This “overlapping travel” proved to be problematic both for Army Guard soldiers trying to understand their travel voucher filing requirements and for travel computation office examiners responsible for reviewing travel vouchers. In addition, we found indications that some soldiers were not aware of DOD’s requirement to complete a travel voucher within 5 days of the end of travel or the end of every 30-day period in cases of extended travel. For example, as shown in figure 8, in our selection of 139 vouchers, 99 (71 percent) of the Army Guard soldiers did not meet the 5-day requirement. Fifty-two Army Guard soldiers submitted their vouchers more than 1 year late. Of the 59 Army Guard soldiers that we could locate and interview, 23 said that they lacked understanding about procedures, or lacked knowledge or training about the filing requirements. Eight Army Guard soldiers said that they procrastinated or forgot to file their travel vouchers on time. The remaining 28 said that they could not remember anything about the specific voucher we asked about or did not respond to our inquiries. Several soldiers offered their perspectives on their lack of understanding about certain requirements.

Case Study Illustration: Lack of Understanding of SNA Requirements

A soldier with the Pennsylvania National Guard 1st Battalion, 103rd Armor, whose active duty was extended to perform duties at Fluck Armory in Friedens, Pennsylvania, after his overseas duties were completed, was unaware that he needed an SNA to justify reimbursement of his out-of-pocket costs.
He told us that he assumed the Army would have given him the documentation he needed to support his travel voucher. When his voucher was not paid, he contacted DFAS CTO to determine the reason for the delay. DFAS CTO claimed that it had not received his voucher. After several resubmissions, he received a payment in July 2003 that was about $1,500 less than what he expected. He also received an e-mail notification from DFAS CTO that stated, “SNA needed for lodging at Friedens, PA.” The soldier told us that he did not know what SNA meant or how to obtain an SNA. Eventually, the soldier was able to obtain an SNA from the commanding officer of the Pennsylvania 876th Engineers. In February 2004, about 11 months after he completed his assignment at Fluck Armory, DFAS paid the soldier about $1,600 for his lodging and meal expenses. The following example illustrates a unit administrator’s experience and frustration in having to duplicate his efforts to obtain a single month’s travel expenses for 37 soldiers in his unit.

Case Study Illustration: Lack of Understanding of Documentation Requirements

A unit administrator with the Pennsylvania National Guard’s 876th Engineer Battalion told us that he was unaware that he needed to attach a copy of the mobilization order and TCS order to each travel voucher before he submitted vouchers to DFAS CTO for each of the 37 soldiers in his unit. He explained that there was only one block on the travel voucher form to insert a single order number. He attached the TCS order, incorrectly assuming that the DFAS CTO examiner would know that the soldiers, being in an Army Guard unit, could not have been on TCS duty in Germany performing installation security and force protection duties without having been mobilized.
As a result, he was concerned when he and other soldiers were reimbursed for the 1 week of their travel expenditures incurred in Germany, but not for the 3 weeks of expenses incurred during their initial duty at Fort Dix, New Jersey, where soldiers participated in mobilization training and other activities prior to overseas deployment. These expenses included transportation to Fort Dix, New Jersey, and daily incidental expenses. DFAS CTO asked him to submit new travel vouchers for this 3-week period with the mobilization order attached, which he did. The soldiers in this unit were collectively paid $7,400 about 4 weeks later, which represented the balance due on their initial travel vouchers. DFAS CTO personnel. DFAS CTO also had challenges training its examiner staff. The increase in mobilizations since September 11, 2001, and resulting increase in travel voucher submissions put a strain on DFAS CTO’s ability to make prompt and accurate travel reimbursements to Army Guard soldiers. As discussed previously, DFAS CTO hired more than 200 staff from October 2001 through July 2003, which brought the total number of staff to approximately 240. The training of these new employees was delivered on-the-job. Training time depended on the individual and type of work. For example, according to a DFAS CTO official, it took from 1 to 3 months for a voucher examiner to reach established standards. The DFAS CTO official told us that, in some cases, on-the-job training proved to be inadequate and contributed to travel reimbursement errors during this period. Two of our case studies indicated that mistakes by DFAS CTO contributed to reimbursement problems. For example, our California case study indicated that 33 soldiers were initially underpaid a total of almost $25,000 for meals, lodging, and incidental expenses when personnel at DFAS CTO selected an incorrect duty location and a corresponding incorrect per diem rate. 
Although these soldiers eventually received the amounts they were due, the corrections took months to resolve. Another example, described next, shows inconsistencies and errors in the payment of meal and incidental expense per diem to soldiers in a Pennsylvania Guard unit. Pennsylvania Army National Guard soldiers from Company C, 876th Engineer Battalion were deployed to Bad Aibling Station, Germany, in late July 2002, to augment active duty forces that were providing enhanced security at this installation. The 37 soldiers in Company C were authorized to purchase their meals because mess facilities were not available. Each month, the unit administrator prepared and submitted vouchers to DFAS CTO for reimbursement of meal and incidental expenses. Although each of the 37 soldiers should have received the same reimbursement amount in any given month, the actual payments to soldiers were not identical. For the first 3 months of their deployment, soldiers' travel reimbursements varied significantly. For example, for the September 2002 vouchers, payments ranged from $105 to $1,655. Additionally, 3 soldiers did not receive payment on their September vouchers for several months. The unit administrator told us he contacted DFAS CTO numerous times to discuss the inconsistencies in soldiers' reimbursements, and DFAS CTO representatives provided specific payment amounts that soldiers should expect to receive. However, when incorrect payments continued, he said he was not sure that DFAS CTO knew what to pay the soldiers. This situation led to soldiers' confusion and frustration with the travel reimbursement process. The travel reimbursement errors that occurred throughout the deployment affected 36 of the 37 soldiers. Of the 36, 12 soldiers experienced a single payment error and the remaining 24 experienced multiple payment errors.
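Because every soldier in the unit shared the same entitlement, a simple automated cross-check could have flagged the divergent payments before disbursement. The sketch below is purely illustrative — the function, names, and amounts are hypothetical, modeled loosely on the September 2002 example, and do not depict any DFAS system:

```python
# Illustrative only: flag per diem payments that deviate from a unit's
# expected common amount. When all soldiers share one entitlement, the
# most common (modal) payment serves as the expected value; deviations
# from it are suspect and warrant review before disbursement.
from collections import Counter

def flag_inconsistent_payments(payments):
    """Return soldiers whose payment differs from the unit's modal amount."""
    expected = Counter(payments.values()).most_common(1)[0][0]
    return {name: amt for name, amt in payments.items() if amt != expected}

# Hypothetical data modeled on the September 2002 vouchers.
september = {"Soldier A": 1655.00, "Soldier B": 1655.00,
             "Soldier C": 105.00, "Soldier D": 1655.00}
print(flag_inconsistent_payments(september))  # {'Soldier C': 105.0}
```

A check of this kind, run before payments are released, would have surfaced the $105-to-$1,655 spread described above.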
When DFAS CTO attempted to correct payment errors by processing additional payments, the additional payments resulted in overpayments to 35 soldiers because DFAS CTO examiners made errors in determining the daily meal and incidental expense per diem rate. DFAS CTO caught some of these overpayments and processed collections on final vouchers. Fifteen soldiers received collection notices from DFAS on their final vouchers and, although most of these soldiers had debt amounts ranging from $200 to $300 each, 1 soldier had almost $1,350 deducted from his final voucher payment. Although overpayments were collected from 15 soldiers in this unit, most of the remaining soldiers also received overpayments that have yet to be addressed. From the deployment period of late July 2002 to February 2003, the unit as a whole was reimbursed about $360,000. Of this amount, we determined that outstanding overpayments of over $11,200 remain. During fiscal year 2004, DFAS CTO worked toward improving staff training opportunities. For example, DFAS CTO used computer-based training to provide new personnel an initial overview of WINIATS and voucher computation procedures. In addition, a DFAS CTO official told us that a 40-hour course, which was designed specifically to address the types of vouchers received by DFAS CTO, has been established to train new employees. Further, according to the official, one benefit of the classroom instruction compared to on-the-job training is that it does not affect the productivity of experienced examiners, who previously were tasked with providing immediate on-the-job training to new hires in addition to their primary duties. Coupled with the process flaws and human capital issues previously addressed in this report, the lack of systems integration and automation along with other systems deficiencies contributed significantly to the travel reimbursement problems we identified.
The lack of integrated and automated systems results in the existing inefficient, paper-intensive, and error-prone travel reimbursement process. These problems are also a major factor in the churning issue discussed previously—the thousands of vouchers that are rejected and returned for missing documentation. Specifically, the Army does not have automated systems for some critical Army Guard travel process functions, such as preparation of travel vouchers, SNAs, and TCS orders, which precludes the electronic sharing of data by the various travel computation offices. Further, system design flaws impede management’s ability to comply with TTRA, analyze timeliness of travel reimbursements, and take corrective action as necessary. The key DOD systems involved in authorizing and reimbursing travel expenses to mobilized Army Guard soldiers are not integrated. In January 1995, the DOD Task Force to Reengineer Travel issued a report stating that this was a principal cause of the inefficient travel system. As we have reported and testified, decades-old financial management problems related to the proliferation of systems, due in part to DOD components receiving and controlling their own information technology investment funding, result in the current fragmented, nonstandardized systems. Lacking either an integrated or effectively interfaced set of travel authorization, voucher preparation, and reimbursement systems, the Army Guard must rely on a time-consuming collection of source documents and error-prone manual entry of data into a travel voucher computation system, as shown in figure 9. With an effectively integrated system, changes to personnel records, such as mobilization orders, would automatically transfer to the travel pay system. While not as efficient as an integrated system, an automatic personnel-to-travel pay system interface can reduce delays caused by the return of vouchers for missing travel authorizations. 
Without an effective interface between the personnel and travel pay systems, we found instances in which travel vouchers were returned to soldiers due to missing travel authorizations, causing significant time delays. For example, DOD took almost 500 days to pay a California Army Guard soldier his travel pay. This extensive delay was due in part to the soldier not submitting a paper copy of his mobilization order. If the system that created the mobilization order had interfaced with the travel voucher computation system, a portion of Army Guard and Army Reserve vouchers returned by DFAS CTO—a significant problem as discussed previously—could have been eliminated. This, in turn, would increase the efficiency and effectiveness of the process by reducing paper, reducing the return voucher workload at DFAS CTO, and decreasing the time to reimburse the soldiers. Further, the lack of an integrated travel system and consequent “workarounds” increase the risk of errors and create the current inefficient process. As noted previously, several separate WINIATS systems at DFAS and the USPFOs can process travel vouchers for mobilized Army Guard soldiers. These databases operate on separate local area networks that do not exchange or share data with other travel computation offices to ensure travel reimbursements have not already been paid. Instead, as shown in figure 9, multiple WINIATS systems transmit data to the DFAS Operational Data Store (ODS)—a separate database that stores disbursement transactions. As a result, when a soldier submits a voucher, voucher examiners must resort to extraction and manual review of data from ODS. Next, voucher examiners research and calculate previous payments— advances or interim payments—made by other Army WINIATS systems. This information is then manually entered into WINIATS for it to compute the correct travel reimbursement for the current claim. In addition to being time consuming, this manual workaround can also lead to mistakes. 
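The manual research step described above — determining whether another WINIATS office has already paid a claim covering the same travel period — is a lookup that could be automated against a central disbursement store. A minimal sketch, with hypothetical record fields standing in for ODS data:

```python
# Illustrative only: check a central disbursement store (modeled loosely
# on ODS) for prior payments to the same traveler over overlapping dates,
# before computing a new travel claim. Field names are hypothetical.
from datetime import date

def periods_overlap(start_a, end_a, start_b, end_b):
    """True when two date ranges share at least one day."""
    return start_a <= end_b and start_b <= end_a

def prior_payments_for_period(records, traveler_id, start, end):
    """Return prior disbursements to this traveler for overlapping dates."""
    return [r for r in records
            if r["traveler_id"] == traveler_id
            and periods_overlap(start, end, r["start"], r["end"])]

# Hypothetical record resembling a duplicate-payment scenario.
records = [{"traveler_id": "MI-0001", "start": date(2002, 8, 1),
            "end": date(2002, 9, 15), "amount": 1384.00}]
hits = prior_payments_for_period(records, "MI-0001",
                                 date(2002, 8, 20), date(2002, 9, 30))
print(len(hits))  # 1 -- an overlapping prior payment is flagged
```

An automated check of this sort, applied at claim computation time, would catch duplicate claims that today depend on an examiner's manual research.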
For example, a Michigan soldier was overpaid $1,384 when two travel computation offices paid him for travel expenses incurred during the same period in August and September 2002. This overpayment was detected by DFAS CTO when the soldier filed his final voucher in August 2003.

Lack of Automated Systems

DOD lacks an automated system for preparing travel vouchers, which hinders the travel reimbursement process. As shown in figure 9, soldiers manually prepare their paper travel vouchers, attach many paper travel authorizations and receipts, and distribute them via mail, fax, or e-mail to one of the travel computation offices. The lack of an automated system increases the risk of missing documents in voucher submissions, which results in an increased number of vouchers rejected and returned by DFAS CTO. Another consequence of this inefficient process is the need for additional staff to process the vouchers, as discussed earlier in this report. In addition, the Army currently lacks a centralized system to issue uniquely numbered and standard formatted SNAs regarding housing and dining facilities for mobilized soldiers. The lack of centralized standard data precludes electronic linking with any voucher computation system and the reduction of paperwork for individual soldiers, as they must obtain and accumulate various paper authorizations to submit with their vouchers. Further, the Army lacks an automated system for producing TCS orders. As illustrated at the top of figure 9, the various mobilization stations use a word processing program to type and print each individual TCS order to move a soldier to such places as Afghanistan and Iraq. Similar to the process for SNAs, mobilization stations maintain separate document files for each TCS order issued. The absence of a standard automated system used by each of the mobilization stations prevents the Army from electronically sharing TCS data with other systems, such as a voucher computation system.
Consequently, the process will remain vulnerable to delays for returned voucher submissions as mobilized Army Guard soldiers continue to receive paper SNAs and TCS orders. Finally, even if the Army automates the TCS, SNA, and voucher preparation processes, as discussed previously, these new automated systems would need to be either integrated or interfaced with a voucher computation system to decrease the amount of time from initiation of travel to final settlement of travel expenses. In addition to being stand-alone, nonintegrated systems that do not have the capability to exchange/share information, the over 60 separate WINIATS systems at DFAS and the USPFOs that can process travel vouchers for mobilized Army Guard soldiers do not consistently capture critical dates useful for management oversight and tracking. As a result, complete and accurate information is not transmitted to ODS—the separate DFAS database that stores disbursement transactions—and is not available for a variety of management needs. Specifically, many Army Guard USPFOs were not populating key data fields in WINIATS, such as the voucher preparation date, supervisor review date, and the travel computation office receipt date. According to our Standards for Internal Control in the Federal Government, information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enables them to carry out their responsibilities. These dates are key in providing DOD management with the information necessary to comply with TTRA, which requires DOD to reimburse soldiers for interest and fees when travel vouchers are paid late. In addition, these dates are essential in providing management with performance information that can help DOD improve its travel reimbursement process. 
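Capturing these dates is only half the control; they must also be validated on entry. The following sketch shows the kind of date-sequence edit checks a voucher system could apply — the field names are hypothetical illustrations, not actual WINIATS fields:

```python
# Illustrative only: edit checks on key voucher dates. For a settlement
# voucher, preparation should follow the end of travel, supervisory
# review should follow preparation, and computation office receipt
# should follow review; any other ordering is flagged for correction.
from datetime import date

def date_anomalies(rec):
    """Return a list of detected date-sequence anomalies for one voucher."""
    errors = []
    if rec["prepared"] < rec["travel_end"]:
        errors.append("voucher prepared before travel ended")
    if rec["reviewed"] < rec["prepared"]:
        errors.append("supervisory review predates voucher preparation")
    if rec["received"] < rec["reviewed"]:
        errors.append("computation office receipt predates review")
    return errors

# Hypothetical record with a review date nearly a year out of sequence.
rec = {"travel_end": date(2003, 5, 10), "prepared": date(2003, 5, 12),
       "reviewed": date(2002, 6, 1), "received": date(2003, 5, 20)}
print(date_anomalies(rec))
```

Checks like these, applied at data entry, would reject the kinds of impossible date combinations described in the audit findings below rather than letting them flow into the disbursement database.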
A March 2003 report by DFAS Internal Review noted 39 percent of the claims it audited did not have the date of receipt in the travel computation office or the date the supervisor approved the voucher recorded in WINIATS. In April 2003, DFAS Indianapolis directed all travel computation offices using WINIATS to input the key dates of preparation, review, and receipt by a travel computation office. Our analysis of 622,821 Army Guard travel voucher transactions filed from October 1, 2001, through November 30, 2003, and processed by DFAS CTO and the USPFOs found that at least one of these key dates was not recorded in ODS for 453,351, or approximately 73 percent, of the transactions. Further, when we questioned the 54 USPFOs in March 2004, 33 of the 41 that responded told us that they were not capturing all of these critical dates. Many respondents were unaware that WINIATS could collect these dates. In cases in which the key dates necessary to perform the evaluation were being captured, incorrect entries were not detected. A WINIATS representative told us that the system was not designed with certain edit checks to detect data anomalies such as those caused by erroneous data entry. We found that 52 of 191 in our nonrepresentative selection of travel vouchers filed by soldiers had incorrect dates recorded in ODS (e.g., the date of supervisory review predated the date of travel ended by nearly a year) and that these data entry errors were not detected. Without system edit checks to detect data anomalies, the accuracy and reliability of the data are questionable, and consequently, management cannot carry out its oversight duties. DOD’s current plan—deployment of the Defense Travel System (DTS)—to automate its paper-intensive, manual travel reimbursement process will not resolve key flaws we found in reimbursement of travel expenses to mobilized Army Guard soldiers. 
DOD recognized the need to improve the travel reimbursement process in the 1990s and has been developing and implementing DTS. However, DTS is currently not able to process mobilized travel authorizations (e.g., mobilization orders, TCS orders, and SNAs) and vouchers and, therefore, does not provide an end-to-end solution for paying mobilized Army Guard soldiers for travel entitlements. According to DOD, DTS will provide this capability when the Defense Integrated Military Human Resources System (DIMHRS) is implemented. Currently, DOD plans to deploy DIMHRS to the Army Guard in March 2006. In addition, DTS does not identify and calculate late payment interest and fees required by law. Furthermore, DFAS auditors have reported additional problems with DTS. Given DOD's past failed attempts at developing and implementing systems on time, within budget, and with the promised capability, and that the effort has already been under way for about 8 years, it is likely that the department will be relying on the existing paper-intensive, manual system for the foreseeable future. In July 1994, the DOD Task Force to Reengineer Travel was formed to study the existing travel system. In January 1995, this task force concluded that the existing travel system was fragmented, inefficient, expensive to administer, and occasionally impeded mission accomplishment. It recommended new travel policies and procedures, simplified entitlements, and a travel process that takes advantage of automation to become traveler friendly and efficient. In December 1995, the Program Management Office-Defense Travel System (PMO-DTS) was established to implement these recommendations and acquire reengineered travel services from commercial vendors. At the end of fiscal year 2003, DOD reported investing nearly 8 years and about $288 million in DTS.
In 2003, PMO-DTS estimated an additional $251 million was needed for DTS to be fully operational at the end of fiscal year 2006, resulting in an estimated total development and production cost of over 10 years and $539 million. This cost estimate does not include deploying DTS to the majority of the Army Guard USPFOs. Although the Army Guard supplies most of the mobilized soldiers in support of the global war on terrorism, DTS deployment to the 54 USPFOs is not scheduled to begin until fiscal year 2006. The Army is expected to fund the majority of the costs to field the program to the USPFOs, where mobilized Army Guard travel begins. The DTS total life cycle cost estimate, including the military service and Defense agencies, is $4.39 billion. While DTS purports to integrate the travel authorization, voucher preparation, and approval and payment process for TDY travel, it does not integrate travel authorizations and reimbursements for mobilized Army Guard soldiers. DOD officials have stated that currently DTS cannot process mobilized Army Guard travel reimbursements involving various consecutive and/or overlapping travel authorizations. As discussed earlier, mobilized Army Guard travel involves various travel authorizations, most with overlapping dates. DOD officials acknowledged that DTS would not produce the various travel authorizations related to mobilization travel, because DOD is presently designing a pay and personnel system, DIMHRS, to accomplish this task. DOD’s current strategy is for DTS to electronically capture the travel authorization information from DIMHRS, after which a soldier would use DTS to prepare and submit a travel voucher. This would require that DIMHRS have the capability to electronically capture the various authorizations applicable to Army Guard travel, such as mobilization and temporary change of station orders, and that SNAs are generated from a standard, automated system that can effectively interface with DTS. 
DOD officials do not plan to implement DIMHRS at the Army Guard until March 2006. As a result, the timing and ability of the Army Guard to process mobilization travel vouchers through DTS appears to hinge on the successful development and implementation of DIMHRS and its interface with DTS. DTS is not being designed to identify and calculate travelers’ late payment interest and fees in accordance with TTRA. As discussed earlier in this report, DOD’s current travel computation system does not automatically identify and calculate the TTRA late payment interest and fees. Furthermore, no controls are in place to ensure that the manual calculation is performed and that the interest and fee amounts are entered into the system for payment. According to DTS officials, DOD has not directed that DTS be designed to include such a feature. As a result, as currently designed, DTS provides no assurance that late payment interest and fees will be paid to travelers as required pursuant to TTRA. Further, the DTS design does not meet the expectation set out in the DOD Financial Management Regulation, that DTS will automatically determine if a late payment fee is due. A DFAS Kansas City Statistical Operations and Review Branch report identified several significant problems with the current DFAS implementation. Specifically, for the first quarter of fiscal year 2004, DFAS reported a 14 percent inaccuracy rate in DTS travel payments of airfare, lodging, and meals and incidental expenses. This report cited causes similar to those we identified in the areas of traveler preparation of claims and official review of claims. In addition to these deficiencies, DFAS noted errors in DTS calculations for meals and incidental expenses. Another DFAS Internal Review report, dated June 15, 2004, indicated that improvements were needed in DTS access controls to prevent or detect unauthorized access to sensitive files. 
DFAS Internal Review reported that the PMO-DTS had not established standard user account review and maintenance procedures. This leaves DTS potentially vulnerable to (1) prior DTS users retaining access to the system and (2) current users having improper access levels. The DFAS Internal Review report concludes that without conducting periodic account maintenance procedures and detecting unauthorized access, DTS is vulnerable to unauthorized individuals gaining access to the system and confidential information, resulting in potential losses to DOD employees and the government. The report also noted that DTS was not adequately retaining an audit trail of administrative and security data, leaving management unable to investigate suspicious activities or research problem transactions. At the conclusion of our audit work, PMO-DTS officials informed us that they have taken or plan to take steps to address the problem areas in the two reports discussed above. We were unable to evaluate the potential effectiveness of those actions in time for release of this report. As Army Guard soldiers heed the call to duty and serve our country in vital and dangerous missions both at home and abroad, they deserve nothing less than full, accurate, and timely reimbursements for their out-of-pocket travel expenses. However, just as we recently reported for Army Guard and Reserve pay, our soldiers are more often than not forced to contend with the costly and time-consuming “war on paper” to ensure that they are properly reimbursed. The process, human capital, and automated systems problems we identified related to Army Guard travel reimbursement are additional examples of the broader, long-standing financial management and business transformation challenges faced by DOD. 
Similar to our previously reported findings for numerous other DOD business operations, the travel reimbursement process has evolved over years into the stovepiped, paper-intensive process that exists today and was ill-prepared to respond to the current large and sustained mobilizations. Without systematic oversight of key program metrics, breakdowns in the process remain unidentified and effective controls cannot be established and monitored. Finally, DOD's long-standing inability to develop and implement systems solutions on time, within budget, and with the promised capability appears to be a critical impediment in this area. While immediate corrective actions can be taken in some areas, the problems we identified with DOD's longer term automated systems initiatives—DIMHRS and DTS—raise serious questions of whether and when mobilized soldiers' travel reimbursement problems will be resolved. We recommend that the Secretary of Defense direct the Secretary of the Army, in conjunction with the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense (Personnel and Readiness), to take the following 23 actions to address the issues we found with respect to the controls over processes for payment of travel entitlements to mobilized Army Guard personnel. Modify existing policies and procedures to require that mobilization and related travel orders clearly state meal entitlements. Such orders should specify that mobilized soldiers are not required to pay for meals in government dining facilities. Develop and implement guidance to standardize the form and content of statements of non-availability for soldiers on contingency operations.
The guidance should establish an acceptable basic SNA form (e.g., written, memo, stamp, number) and should address, but not be limited to, the following elements: the period(s) covered; the type of per diem (e.g., housing, meals); the rationale for acceptance (e.g., shift work, inadequacy of transportation) or denial; the applicable meal rates (e.g., locality meal rate, proportional meal rate); and the required authorization levels and signatures. Clarify existing guidance in the PPG for contingency operations by including situational examples based on laws and regulations similar to those in the JFTR to assist decision makers in making determinations of nonavailability related to quarters and meals. Enhance efforts to ensure compliance with TTRA through the payment of late payment interest and fees to soldiers for late travel reimbursements. Such efforts should include, at a minimum, (1) updating DOD's Financial Management Regulation provisions concerning the payment of late payment interest and fees; (2) developing metrics pertaining to the payment of late payment interest and fees under TTRA and monitoring to ensure compliance; (3) considering the feasibility of identifying and paying those soldiers who were entitled to TTRA payments but, because DFAS made no such payments prior to February 2004, did not receive them; and (4) paying the soldiers who we determined were due late payment interest and any appropriate late payment fees. Consider appointing an agencywide leadership position or ombudsman with accountability for resolving problems Army Guard soldiers encounter at any point in the travel authorization and reimbursement process.
Develop and monitor programwide performance metrics to accomplish the following: identify the root causes of travel vouchers that are rejected and returned by DFAS CTO and USPFO travel computation offices, including the reasons why individual soldiers fail to timely and properly prepare and submit travel vouchers; provide assurance that unit review of travel vouchers accomplishes the purposes of that review, including verifying that the required documents are attached and all needed signatures are included; monitor and analyze supplemental voucher data to help identify recurring problems and solutions as well as the quality of the travel reimbursement program over time; and document the types and frequency of travel-related problems reported to the DFAS toll-free number and measure the effectiveness of this customer service effort to help target areas where training or improved guidance may be warranted. Evaluate the adequacy and frequency of training provided to mobilized Army Guard soldiers that teaches them to accurately prepare and timely submit travel vouchers, including procedures for obtaining and submitting authorizing documentation for per diem entitlements. Review the outstanding travel payment problems we identified at the 10 case study units to identify and resolve any remaining travel-related issues for the affected soldiers. Develop enhanced policies and accountability mechanisms to use the current WINIATS system to comply with the requirements to identify late payments and reimburse soldiers for late payment interest and fees required pursuant to TTRA.
Specific actions include reiterating or enhancing current policies requiring the capture of critical dates for management oversight and compliance with TTRA, including steps required to activate the WINIATS Liaison Screen, and providing training and guidance to the USPFOs on the use of WINIATS capabilities to capture traveler, reviewer, and travel computation office receipt dates, and to upload that information to DFAS's Operational Data Store system. Develop and implement WINIATS system edit checks to ensure the accuracy of manual entries into WINIATS for the period of travel, the dates the traveler and reviewer signed the voucher, and the date the travel computation office received the voucher. Develop an automated, centralized system for SNAs covering potential non-availability issues experienced by mobilized Guard soldiers. As part of the effort currently under way to reform DOD's travel (DTS) and pay and personnel (DIMHRS) systems, incorporate a complete understanding of the Army Guard travel reimbursement problems as documented in this and related reports into the requirements development for these systems, including automation of critical travel process functions such as travel vouchers and TCS orders; integration or interface of automated travel vouchers, SNAs, TCS orders, mobilization orders, and other relevant systems; and capabilities to identify, calculate, and pay late payment interest and fees required pursuant to TTRA. In written comments on a draft of this report, which are reprinted in appendix II, DOD concurred with 21 of our 23 recommendations.
DOD partially concurred with our recommendations regarding (1) development of an automated, centralized system for SNAs, covering potential nonavailability of government meals or lodging for mobilized Army Guard soldiers, and (2) incorporation of a complete understanding of the Army Guard travel reimbursement problems, including late payment interest and fees pursuant to TTRA, into the requirements development for DTS. The actions proposed by DOD in response to these two recommendations do not ensure that the SNA problems we identified will be corrected or that DOD will have a travel system in place that will comply with TTRA, thus continuing the risk that soldiers will not receive all payments they are entitled to receive. The department also requested the inclusion of additional responsible DOD officials in the recommendations section of this report, which we have added as appropriate. Concerning our recommendation that DOD develop an automated, centralized system for SNAs, DOD responded that the Office of the Under Secretary of Defense (Personnel and Readiness) (OUSD (P&R)) and the DTS Program Management Office are working closely to ensure that functional requirements for military travel processes are incorporated in the development of DTS and DIMHRS. DOD also pointed out that DIMHRS is tentatively scheduled to be deployed to the Army National Guard in March 2006, and that, therefore, it is not feasible to develop an "interim" automated, centralized system for SNAs. DOD also stated that in the interim, OUSD (P&R) will work with the military services' lodging communities to establish a standard SNA, and that a centralized process will be developed for all military services. Based on our understanding of the planned and existing functionalities of DTS and DIMHRS and the problems identified during our audit, we do not agree with DOD's reasons for not resolving the stated weaknesses.
Specifically, according to a DOD OUSD (P&R) official in the Requirements and Reengineering Division, Joint Requirements and Integration Office, DIMHRS is not currently being designed to issue SNAs. Further, although DOD stated that it plans to develop a standard SNA form and centralized SNA process, this response does not provide for the development of an automated system, which could be incorporated into the development of DIMHRS or DTS either as an integrated capability or an interoperable interface. By planning to work with lodging communities to establish a standard SNA, it appears that DOD is taking an initial step toward addressing the plight of mobilized Army Guard soldiers who have had problems regarding SNAs. However, DOD needs to ensure that all information regarding nonavailability of both lodging and meals is available to appropriate decision-makers in the SNA approval process. The majority of problems experienced by Army Guard soldiers in our case studies related to whether meals were determined to be adequately available to soldiers in various circumstances such as the irregular hours required by guard duty, distance to meal facilities, inadequate transportation to meal facilities, lack of 24-hour mess halls, or other circumstances. Proper consideration of these meal issues in addition to those related to lodging, generally includes review through the soldier’s chain of command and installation commander and should result in timely decisions to either (1) issue SNAs that authorize per diem for meals, enabling Army Guard soldiers to be reimbursed for food from commercial locations or (2) ensure that Army Guard soldiers receive government- provided meals free of charge. 
Because DOD’s comments do not provide solutions to the range of problems we observed with the SNA process, we continue to recommend that an automated, centralized system for SNAs be developed, which addresses the variety of nonavailability issues experienced by mobilized Army Guard soldiers. In regard to DOD’s partial concurrence with our recommendation related to DTS and TTRA compliance, DOD stated that this programming feature was not required because the travel reimbursement process is completed within 5 days of the traveler entering pertinent data into DTS. DOD’s internal processes stipulate that before a travel voucher entered into DTS can be paid, it must be reviewed and approved. However, our report documented significant delays in the review and approval process in the current paper intensive system, and DOD did not provide any support for its claim that DTS reimbursements will be made within a 5-day period. We continue to see the need for full DOD implementation of this recommendation because the review and approval process is a human capital function that DTS will not replace. Further, the likelihood of late payment of travel vouchers processed through DTS remains because of potential factors such as (1) excessive work loads, (2) questions during document reviews, (3) inadequate attention to reviewer responsibilities, and (4) other unforeseen delays in the process. To ensure that Army Guard soldiers and others are paid late payment interest and fees required pursuant to TTRA, DTS would need to include capabilities to identify, calculate, and pay such late payment interest and fees. Such capability would also allow DOD to conduct ongoing monitoring of the timeliness of travel reimbursements made through DTS. 
Finally, regarding the 21 recommendations with which DOD concurred, DOD indicated that the stated action(s) taken were complete with respect to the need for (1) mobilization and related travel orders to clearly state meal entitlements, (2) standardization of the form and content of SNAs for contingency operations, (3) development and monitoring of late payment interest and fee metrics, (4) appointment of an agencywide leadership position or ombudsman, (5) identification of root causes for untimely and improperly prepared and submitted travel vouchers, and (6) evaluation of the adequacy and frequency of travel voucher preparation training provided to mobilized Army Guard soldiers. While the actions DOD described in commenting on our report appear responsive to our recommendations, we have not evaluated the effectiveness of their implementation and, therefore, cannot determine whether these measures will resolve the problems we identified. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of the report to interested congressional committees. We will also send copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretary of the Army, the Director of the Defense Finance and Accounting Service, the Director of the Army National Guard, and the Chief of the National Guard Bureau. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact Gregory D. Kutz at (202) 512-9095 or [email protected], John J. Ryan at (202) 512-9587 or [email protected], or Mary Ellen Chervenic at (202) 512-6218 or [email protected]. Major contributors to this report are acknowledged in appendix III. 
To obtain an understanding and assess the design of the process, personnel (human capital), and system controls used to provide assurance that mobilized Army National Guard (Army Guard) soldiers were reimbursed in a timely manner for travel expenses and per diem entitlements, we reviewed applicable laws, regulations, policies, and procedures; observed the travel authorization, review, approval, and reimbursement processes; and interviewed cognizant agency officials. With respect to applicable laws, regulations, policies, and procedures, we obtained and reviewed the Travel and Transportation Reform Act of 1998 (TTRA) (Pub. L. No. 105-); the General Services Administration's (GSA) Federal Travel Regulation; the Department of Defense’s (DOD) Joint Federal Travel Regulation; DOD’s Financial Management Regulation, Volume 9, Travel Policies; the Army National Guard Financial Services Center’s The Citizen-Soldier’s Guide to Mobilization Finance; and the Department of the Army’s Personnel Policy Guidance for Operations Noble Eagle (ONE), Enduring Freedom (OEF), and Iraqi Freedom (OIF).
We also reviewed the following Defense Finance and Accounting Service – Indianapolis (DFAS-IN), Travel Technical Messages (TTM)— policy implementation messages relating to late payment fees and interest: TTM 00-08, May 2000, which implemented the provisions of TTRA and provided that its terms applied to “settlement vouchers;” TTM 01-01, November 2000, which provided that late payment interest and fees be calculated and submitted by the traveler on the final payment amount only; TTM 03-04, April 2003, which removed the requirement that the late payment interest calculation was to be calculated and submitted by the traveler; and TTM 04-10, May 2004, which directed that late payment interest and fees be calculated on all travel vouchers, not just final vouchers, as well as directed reviewers to sign a travel voucher on the same day it is submitted so that DFAS-IN can apply TTRA requirements using the reviewer’s signature date as a surrogate for the submission date. We also used the internal controls standards provided in the Standards for Internal Control in the Federal Government. We interviewed officials from the National Guard Bureau (NGB); United States Property and Fiscal Offices (USPFOs); Army and National Guard pay centers; unit, duty station, and mobilization station officials; and individual soldiers to obtain an understanding of their experiences in applying and complying with these policies and procedures. In addition, as part of our audit, we performed a review of certain process and system controls. Specifically, we obtained information and documentation and/or performed walk-throughs of travel voucher processing through the Integrated Automated Travel System, Version 6.0 (WINIATS) at DFAS-IN and one Army National Guard USPFO. During those walk-throughs, we observed the operation of control activities over the review, approval, and timely and accurate payment of travel vouchers. 
We obtained documentation and performed walk-throughs regarding soldier readiness checks—the Army’s mobilization station process to ensure that Army Guard units have and know what they need. We reviewed existing guidance for determining and authorizing nonavailability of housing and meals at duty stations. Because the systems that produce individual Army Guard soldiers’ travel orders are decentralized and not integrated with travel reimbursement systems, we did not conduct walk-throughs of them. However, we interviewed officials from Army National Guard USPFOs, NGB, DFAS-IN, and the Army Finance Command to augment our documentation and walk-throughs. Because our preliminary assessment determined that current authorization, request, review, and approval processes used to pay travel reimbursements to mobilized Army Guard soldiers relied extensively on paper-intensive, nonintegrated systems and error-prone manual transaction entry, we did not statistically test current processes and controls. The lack of accurate and complete centralized data on Army Guard travel also precluded statistical testing. Instead, we used case studies and data mining to provide a more detailed perspective of the design of controls and the nature of deficiencies in the key areas of processes, people (human capital), and systems. We focused on how these key areas were at work in the three phases of the travel and reimbursement process: (1) authorizations; (2) travel voucher preparation, submission, unit review, and transmission of reimbursement claims; and (3) travel computation office review, reimbursement computation, audit, and payment. For our case studies, we gathered available data and analyzed the pay experiences of Army Guard units mobilized in support of Operations Iraqi Freedom, Noble Eagle, and Enduring Freedom during October 2001 through November 2003.
We audited the following 10 Army Guard units as case studies of the design of controls ensuring consistent and accurate determination, authorization, communication, and documentation of per diem entitlements for soldiers assigned to those units: Alabama 20th Special Forces, California 19th Special Forces, Georgia 190th Military Police, Louisiana 239th Military Police, Maryland 115th Military Police, Mississippi 114th Military Police, Mississippi 20th Special Forces, Pennsylvania 876th Engineer Battalion, and Virginia 20th Special Forces. In selecting these 10 units for our case studies, we sought to obtain the travel reimbursement experiences of units assigned to Operation Iraqi Freedom, Operation Enduring Freedom, or Operation Noble Eagle. We limited our case study selection to those units mobilized during the period from October 1, 2001, through November 30, 2003. From our preliminary assessment of this population, we determined that military police and special forces units were experiencing problems related to per diem. We used mobilization data supplied by NGB to assist us in identifying military police and special forces units. From the 231 military police and special forces units in the NGB database and from our prior work on Army Guard military pay (GAO-04-89), we selected 4 special forces and 4 military police units experiencing problems related to per diem for case studies. Two other units were selected from a review of data furnished to us by DOD from its Remedy Tracking System. These 10 case studies were audited to provide a more detailed view of the types and causes of problems experienced by these units as well as the financial impact of these problems on individual soldiers and their families.
To obtain diverse perspectives on the nature of the reported per diem problems, we interviewed selected guard and duty station commanders (where the per diem problems were experienced) and selected individual soldiers experiencing travel reimbursement problems for our case study units. We also obtained and reviewed relevant individual travel vouchers and supporting documentation for soldiers in selected units. In addition, we used available data to estimate underpayments, overpayments, late payments, and meal entitlement amounts that Army Guard soldiers expected to receive. We referred eight units, which, at the end of our audit, included Army Guard soldiers that were unpaid, partially paid, or in debt, to appropriate DOD officials to resolve any amounts owed to the Army Guard soldiers or to the government. For our individual voucher data mining, we obtained a database from DFAS’s Operational Data Store (ODS) of travel voucher reimbursement transactions for travel that began during the period October 1, 2001, through November 30, 2003. The data contained approximately 6 million civilian, Army, Army Reserve, and Army Guard travel voucher transactions paid through the DFAS-IN disbursing station symbol number 5570. These travel vouchers accounted for $3.8 billion in reimbursements. The ODS database did not uniquely identify mobilized Army Guard travel vouchers. In order to identify Army Guard vouchers, we obtained a database extract of Army Guard soldiers paid during the period October 1, 2001, through November 30, 2003, from the Defense Joint Military Pay System-Reserve Component. We did not verify the accuracy or completeness of either of these databases. Using dates of mobilized Army Guard service contained in the payroll database, we extracted the Social Security numbers for Army Guard soldiers with periods of active service of 30 days or greater.
We matched the Social Security numbers from the Army Guard payroll database to the ODS travel reimbursement transaction database. There were approximately 623,000 travel voucher transactions processed for Army Guard for the period October 1, 2001, through November 30, 2003, totaling $389 million. We then sorted the 623,000 travel vouchers by the number of days it took to get reimbursed from the date travel ended. We identified 26,414 travel vouchers that took over 120 days to get reimbursed. We made a nonrepresentative selection of transactions from the 26,414 grouping reimbursed after 120 days, along with travel vouchers selected from the unit case studies, and audited 191 travel vouchers. Our analysis of the 191 vouchers found that 52 had incorrect dates entered into the database and, in fact, were paid timely. We performed no further audit on these 52 travel vouchers and concentrated our analysis on the remaining 139 travel vouchers. We obtained or requested copies of the travel vouchers and supporting documentation for each potential late reimbursement transaction selected—primarily the travel voucher, travel order(s), special authorizations such as certificates or statements of non-availability and missed meals, and receipts for other reimbursable expenses. 
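The matching and filtering steps described above can be sketched in Python. The record layouts and field names (SSN, travel end date, paid date) are our assumptions, since the actual ODS and payroll schemas are not described in the report:

```python
from datetime import date

def flag_late_guard_vouchers(payroll, vouchers, threshold_days=120):
    """Illustrative version of the matching steps described in the text.

    payroll:  iterable of (ssn, active_service_days) tuples
    vouchers: iterable of dicts with 'ssn', 'travel_end', 'paid_date'
              (datetime.date values); field names are assumptions.
    Returns (all matched Guard vouchers, slowest first; vouchers paid
    more than threshold_days after travel ended).
    """
    # Soldiers with periods of active service of 30 days or greater
    guard_ssns = {ssn for ssn, days in payroll if days >= 30}

    # Match SSNs to isolate Army Guard vouchers in the ODS extract
    matched = []
    for v in vouchers:
        if v["ssn"] in guard_ssns:
            days_to_pay = (v["paid_date"] - v["travel_end"]).days
            matched.append(dict(v, days_to_pay=days_to_pay))

    # Sort by days to reimbursement, slowest first, and flag late vouchers
    matched.sort(key=lambda v: v["days_to_pay"], reverse=True)
    late = [v for v in matched if v["days_to_pay"] > threshold_days]
    return matched, late
```

In the report's data, a screen of this kind reduced roughly 623,000 Army Guard voucher transactions to the 26,414 candidates reimbursed after 120 days.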
We reviewed these data and used a data collection instrument to collect the necessary information. Specifically, we compared for accuracy the dates and amounts in the transaction database to the dates and amounts on the supporting documentation; calculated the days elapsed between (1) the date travel ended, (2) the date the traveler signed the travel voucher, (3) the date the unit reviewer signed the travel voucher, (4) the date received by the processing center, and (5) the date of payment to the soldier, and identified where the significant delay(s) occurred for each voucher; and attempted contact with the soldier and, as appropriate, the unit reviewer and the travel computation office to determine the reason(s) for all significant delays occurring between the end of travel and the date of reimbursement. The scope of our review did not include verification of the accuracy of travel voucher payments. For the purpose of determining the reasons for late reimbursements, we used available documentation supplemented with follow-up inquiries, where possible, with soldiers, unit supervisory reviewers, and the cognizant travel computation office personnel to gain insight into the facts, circumstances, and points of view of all relevant parties. In our case studies, which focused on per diem problems, and in our voucher analysis, which focused on late reimbursements, we attempted to determine the issues surrounding soldiers’ questions of accurate per diem reimbursements and the reasons delays occurred in reimbursing soldiers for travel entitlements and expenses. As such, our audit results reflect only the problems we identified. Soldiers in our late pay and case study units may have experienced additional travel reimbursement problems that we did not identify. In addition, our work was not designed to identify, and we did not identify, any fraudulent travel reimbursement requests or payments by Army Guard soldiers.
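The elapsed-days calculation across the five milestones can be illustrated as follows. The stage labels are ours, and the milestone field names are assumptions:

```python
from datetime import date

# Milestones named in the text; the stage labels are ours.
STAGES = [
    ("traveler_preparation", "travel_end",         "traveler_signed"),
    ("unit_review",          "traveler_signed",    "reviewer_signed"),
    ("transmission",         "reviewer_signed",    "received_by_center"),
    ("computation_payment",  "received_by_center", "paid"),
]

def locate_delay(milestones):
    """milestones: dict mapping milestone name -> datetime.date.
    Returns the elapsed days in each stage and the stage with the
    longest elapsed time (where the significant delay occurred)."""
    elapsed = {
        name: (milestones[end] - milestones[start]).days
        for name, start, end in STAGES
    }
    worst = max(elapsed, key=elapsed.get)
    return elapsed, worst
```

Applied to each of the 139 vouchers, a breakdown like this isolates whether the delay sat with the traveler, the unit reviewer, transmission, or the computation office.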
Because we could not contact all individual soldiers and unit supervisory reviewers, we likely did not identify all of the effects on soldiers, all of the reasons for inaccurate or questioned per diem in our case study units, or all of the delays in the overall process resulting in late payments of travel reimbursements to Army Guard soldiers. We reviewed TTRA and federal travel regulations. TTRA requires the payment of a late payment fee, as prescribed by 41 C.F.R. § 301-71.210, consisting of (1) late payment interest, generally equivalent to the Prompt Payment Act interest rate, plus (2) a late payment charge equivalent to the late payment charge that could have been charged by the government travel card contractor. This late payment interest and charge are to be paid to soldiers if their reimbursements are not paid within 30 days of the submission of proper vouchers. As part of our audit, we determined whether any of the 139 individual vouchers we selected were timely and properly submitted by Army Guard soldiers and whether the soldiers had received the late payment interest owed to them. We identified 75 vouchers that were properly submitted by Army Guard soldiers who should have received late payment interest totaling about $1,400. Some of these soldiers may also have been entitled to a late payment fee in addition to the late payment interest. We referred the names of the affected soldiers to applicable DOD officials to resolve amounts owed to these soldiers. Another source of information was the Remedy Tracking System, which DOD uses to track all controlled correspondence, such as congressional complaint letters. We asked DOD to provide all correspondence relating to travel-related expense issues. We analyzed the results from DOD for additional leads into travel voucher problems. We did not audit the database provided to us by DOD, nor did we determine whether it was complete.
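A minimal sketch of the late-payment test described above: interest and a charge are owed when a proper voucher is not paid within 30 days of submission. The report does not spell out the regulation's exact day-count and fee conventions, so simple daily interest and a flat contractor-style charge are assumed here for illustration, with rates supplied by the caller:

```python
from datetime import date

def late_payment_amounts(reimbursement, submitted, paid,
                         ppa_annual_rate, card_charge_rate):
    """Hedged sketch of the TTRA late-payment computation. Simple daily
    interest and a flat percentage charge are assumptions, not the
    regulation's exact conventions.

    submitted, paid: datetime.date; rates are annual/flat fractions.
    Returns (late payment interest, late payment charge)."""
    days_late = (paid - submitted).days - 30
    if days_late <= 0:
        return 0.0, 0.0  # paid within 30 days; nothing owed

    # (1) Late payment interest, generally at the Prompt Payment Act rate
    interest = reimbursement * ppa_annual_rate * days_late / 365
    # (2) Late payment charge mirroring the travel card contractor's charge
    charge = reimbursement * card_charge_rate
    return round(interest, 2), round(charge, 2)
```

A capability of this shape, run over every voucher, is what would let DTS identify, calculate, and pay the interest and fees that the 75 vouchers we flagged were owed.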
In our analysis of DOD’s Defense Travel System (DTS), we (1) interviewed DTS program management office personnel and other DOD officials; (2) obtained demonstrations of the user interface with DTS; (3) reviewed a DOD IG audit report and a DFAS post-payment audit report on DTS; and (4) obtained cost information on DTS from DOD’s fiscal year 2005 Budget Estimate, Information Technology/National Security Systems Budget Exhibit, dated February 2004. We briefed DOD and Army officials, NGB officials, and DFAS officials on the details of our audit, including our findings and their implications. We received written DOD comments and have summarized those comments in the “Agency Comments and Our Evaluation” section of this report. DOD’s comments are reprinted in appendix II. We conducted our audit work from November 2003 through September 2004 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Staff making key contributions to this report include Paul S. Begnaud, Norman M. Burrell, Francine M. DelVecchio, C. Robert DeRoy, Lauren S. Fassler, Dennis B. Fauber, Wilfred B. Holloway, Patty P. Hsieh, Charles R. Hodge, Jason M. Kelly, Julia C. Matta, Sheila D. Miller, Bennett E. Severson, Robert A. Sharpe, Patrick S. Tobo, and Jenniffer F. Wilson. The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions.
GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select “Subscribe to Updates.”

GAO was asked to determine (1) the impact of the recent increased operational tempo on the process used to reimburse Army Guard soldiers for travel expenses and the effect that travel reimbursement problems have had on soldiers and their families; (2) the adequacy of the overall design of controls over the processes, human capital, and automated systems relied on for Army Guard travel reimbursements; and (3) whether the Department of Defense's (DOD) current efforts to automate its travel reimbursement process will resolve the problems identified. GAO selected and audited 10 case study units that were identified in a preliminary assessment as having a variety of travel reimbursement problems. Mobilized Army Guard soldiers have experienced significant problems getting accurate, timely, and consistent reimbursements for out-of-pocket travel expenses. These weaknesses were more glaring in light of the sustained increase in mobilized Guard soldiers following the terrorist attacks of September 11, 2001. To its credit, the Defense Finance and Accounting Service (DFAS) hired over 200 new personnel to address travel voucher processing backlogs and recently upgraded their training. However, Guard soldiers in our case study units reported a number of problems they and their families endured due to delayed or unpaid travel reimbursements, including debts on their personal credit cards, trouble paying their monthly bills, and inability to make child support payments.
The soldier bears primary responsibility for travel voucher preparation, including obtaining paper copies of various types of authorizations. DFAS data indicate that it rejected and asked soldiers to resubmit about 18 percent of vouchers during fiscal year 2004--a churning process that added to delays and frustration. Also, existing guidance did not clearly address the sometimes complex travel situations of mobilized Army Guard soldiers, who were often housed off-post due to overcrowding on military installations. Further, DOD continued to be noncompliant with a law that requires payment of late payment interest and fees when soldiers' travel reimbursements are not timely. With respect to human capital, GAO found a lack of oversight and accountability and inadequate training. Automated systems problems, such as nonintegration of key systems involved in authorizing and paying travel expenses and failure to automate key processes, also contributed to the inefficient, error-prone process. DOD has been developing and implementing the Defense Travel System (DTS) to resolve travel-related deficiencies. However, DTS will not address some of the key systems flaws. For example, DTS is currently not able to process mobilized soldier travel authorizations and vouchers and identify and calculate late payment interest and fees.
The Chief of Naval Operations is responsible to the Secretary of the Navy for the command, utilization of resources, and operating efficiency of the operational forces of the Navy and of the Navy’s shore activities. The shore establishment provides support to the operating forces (known as the fleet), including facilities for the repair of machinery and electronics, ships, and aircraft, and for the storage of spare parts. The Naval Supply Systems Command provides naval forces with supplies and services through a worldwide, integrated supply system. Its Naval Inventory Control Point exercises centralized control over different line items of repair parts, components, and assemblies for ships, aircraft, and other weapons systems. Supplying spare parts to deployed ships requires coordination between the supply command and the Naval operating forces. The operating forces report to the Chief of Naval Operations and provide, train, and equip naval forces. The operating forces also report to the appropriate Unified Combatant Commanders. As units of the Navy enter one of the designated worldwide areas of Naval responsibility, they are operationally assigned to the appropriate numbered fleet. All Navy units also have an administrative chain of command with the various ships reporting to the appropriate ship type commander: aircraft carriers, aircraft squadrons, and air stations are under the Commander, Naval Air Force; submarines come under the Commander, Submarine Force; and all other ships fall under the Commander, Naval Surface Forces. Normally, the type commander controls the ship during its primary and intermediate training cycles, and then it moves under the operational control of a fleet commander. 
The Navy determines what kinds of spare parts to carry on board deployed ships by identifying the kinds of equipment that are installed (the ship’s configuration) and the types and quantities of repair parts and any special tools, test equipment, or support equipment needed to do preventive and corrective maintenance during extended and unreplenished periods at sea. Specifically, the Navy identifies maintenance requirements and uses them to develop a list of allowable parts for the equipment. For parts on the list, the Navy uses predicted failure rates, which it updates using actual parts demand data in inventory allowance models. The office of the Chief of Naval Operations approves these models. Although the Navy revised its instruction for determining spare parts supply effectiveness in October 1999, it continues informally to use the supply-system performance goals that were established in 1983. These performance goals measure a ship’s ability to fill all of the repair part requisitions that it receives. Two important goals are (1) that 65 percent of the repair parts required by ships and aircraft carriers be filled from onboard inventories (gross availability) and (2) that the average customer wait-time for the delivery of high-priority parts from ships’ supply inventories and off-ship sources be within 135 hours (or about 5.6 days) for ships outside the continental United States. This average customer wait-time is the supply system’s response time from the date an order for a required part is issued until it is received by the customer. The Navy is in the process of revising its supply performance goals but has not yet completed this work. The Navy’s annual budgets contain about $750 million for ships’ spare parts, including about $200 million for initial spares and about $525 million for replenishment spares. However, the Navy also identifies requirements for spare parts that have not been funded.
For example, it identified $200 million in unfunded requirements in the fiscal years 2002 to 2004 budgets to increase safety-level stock for repairable items. Only about 54 percent of spare parts requisitions for ships in 6 battle groups in the Atlantic and Pacific fleets deployed in fiscal years 1999 and 2000 could be filled from onboard sources—a supply effectiveness rate that fell below the Navy’s goal of 65 percent. When priority parts were not on board, ships had to wait an average of 18.1 days, more than 3 times the Navy’s wait-time goal of 5.6 days for ships outside the continental United States. The Navy has fallen short of meeting its ship supply performance goals for more than 20 years. Our analysis of ships in 6 selected Atlantic and Pacific fleet battle groups deployed in fiscal years 1999 and 2000 showed that on average they were able to supply about 54 percent of the spare parts that were requisitioned from onboard inventories. As table 1 shows, this average supply effectiveness rate ranged from 51 to 61 percent for different battle groups during that period. The rates fell short of the Navy’s supply system performance goal of 65 percent for surface ships and aircraft carriers, which it has used informally since 1999. These supply rates for the deployed battle groups are consistent with fleetwide historical data available from Navy reports. These data show that from 1982 to 2000 Navy ships in both deployed and nondeployed status were, on average, able to fill about 55 percent of their parts requisitions from onboard inventories. These rates have not varied much over the past 20 years, indicating that little overall progress has been made in meeting the Navy’s 65 percent goal. These findings were further reinforced by our analysis of Navy data for Pacific Fleet surface ships in amphibious readiness groups and ships in Marine Corps expeditionary forces. 
These groups, which included a total of 42 ships, showed an average availability of about 54 percent of spare parts requisitioned during deployments in calendar years 1999 to 2001, although individual ships reported a wide range of supply rates. For example, a destroyer in one Marine expeditionary force group reported an average supply rate of about 31 percent during deployment, whereas a ship used to transport and land Marines and their equipment and supplies in a deployed amphibious readiness group averaged 62 percent. When requisitioned parts were not on board ship, the Navy maintenance crew had to wait far longer than the Navy’s stated wait-time goals to obtain the needed parts from off-ship sources. The wait-time goal for critical, high-priority items for ships outside the continental United States is 5.6 days. The Navy’s data for these ships, which were deployed between fiscal year 2000 and February 2003, showed that when needed high-priority parts were requisitioned, maintenance crews had to wait an average of 18.1 days—more than 3 times the Navy’s wait-time goal—to receive the parts. The average wait-times for all spare parts, not just priority items, are even longer. For the six Atlantic and Pacific battle groups deployed in fiscal years 1999 and 2000 that we analyzed, repair crews experienced an overall average wait-time of about 25.6 days, with a range of 16.2 to 32.5 days. Table 2 shows the wait-times for spare parts supplied both from off-ship sources, as well as from onboard supplies. Navy supply officials said they are concerned about the lengthy average wait-time data being reported and are analyzing how this response time can be shortened. They were especially concerned that the number of days required for getting the parts to do the repair work seemed higher than what would be reasonable. The best of the Navy’s wait-time performance is for parts that are needed to repair high-priority, mission-critical equipment. 
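The two supply metrics discussed above, gross availability and average customer wait-time, can be computed from requisition records as follows. The record fields are assumptions; the goal values (65 percent, and 135 hours or about 5.6 days) come from the text:

```python
def supply_metrics(requisitions):
    """requisitions: list of dicts with 'filled_onboard' (bool) and
    'wait_days' (float); field names are assumptions. Compares results
    against the goals cited in the text: 65 percent gross availability
    and a 135-hour average customer wait-time for high-priority parts."""
    total = len(requisitions)
    filled = sum(1 for r in requisitions if r["filled_onboard"])
    availability = filled / total
    avg_wait = sum(r["wait_days"] for r in requisitions) / total
    return {
        "gross_availability_pct": round(100 * availability, 1),
        "meets_availability_goal": availability >= 0.65,
        "avg_wait_days": round(avg_wait, 1),
        "meets_wait_goal": avg_wait <= 135 / 24,  # 135 hours = 5.625 days
    }
```

With the figures reported for the six battle groups (about 54 percent filled on board and an 18.1-day wait for priority parts when they were not), both checks come back short of goal.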
Navy supply officials said that wait-times of about 12 to 14 days for these critical parts are about the best the Navy is achieving because it uses expeditors to locate the parts and it employs premium transportation to deliver the parts to the ships. For example, a ship will send a requisition for a critical part to a shore-based team whose job is to determine quickly if the part is available anywhere in the military supply system or elsewhere, and identify the fastest mode of transportation available (usually commercial overnight delivery) to an overseas point. The Navy will then pick up the part for final delivery to the ship while it is either in port or at sea. Our analysis identified two key problems that contribute to the Navy’s inability to achieve its supply goals for deployed ships: inaccurate ship configuration records and incomplete, outdated, or erroneous historical parts demand data. The Navy uses these data in models that estimate the types of parts (range) and the number of each part (depth) that should be stocked on board a ship during its deployment. However, because of data inaccuracies, the ships may stock all of the parts they are allowed to carry but still find they cannot fill a large number of parts requisitions from onboard inventories, thus failing to meet the Navy’s supply performance goals. Navy headquarters and fleet officials acknowledge that the accuracy of ship configuration data is a serious concern. Specifically, they said that (1) ship configuration records are not always updated in a timely manner when equipment or weapons systems are modified and (2) required configuration audits are not conducted regularly to ensure that configuration data correspond with the equipment or weapons systems on board. The Navy identifies current and accurate configuration data as the cornerstone of logistics support to its ships. 
Configuration records provide a detailed description of the characteristics, including dimensions and technical information, of each piece of equipment or weapon system on board the ship. This information is used in allowance models to prepare a Coordinated Shipboard Allowance List (COSAL). The allowance list identifies the individual spare parts related to each piece of equipment or weapon system on board. Ships depend on accurate configuration records to ensure that, among other things, the right spare parts and special tools, along with the proper manuals and other documentation, are available on board ship. Navy officials said that while it is difficult to attribute any one cause to spare parts shortages on board, inaccurate ship configuration records are a major problem. If inaccurate configuration records are used in allowance models, the resulting allowance lists may identify some parts that should be stocked but that do not match the equipment that is actually on board. As a result, repair crews could requisition a part for a failed piece of equipment but find that the part is not on the allowance list and, thus, not in stock. The requisitions data from our sample of 6 battle group deployments showed that about 17.3 percent of the 60,365 unfilled requisitions were for parts that were not on the ships’ allowance parts lists (see app. III). One reason that ship configuration records are not current or accurate is that they are not updated or changed, as required, when equipment or systems are installed, removed, or modified. This problem can occur on both new and older ships. According to Navy supply and fleet officials, the allowance lists for new ships are often based on the configuration of the first ship to be built in the production line, and subsequent changes to follow-on ships’ configurations are not always documented. 
Thus, a ship’s actual configuration could change—and the records not be modified—even before the ship is delivered from the shipbuilder. On older ships, the equipment and systems are frequently upgraded or replaced without properly updating configuration data because the procedures in place to change configuration records as equipment is changed are not always followed. For example, when equipment is installed, removed, or modified by contractors, ship personnel do not always promptly or accurately enter these changes into the ship’s configuration database in order that the spare parts required to support the altered equipment can be ordered. Moreover, the Navy has not performed the configuration audits it has identified as needed to ensure that configuration data for equipment and weapons systems on board are accurate. According to Navy officials, these audits are supposed to be done periodically but none were conducted between 1995 and 2000 because of budget constraints. Officials said they are beginning to perform configuration audits again and are developing an audit program, but its implementation will depend on the funding available and whether funding is earmarked specifically for audits. The officials estimated that a viable program might cost about $500,000 a year. Without these audits, the extent of the configuration records’ accuracy will remain unclear. While audits have not been conducted for a period of time, validations— which are more in-depth than audits—of ships’ configuration data have revealed problems with their accuracy. The Navy performs validations to establish the precise configuration of critical systems and equipment that is experiencing problems and corrects the configuration data (e.g., items are added or deleted) to reflect what is actually found on board the ships. Seven Pacific Fleet validations completed between October 2002 and January 2003 identified inaccuracies averaging 37 percent of the records reviewed. 
For example, Navy Pacific Fleet officials provided us with information about a configuration record validation of a new ship delivered to the fleet. The validation identified 901 errors (588 added and 313 deleted records) in the selected systems and equipment, or about 39 percent of the 2,337 configuration records that were reviewed. On an older aircraft carrier, a January 2003 validation identified 3,712 errors (1,790 added and 1,922 deleted records) in the selected systems and equipment, or about 43 percent of 8,555 configuration records reviewed. In addition to inaccurate ship configuration information, the Navy frequently uses incomplete, outdated, or erroneous historical demand data in its parts allowance models. This can lead to incorrect estimates of the number of parts needed during a deployment period and result in unmet supply goals. Historical parts demand data provides the projected failure rates or actual replacement rates for spare parts over a long period of time. Each repair part listed on the allowance list is expected to fail at some point in normal ship operations during deployment and is a potential allowance item. However, only those parts with sufficiently high projected failure rates or actual replacement rates, along with items required for planned maintenance or for safety measures, will normally be authorized as onboard repair parts. According to Navy officials, data on parts’ failure rates are supposed to be accurately, promptly, and continuously updated, but this updating does not always happen. In some cases, ship or shore personnel may not report that a particular spare part has been used and, thus, the information does not get into the supply system database. As a result, the Navy’s parts allowance list will be based on incomplete, outdated, or erroneous historical failure-rate data and the ship will stock too few or too many spare parts of a particular type. 
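The error rates in the validation examples above count both added and deleted records against the total records reviewed. A minimal arithmetic check of those figures (an illustrative sketch, not Navy software):

```python
# Checking the configuration-validation error rates cited in the examples
# above: errors are added plus deleted records, divided by records reviewed.
def error_rate(added: int, deleted: int, reviewed: int) -> float:
    return 100.0 * (added + deleted) / reviewed

new_ship_rate = error_rate(588, 313, 2_337)      # about 38.6, reported as "about 39 percent"
carrier_rate = error_rate(1_790, 1_922, 8_555)   # about 43.4, reported as "about 43 percent"
```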
Our analysis of the requisitions on board deployed battle group ships revealed that about 38 percent of the 60,365 unfilled requisitions were for parts that were on the allowance list but were not in stock when requisitioned (see app. III). Navy officials told us that this problem could result partly from inaccuracies in the demand data that are used to develop allowance lists. Officials also suggested that it could stem from the inability of a ship's crew to obtain a high percentage of the spare parts on their allowance lists prior to deployment. However, our analysis showed that, at deployment, Navy ships generally are stocked with a high percentage of the types of parts (range) and the quantities of parts (depth) that are on their allowance lists. Supply officials from the Navy's Pacific Fleet told us that their goal for surface ships was to stock 93 percent of the range and 90 percent of the depth identified on their allowance lists and that deploying ships, which were usually given a high funding priority, generally deployed with percentages higher than these. As table 3 shows, our analysis of data for the Lincoln battle group (Pacific Fleet) deployed in fiscal year 2002 indicated that the ships were stocked with an average of 98.1 percent of the different types of parts (range) and an average of 93.1 percent of the quantities of each part (depth) that were on their allowance lists, which included the parts expected to be needed during the first 90 days of deployment (July to September 2002). In contrast, during this period, an average of only 58.3 percent of the ships' requisitions were filled from parts carried on board. This assessment shows that, although these ships carried a high percentage of the types and quantities of allowed items, they continued to fall short of meeting the Navy's supply effectiveness rate goal of 65 percent. 
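The three measures discussed above, range, depth, and the supply effectiveness rate, are each simple stocked-to-allowed or filled-to-requisitioned ratios. The sketch below illustrates the calculations; the counts in the example call are hypothetical placeholders chosen to reproduce the table 3 averages, since the table reports only percentages:

```python
# Sketch of the three supply metrics discussed above. The Navy's goals were
# 93% range, 90% depth, and a 65% supply effectiveness rate; table 3 reports
# the Lincoln battle group averaging 98.1%, 93.1%, and 58.3% respectively.
# The counts passed in the example call are hypothetical, not report data.

def pct(part: int, whole: int) -> float:
    return round(100.0 * part / whole, 1)

def supply_metrics(types_stocked, types_allowed, qty_stocked, qty_allowed,
                   reqs_filled, reqs_total):
    return {
        "range": pct(types_stocked, types_allowed),     # share of allowed part types on board
        "depth": pct(qty_stocked, qty_allowed),         # share of allowed quantities on board
        "effectiveness": pct(reqs_filled, reqs_total),  # share of requisitions filled on board
    }

supply_metrics(981, 1_000, 931, 1_000, 583, 1_000)
# → {"range": 98.1, "depth": 93.1, "effectiveness": 58.3}
```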
The Navy’s spare parts supply problems can delay the completion of needed maintenance and repair jobs on deployed ships and can affect their operations and mission readiness, although their precise impacts are not always well defined. Our analysis of data on more than 50,000 maintenance work orders for 6 battle group deployments in 1999 and 2000 indicated that about 58 percent were delayed because the needed repair parts were not available on board ship. Our closer analysis of maintenance work orders and casualty reports for one battle group indicated a discrepancy in reporting the extent to which equipment failures occurred and, thus, the extent to which these problems were reflected in readiness assessments is unclear. The Navy’s supply problems also have an impact on costs. Although the exact amounts have not been quantified, Navy officials recognize that they incur substantial costs to obtain needed parts from off-ship supply sources. The Navy also expends substantial funds—totaling nearly $25 million for the six ships we reviewed—to maintain large inventories that are not requisitioned during deployments because it has given low priority to identifying and purging unneeded spare parts from ship inventories. Shortages of required parts can often delay the completion of needed maintenance and repair jobs. Our analysis of more than 50,000 maintenance work orders opened during 6 recent battle group deployments indicates that about 29,000 (almost 58 percent of the total) could not be completed because one or more needed repair parts were not on board ship. Table 4 summarizes this information. Navy fleet officials told us that a maintenance job is generally not started until all the needed parts are on board ship. This delay is due to the time and labor involved in tearing down equipment and possibly losing parts if equipment is left partially disassembled awaiting repair. 
A complete picture of the impact of the Navy’s spare part shortages, however, is unclear because the Navy’s two forms of reporting on the extent to which significant equipment malfunctions affect a ship’s operations and mission readiness are inconsistent. The two forms of reporting are high-priority maintenance work orders and casualty reports. The Navy uses four priority codes for maintenance work, with priorities 1, 2, and 3 considered high priority. High priority work is defined as critical, extremely important, or important to a ship’s essential equipment and systems, operations, or mission (see app. II for complete definitions of these codes). Navy maintenance reporting instructions require that any maintenance job with one of these three priority codes should generate a casualty report. According to Navy guidance on casualty reports, they are directly related to a unit’s readiness reporting and identify the ship’s equipment status and impact on the ship’s operations and mission readiness. Where casualty reports are issued, these problems are to be reflected in a ship’s readiness reporting. Our review of about 4,000 casualty reports issued for deployed Pacific Fleet ships from 1999 to 2001 indicated that they generally resulted in degraded ship readiness, as reported by the Status of Resources and Training System (SORTS). SORTS is used DOD-wide to report the degree to which a unit is capable of undertaking its assigned wartime missions. However, our analysis of ship maintenance work orders and casualty reports for one battle group (Truman) in the Atlantic Fleet deployed in fiscal year 2000 showed a discrepancy between the number of work orders with priority 1, 2, or 3 and the number of casualty reports that were filled out when a job was assigned one of these priority codes. The work orders indicated that, of 5,435 total maintenance jobs, 2,635 were identified as priority 1, 2, or 3. 
Although there should have been a similar number of casualty reports, only 906, or one-third of the 2,635, were issued for these ships during this period of time. More complete issuance of casualty reports, as required for high-priority maintenance work orders, would provide the basis for a more complete assessment of readiness. A similar discrepancy occurred between the number of high-priority work orders and casualty reports issued for maintenance jobs on surface ships in the Pacific Fleet between fiscal years 1995 and 2002. According to a Pacific Fleet maintenance analyst, of about 1 million surface ship maintenance jobs coded with priority 1, 2, or 3, only about 50,000 casualty reports, or about 5 percent, were issued. Although Navy guidance calls for up-to-date and accurate casualty reports, Navy officials said that the final decision on whether to submit a casualty report is left to the judgment of the ships' commanders and is based on their perception of the importance of the degraded equipment to the ships' assigned missions and the status of redundant equipment that the ships carry. Navy officials said that the number of casualty reports that are issued should be higher, but they suggested that commanders' concerns that a high number of such reports could reflect negatively on their leadership may limit the number of reports that are issued. For example, we were told that casualty reports are usually not generated when ships are getting ready to deploy; if too many are generated, it might be seen as a failure of the ships' command leadership. Some ships that issued only a few minor casualty reports were found, on closer inspection, to have significant ship operations and mission readiness problems. For example, Navy ships are required to have periodic inspections to determine if they are fit for further service and to identify any conditions that limit their capability to carry out assigned missions. 
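The casualty-report issuance rates described above can be recomputed from the counts cited (an illustrative check):

```python
# Illustrative check of the casualty-report issuance rates discussed above,
# using the counts cited in the report for the Truman battle group and the
# Pacific Fleet surface ships.
def issuance_rate(casrep_count: int, high_priority_jobs: int) -> float:
    """Percentage of priority 1-3 maintenance jobs that generated a CASREP."""
    return 100.0 * casrep_count / high_priority_jobs

truman_rate = issuance_rate(906, 2_635)           # about 34.4, roughly one-third
pacific_rate = issuance_rate(50_000, 1_000_000)   # about 5 percent
```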
Inspection reports we reviewed identified various deficiencies, such as the failure of equipment to meet performance and safety requirements or the need for excessive maintenance resources. During an inspection in February 2002 of a destroyer forward-deployed in Yokosuka, Japan, which had issued 16 low-priority casualty reports prior to the inspection, inspectors gave the ship an unsatisfactory rating—the lowest possible rating—in the areas of self-defense, full power, and steering tests; they also found that it had significant material deficiencies and equipment operational capabilities discrepancies. Inspectors told us such discrepancies between casualty reporting and the actual conditions found during the inspections of the ships were not uncommon. Another effect of the Navy's spare parts supply problems is increased costs. The Navy expends additional funds to obtain needed spare parts from off-ship sources. To get these parts, it must identify where they are available (e.g., from a shore-based Navy supply center or a commercial vendor) and then transport them to the ship. The Navy also incurs substantial costs to carry large parts inventories that are not requisitioned. Our analysis of data for six ships in the Lincoln battle group (Pacific Fleet) during deployment in 2002 showed that the ships requisitioned only a small percentage of the different types of parts carried on board. As shown in table 5, the ships carried a total of 62,727 different types of parts. By the end of 6 months, the supply crews had received 10,471 requisitions for spare parts and filled 6,549 of them from onboard stocks. This number (6,549) represented 10.4 percent of the total part types carried on board. Navy fleet officials acknowledged that ships generally carry many times more parts than are requisitioned during their deployments and indicated that there are opportunities to reduce inventories without adversely affecting ship operations if more accurate data were available. 
Furthermore, the Navy spent far more to carry this inventory of spare parts than it spent for the parts that it actually used during the Lincoln battle group's 6-month deployment in 2002. Using available Navy data on the value of the six ships' onboard inventories, we estimated the value of the inventory carried onboard ship to be about $27.6 million and the value of the used inventory to be about $2.9 million. See figure 1. According to Navy supply officials, to minimize the inventory of unneeded spare parts carried on board ships, ships could purge their existing inventories periodically and revise the allowance parts lists based on accurate configuration records, demand data, and allowance models. The revised allowance would identify both shortages of needed parts and excesses of unneeded parts. They said that allowance lists used to be reviewed and updated periodically, but these reviews are no longer performed. Although officials acknowledged that the inventory of unneeded parts should be minimized, they said a higher priority has been placed on correcting the shortages of needed spare parts because of their impact on ships' operations and mission readiness. They said that the existing inventories of unneeded parts have already been purchased, and the costs cannot be recouped. The Navy's long-standing failure to meet its spare parts supply performance goals has led to shortages of needed parts on board ships and some degradation in ships' operations and mission readiness during long deployments at sea. These shortages stem from the Navy's inability to determine, in a reliable way, what types of spare parts and how many of each type need to be stocked on board ship. The Navy uses inaccurate, out-of-date, or incomplete ship configuration and historical demand information to develop the parts allowance lists that identify what repair parts, manuals, and other related items a ship should carry in its onboard inventory. 
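The Lincoln battle group figures above reconcile with simple arithmetic; the sketch below reproduces the 10.4 percent usage share from table 5 and the roughly $25 million in unused inventory value noted earlier:

```python
# Reconciling the Lincoln battle group (Pacific Fleet) figures cited above.
carried_types = 62_727   # different part types carried on board (table 5)
filled_onboard = 6_549   # requisitions filled from onboard stock in 6 months

types_used_share = round(100 * filled_onboard / carried_types, 1)  # 10.4 percent

carried_value_m = 27.6   # $ millions, estimated onboard inventory value
used_value_m = 2.9       # $ millions, estimated value of inventory actually used
unused_value_m = round(carried_value_m - used_value_m, 1)  # 24.7, "nearly $25 million"
```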
Even though a ship may stock almost all of the parts on the allowance list, it is likely to fall short of meeting the Navy’s supply performance goals because the data used to develop the allowance lists are inaccurate. When needed parts are not available on board, a large number of repair jobs are delayed and equipment is not functional—sometimes for weeks or months—until the ships’ crews can obtain the parts from off-ship sources. Moreover, the Navy may not have a complete picture of the actual impact that equipment downtime has on the ships’ operations and mission readiness because of discrepancies in the reporting systems the Navy uses to monitor these problems. The Navy’s spare parts supply problems also substantially increase costs. Because of inaccuracies in the information the Navy uses to develop its allowance lists, it often stocks the wrong types or the wrong quantities of parts on board ships. As a result, the Navy has to spend additional money to obtain the parts it needs from off-ship sources, often incurring high expenses to locate the parts and transport them to the ships. It also expends substantial funds to maintain large inventories on board its ships that are not requisitioned during deployments. However, the Navy has given low priority to purging unneeded parts from its ships’ inventories and, instead, has focused on purchasing additional spare parts to avoid future shortages. Until the reliance on poor ship configuration records and historical demand information to identify what spare parts should be carried on board is broken, the Navy’s deployed ships will continue to experience critical spare parts shortages that undermine their ability to fulfill their missions at sea. 
In order to improve supply availability, enhance operations and mission readiness, and reduce operating costs for deployed ships, we recommend the Secretary of Defense direct the Secretary of the Navy to develop plans to conduct periodic ship configuration audits and to ensure that configuration records are updated and maintained in order that accurate inventory data can be developed for deployed ships; ensure that demand data for parts entered into ship supply systems are recorded promptly and accurately as required to ensure that onboard ship inventories reflect current usage or demands; periodically identify and purge spare parts from ship inventories to reduce costs when parts have not been requisitioned for long periods of time and are not needed according to current and accurate configuration and parts demand information; and ensure that casualty reports are issued consistent with high priority maintenance work orders, as required by Navy instruction, to provide a more complete assessment of ship’s readiness. In written comments on a draft of this report, DOD concurred with three recommendations and concurred with the intent of the fourth recommendation, but not its specific action. DOD’s written comments are reprinted in their entirety in appendix IV. In concurring with our first recommendation, DOD said that, although the Navy has an audit plan to look at current ship configurations and provide updated allowance listings, the Navy needs to be more aggressive in following up on configuration changes to ensure that the configuration records on board ship match those in the Navy’s main configuration database. At the time of our review, the procedures had not been validated and reconciled, for example, with the high percentages of inaccuracies identified during validations done to identify and correct problems; moreover, sufficient funding to implement the program was not assured. 
DOD also noted that the Navy recently set up a Maritime Allowancing Working Group that is undertaking a comprehensive review of its current inventory and allowance practices, including ship configuration management. However, at the time of our review, the Navy had not established time frames for reporting on this effort. Although DOD concurred with our second recommendation, it asserted that our report does not adequately substantiate our claim about the accuracy of demand data. In our report, however, we cited Navy officials who told us that spare parts' failure rates, which rely on demand data, are not always updated promptly or accurately. Moreover, about 60,000 requisitions submitted by ships in 6 battle groups deployed in fiscal years 1999 and 2000 could not be filled on board, either because the parts were not on allowance parts lists or were on these lists but were not in stock when requisitioned (see app. III). Navy officials told us that such shortages occur in part from relying on inaccurate demand data. DOD pointed out that many items on the lists do not qualify for allowances. They said that these parts are not stocked on board because of a ship's designated repair capability, the results of the readiness optimization calculation used in the sparing model, and the forecast for demand falling below the sparing threshold. However, these determinations also rely on accurate and timely demand data. In concurring with our third recommendation, DOD said that the Navy needs to undertake a more comprehensive program to identify and, when appropriate, purge excess spare parts from ship inventories, but it added that such efforts should not be based solely on parts demand history. In our recommendation, we said that decisions to remove spare parts from ship inventories should be based on both demand data and current and accurate ship configuration information. 
DOD correctly noted that critical items related to safety requirements and readiness optimization should not be removed because they could jeopardize a ship’s safety and mission. We support the Navy’s plan to focus initially on identifying and purging those spare parts that support systems that are no longer installed on board ships. DOD concurred with the intent of our fourth recommendation that called for the Navy to ensure that casualty reports are issued consistent with high priority maintenance work orders as required by Navy instruction, to provide a more complete assessment of ship’s readiness. We based our recommendation on the Navy’s current maintenance instruction that calls for casualty reports to be issued for certain high-priority maintenance actions according to the level of importance that the failed equipment has on a ship’s operations and mission. DOD said that casualty reports and maintenance orders are inherently different in purpose, and the instructions should be updated to ensure that casualty reports are generated when deemed appropriate to get the attention required from the logistics system. We believe that, while the instruction may need to be updated or revised, the maintenance data that are gathered under the current instruction are both relevant and important to the Navy’s ability to assess fully a ship’s operations and mission readiness. In its response, DOD said the Navy has emphasized the need to use standardized reporting procedures and that fleet commanders have asked their commanding officers to report on ship status accurately and in a timely manner through the Status of Resources and Training System report. We are sending this report to other interested congressional committees; the Secretary of Defense; the Secretary of the Navy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. Please contact me on (202) 512-8412 if you or your staff has any questions concerning this report. Key staff members who contributed to this report were Allan Roberts, Lionel Cooper, Gary Kunkle, Joel Aldape, Odilon Cuero, Dale Yuge, Jean Orland, and Nancy Benco. To identify the extent of spare parts shortages on deployed Navy ships, we focused on spare parts requisitions by deployed battle groups in the Atlantic and Pacific fleets during fiscal years 1999-2002. We analyzed the Navy’s goal and supply effectiveness data from its Maintenance and Material Management (3-M) Database Open Architectural Retrieval System by identifying supply requisitions for repair parts that were either filled or not filled from inventories on board deployed ships. We reviewed reports regarding the Navy’s overall ability to fill onboard spare parts requisitions on deployed ships between 1982 and 2001 in order to identify any long- term trends. We also reviewed the Navy’s goals and data on the average customer wait-time for critical and noncritical parts on deployed ships during fiscal years 1999 and 2002. To determine the reasons for spare parts shortages, we analyzed Navy data on unfilled requisitions for 6 battle groups deployed during fiscal year 1999-2000. We analyzed and categorized the reasons for parts shortages based on the reported data. We also examined Navy policies and procedures regarding ships’ spare parts, including the need for accurate data and the impact of inaccurate data on the allowed parts carried on deployed ships. We examined and discussed with Navy officials the procedures that are used to ensure that accurate ship configuration and demand data records are maintained and the circumstances that can affect this accuracy. 
Moreover, we analyzed the reasons for the differences between the spare parts provisions (e.g., the range and depth) and the amounts that are actually used to fill spare parts requisitions in order to gain a better understanding of why the Navy's provisioning process does not more effectively and efficiently meet the deployed ships' spare parts requirements. To examine the impact of spare parts shortages on deployed ships' operations and mission readiness, we analyzed data on maintenance work orders and requests for spare parts that were not available on board the 6 battle groups during selected fiscal year 1999-2000 deployments. Also, we reviewed the Navy's criteria for assessing the effects of failed equipment on a ship's ability to accomplish its mission, particularly the standards for determining what maintenance work orders result in casualty reports. We then applied the criteria to maintenance work orders for the Truman (Atlantic Fleet) battle group deployed in fiscal year 2000 to identify those that should have resulted in casualty reports reflecting ship operations and mission readiness. We compared the results of this analysis with data on Navy casualty reporting to determine if the number of failed equipment items meeting the criteria for reporting mission readiness degradation were reported in accordance with Navy criteria, policies, and procedures. We also reviewed data on casualty reports and SORTS data submitted by deployed Pacific Fleet surface ships during calendar years 1999, 2000, and 2001 to determine if the casualty reports were reflected in SORTS equipment readiness reporting. In addition, for six ships in the Lincoln (Pacific Fleet) battle group deployed in fiscal year 2002, we identified the total number of parts carried, both range and depth, and compared this to the number of requisitions submitted and filled from onboard inventories. 
We compared the Navy’s data on the estimated value of the onboard inventory with the estimated value of the inventory actually used in order to gain insight into the dollar impacts of carrying parts that are not used during ships’ deployments. We discussed the results of this analysis with Navy headquarters and fleet officials. We reviewed Navy briefings and prior GAO reports regarding the effects of parts shortages on Navy supply and maintenance actions, and we discussed the Navy’s goals and initiatives intended to assess the effects of parts shortages on ships’ operations and military readiness with Navy officials at the various locations we visited. These locations included the Naval Warfare Assessment Station, Corona, Calif.; the Fleet Technical Support Center, the Naval Air Force, and the Naval Surface Force, U.S. Pacific Fleet, San Diego, Calif.; the headquarters, U.S. Pacific Fleet and the Submarine Force, U.S. Pacific Fleet, Pearl Harbor, Hawaii; the Naval Supply Systems Command, its Naval Inventory Control Point, and the Naval Sea Logistics Center, Mechanicsburg, Pa.; and Naval Sea Systems Command and the office of the Chief of Naval Operations, Washington D.C. We performed our work from July 2002 to May 2003 in accordance with generally accepted government auditing standards. According to Navy maintenance reporting instructions, Navy ship crews are required to identify maintenance work order priorities. High-priority (Priority 1, 2, and 3) work orders affect equipment that is critical, extremely important, or important for a ship’s operation. Any maintenance job with one of these three priority codes is required to generate a casualty report (CASREP). Casualty reports are directly related to a unit’s readiness reporting and identify the ship’s equipment status and impact on the ship’s operations and mission readiness. Priority 1—Mandatory: Critical safety or damage control item. Required for performance of ship’s mission. 
Required to sustain bare minimum acceptable level of human needs and sanitation. C-4 CASREP (Casualty Report) on equipment.

Priority 2—Essential: Extremely important safety or damage control item. Required for sustained performance of ship's mission. Required to sustain normal level of basic human needs and sanitation. Required to maintain overall integrity of ship or a system essential to ship's mission. Will contribute so markedly to efficient and economical operation and maintenance of a vital ship system that the pay-off in the next year will overshadow the cost to accomplish. Required for minimum acceptable level of preservation and protection. C-3 CASREP on equipment.

Priority 3—Highly Desirable: Important safety or damage control item. Required for efficient performance of ship's mission. Required for normal level of human comfort. Required for overall integrity of equipment or systems that are not essential, but are required as backups in case of primary system failure. Will contribute so markedly to efficient and economical operation and/or maintenance of a vital ship system that the payoff in the next year will at least equal the cost to accomplish. Will effect major reduction in future ship maintenance in an area or system that presently cannot be maintained close to acceptable standards. Required to achieve minimum acceptable level of appearance. C-2 CASREP on equipment.

Priority 4—Desirable: Some contribution to efficient performance. Some contribution to normal level of human comfort and welfare. Required for overall integrity of other than an essential system or its backup system. Will contribute to appearance in an important area. Will significantly reduce future maintenance.

Our analysis of the 60,365 unfilled requisitions from the deployments of six battle groups in fiscal years 1999 and 2000 showed that there are a number of reasons why the Navy might not stock needed parts on board ship (see fig. 2). 
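The priority definitions above pair each high-priority code with a casualty report category (C-4 for priority 1, C-3 for priority 2, C-2 for priority 3, and none for priority 4). For illustration only, that pairing can be expressed as a small lookup table:

```python
# Mapping from maintenance work order priority to the CASREP category that
# the definitions above require. Priority 4 (Desirable) work does not
# generate a casualty report. This is an illustrative sketch, not Navy software.
CASREP_BY_PRIORITY = {1: "C-4", 2: "C-3", 3: "C-2"}

def required_casrep(priority):
    """Return the CASREP category a work order priority should generate, or None."""
    return CASREP_BY_PRIORITY.get(priority)
```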
These unfilled requisitions represented 46 percent of all 131,855 requisitions submitted during these deployments. Our analysis of the reasons identified in the Navy’s database showed that about 17.3 percent (10,472) of the unfilled requisitions were for parts that were not on the allowance parts list; about 44.4 percent (26,787) of the unfilled requisitions were for parts that were on the allowance parts list but the Navy decided not to carry them on board; and about 38.3 percent (23,106) of the unfilled requisitions were for parts that were on the allowance parts list, the Navy decided to carry them, but they were not in stock when needed. | GAO is conducting a series of reviews in response to a congressional request to identify ways to improve the Department of Defense's (DOD's) availability of high-quality spare parts for ships, aircraft, vehicles, and weapons systems. This report focuses on the effectiveness of the U.S. Navy's spare parts support to deployed ships. It examines (1) the extent to which the Navy is meeting its spare parts supply goals, (2) the reasons for any unmet supply goals, and (3) the effects of spare parts supply problems on ship operations, mission readiness, and costs. To conduct the review, GAO looked at data on parts requisitions, maintenance work orders, and casualty reports for various Navy ship deployments between fiscal years 1999 and 2003. In typical 6-month deployments at sea, Navy ships are generally unable to meet the Navy's supply performance goals for spare parts. GAO's analysis of data for 132,000 parts requisitions from ships in 6 Atlantic and Pacific battle groups deployed in fiscal years 1999 and 2000 showed that 54 percent could be filled from inventories onboard ship. This supply rate falls short of Navy's long-standing 65 percent goal. 
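The appendix III breakdown above can be verified directly from the requisition counts; a short arithmetic check:

```python
# Reconciling the appendix III breakdown of unfilled requisitions from the
# six fiscal year 1999-2000 battle group deployments.
TOTAL_REQUISITIONS = 131_855

unfilled = {
    "not on allowance parts list": 10_472,
    "on list but not carried on board": 26_787,
    "carried but not in stock when needed": 23_106,
}

total_unfilled = sum(unfilled.values())                       # 60,365
unfilled_share = 100 * total_unfilled / TOTAL_REQUISITIONS    # about 46 percent
shares = {k: round(100 * v / total_unfilled, 1) for k, v in unfilled.items()}
# shares are 17.3, 44.4, and 38.3 percent respectively
```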
When parts were requisitioned, maintenance crews waited an average of 18.1 days to get the parts--more than 3 times the Navy's wait-time goal of 5.6 days for ships outside the continental United States. The Navy recognizes it has not met its supply goals for over 20 years. Two key problems contribute to the Navy's inability to achieve its supply goals. Its ship configuration records, which identify the types of equipment and weapons systems that are installed on a ship, are often inaccurate because they are not updated in a timely manner and because audits to ensure their accuracy are not conducted periodically. In addition, the Navy's historical demand data are often out-of-date, incomplete, or erroneous because supply crews do not always enter the right information into the ships' supply system databases or do not enter it on a timely basis. Because configuration-record and demand data are used in models to estimate what a ship needs to carry in inventory, inaccuracies in this information can result in a ship's not stocking the right parts for the equipment on board or not carrying the right number of parts that may be needed during deployment. While precise impacts are not always well defined, the Navy's spare parts supply problems can affect a deployed ship's operations, mission readiness, and costs. GAO's analysis of data on 50,000 work orders from 6 deployed battle groups showed that 58 percent could not be completed because the right parts were not available onboard. More complete reporting of work orders identified as critical or important would have resulted in a more complete assessment of ship mission readiness. In addition, the Navy expends substantial funds--nearly $25 million for six ships GAO reviewed--to maintain large inventories that are not requisitioned during deployments. |
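The requisition figures in the analysis above are internally consistent; a short sketch (with the report’s counts hard-coded) reproduces the percentages:

```python
# Unfilled-requisition breakdown for the six battle groups deployed in
# fiscal years 1999 and 2000 (counts taken from the report above).
total_requisitions = 131_855
unfilled_by_reason = {
    "part not on allowance parts list": 10_472,
    "on list, but Navy decided not to carry on board": 26_787,
    "carried on board, but out of stock when needed": 23_106,
}

total_unfilled = sum(unfilled_by_reason.values())   # 60,365
print(f"unfilled: {total_unfilled} "
      f"({100 * total_unfilled / total_requisitions:.0f}% of all requisitions)")
for reason, count in unfilled_by_reason.items():
    print(f"  {100 * count / total_unfilled:.1f}%  {reason}")
```

Run as-is, this reproduces the 46 percent unfilled rate and the 17.3/44.4/38.3 percent split cited in the report.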
Mexico’s accession to the General Agreement on Tariffs and Trade (GATT) in 1986 initiated a process of market liberalization that provided significant opportunities for U.S. agricultural exports. By the early 1990s, Mexico had become the fastest growing export market for U.S. agricultural products, and the United States enjoyed a substantial net agricultural trade surplus with Mexico. U.S. agricultural producer groups were generally supportive when the United States and Mexico entered into negotiations aimed at creating a free trade agreement, which eventually resulted in NAFTA. In negotiating NAFTA, the United States sought to gain additional market access for its agricultural exports to Mexico by eliminating Mexican agricultural tariffs. Mexico’s agricultural tariffs averaged 10 percent, compared to average U.S. tariffs of 4.5 percent at the time NAFTA was being negotiated. NAFTA called for Mexico to eliminate tariffs on most commodities immediately upon implementation of the agreement in 1994 and to do away with nontariff trade barriers, most notably its system of import licensing requirements. Some products that Mexico considered to be particularly sensitive commodities were granted transition periods for tariff elimination to allow time for Mexican producers to adjust to increased import competition. NAFTA sets forth the specific schedules for tariff elimination and places commodities in staging categories, or “baskets,” that define when the commodities should enter the market duty-free. In general, tariffs for products that were granted transition periods were reduced in equal increments over a specified time period (see table 1). 
However, for certain sensitive commodities (such as corn and poultry), the greater part of tariff reductions was postponed until the final years of the transition period, a practice referred to as “back-loading.” NAFTA also called for Mexico and the other NAFTA partners to replace quantitative import restrictions with tariff rate quotas (TRQs). Products subject to TRQs enter the importing market duty-free up to the level of the quota. Once the duty-free level (quantitative limit) is reached, a duty is imposed on the over-quota imports. NAFTA partner countries committed to gradually expanding the duty-free quota for the commodities, reducing the over-quota tariff charged during the transition period, and ultimately eliminating the TRQs. As with the phasing out of tariffs, NAFTA TRQs follow the same scheduled transition periods of 4, 9, and 14 years. In addition to providing for the elimination of tariff and nontariff trade barriers, NAFTA also established disciplines for the application of trade measures to counter threats or harm to domestic producers and consumers, such as sanitary and phytosanitary (SPS) requirements, antidumping and countervailing duties, and safeguard actions. For example, NAFTA requires that SPS measures be science-based, nondiscriminatory, and transparent and that they be applied only to the extent necessary to achieve a party’s appropriate level of protection. Similarly, under NAFTA the parties are required to follow their domestic legal procedures when applying antidumping or countervailing duty measures in response to unfair foreign trade practices. NAFTA also calls for safeguards to be applied through fair and open administrative procedures and for compensation to be provided for the affected countries. Under NAFTA, a party’s right to apply a safeguard terminates at the end of an agreed-upon transition period. Thereafter, a party may apply the safeguard only with the consent of the exporting party.
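The two mechanisms just described, equal-increment tariff phase-outs and tariff rate quotas, can be sketched in a few lines. The rates, quota level, and price below are hypothetical illustrations, not actual NAFTA schedule values:

```python
def equal_increment_schedule(initial_rate_pct: float, years: int) -> list[float]:
    """Tariff (percent) at the start of each year when the initial rate is
    cut in equal annual increments over a transition period."""
    step = initial_rate_pct / years
    return [initial_rate_pct - step * y for y in range(years + 1)]


def trq_duty(quantity: float, quota: float, over_quota_rate_pct: float,
             unit_price: float) -> float:
    """Duty owed under a tariff rate quota: imports up to the quota enter
    duty-free; only the over-quota volume pays the (ad valorem) tariff."""
    over_quota_units = max(0.0, quantity - quota)
    return over_quota_units * unit_price * over_quota_rate_pct / 100


# Hypothetical 10% tariff phased out over a 4-year transition period:
print(equal_increment_schedule(10.0, 4))   # [10.0, 7.5, 5.0, 2.5, 0.0]

# Hypothetical TRQ: 1,000-ton duty-free quota, 10% over-quota tariff,
# $200/ton shipment of 1,500 tons -> only 500 tons are dutiable.
print(trq_duty(quantity=1_500, quota=1_000,
               over_quota_rate_pct=10.0, unit_price=200))   # 10000.0
```

A back-loaded schedule, as used for corn and poultry, would instead concentrate the reductions in the final years of the transition period rather than spreading them evenly.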
Moreover, NAFTA allows the party applying a safeguard to impose duties only up to the level of its Most Favored Nation duties. Many studies projected that Mexico would benefit from improved access to U.S. markets for its agricultural products under NAFTA. However, some observers raised concerns about the difficulties Mexico’s more traditional agricultural producers might encounter as the country opened up to U.S. products. With more than 22 percent of the population dependent on the sector, but with many farmers unable to compete under free market conditions, agriculture is a significant yet vulnerable area of the Mexican economy. Differences in perceived opportunities and challenges resulted from the three distinct types of agricultural producers present in Mexico. Mexico’s agriculture sector consists of a large number of small traditional farmers, some medium size commercially oriented growers, and a smaller number of large modern producers. These groups of farmers differ in many respects, including farm size, access to capital, types of crops produced, and productivity. Small subsistence farmers produce primarily corn (maize), often at subsistence levels for self-consumption, in small parcels of less than 5 hectares of mostly rain-fed land. Corn is also among the major U.S. agricultural exports to Mexico and is perceived by some to be in competition with the production of small subsistence farmers. Medium size farmers are involved in commercially oriented operations; however, they face relatively high cost structures, marked by scarcity of capital and insufficiently developed marketing infrastructure. Some believe that medium size commercial farmers face the greatest impact from import competition and structural change. On the other hand, Mexico’s large commercial farmers usually have larger plots of irrigated land and a higher productivity level.
They have better access to capital, including direct investment and commercial lending from abroad. Mexican commercial farmers are also typically involved in production of higher-valued commodities, notably fresh fruits and vegetables, which have undergone dynamic export growth since the early 1990s. Agricultural trade expansion since NAFTA’s implementation generally has been consistent with expectations. While U.S. trade data indicate Mexican agricultural exports have done well under the agreement, some observers maintain NAFTA has had negative consequences for small farmers. For example, one study asserts that employment opportunities for Mexican subsistence farmers have declined under NAFTA. According to this study, imports of cheaper corn have contributed to lower corn prices in Mexico, which has led medium size farms to cut back their demand for labor supplied by subsistence farmers. However, a December 2003 World Bank report noted that NAFTA did not bring about many of the anticipated negative effects on poor subsistence farmers and had not had a devastating effect on Mexican agriculture as a whole. This research notes that as consumers, Mexican farmers may have benefited from lower corn prices. In addition, corn production in Mexico has not declined, but rather has increased by about 14 percent since NAFTA was enacted, to a record high in 2003. Other research conducted by several Mexican academic institutions concluded that NAFTA had resulted in benefits for the country’s farm sector, including increased agricultural exports and greater investment in agricultural production. As implementation of NAFTA has progressed over the past decade, Mexico has phased out tariffs on agricultural imports in accordance with the agreement’s scheduled transition periods of 4, 9, and 14 years and has done away with a key nontariff trade barrier, import licensing requirements. U.S.
agricultural exporters have benefited both from this process of continued trade liberalization under NAFTA and from the additional assurances provided through the NAFTA dispute settlement mechanism. Exports to Mexico have increased significantly since NAFTA, continuing a trend of export growth that started in the mid-1980s. However, despite the progress made, some U.S. agricultural products continue to experience difficulties gaining access to the Mexican market, typically due to antidumping duties, SPS requirements, safeguards, and other trade measures Mexico has put in place. These difficulties are not unlike challenges U.S. agricultural exports face in other major markets, such as Canada or Japan. Although Mexico had taken several steps to allow greater access to its markets prior to 1994, NAFTA provided a legal agreement and framework through which further market liberalization could take place. Further, NAFTA’s dispute settlement mechanism provided U.S. exporters with additional rules and processes for resolving disputes that did not exist prior to NAFTA. Mexico has thus far implemented its NAFTA commitments by reducing or eliminating tariffs according to schedule and removing nontariff barriers, resulting in greater access for U.S. agricultural goods. In the latest round of tariff eliminations (on Jan. 1, 2003), Mexico eliminated tariffs on more than a dozen commodity imports from its NAFTA partners, including products important to U.S. producers such as rice, soy oil, and pork. On January 1, 2003, in accordance with its commitments under NAFTA, Mexico had eliminated tariffs or TRQs on all but three commodities: corn, dry beans, and milk powder. Two of these commodities, corn and beans, are considered particularly sensitive commodities for Mexican agriculture because they are among the principal crops of small Mexican farmers and are also staples of the Mexican diet.
TRQs on these commodities are scheduled for full elimination by the end of the 14-year transition period in 2008. In addition, Mexico has done away with import licensing requirements, a key nontariff barrier. These import licensing requirements functioned, in effect, as a type of quota, since only the volume of goods authorized under the import license could be imported, and they were intended to protect Mexican producers of agricultural commodities that were sensitive to foreign competition. Prior to NAFTA, many major U.S. agricultural exports to Mexico, such as poultry, dairy, wheat, corn, and dry beans, were subject to import licensing requirements. NAFTA permitted Mexico to use phased-in tariff elimination as a mechanism to transition away from the use of import licensing requirements. Under the agreement, Mexico immediately did away with import licensing requirements and converted them to either regular tariffs or TRQs. Additionally, NAFTA set a schedule to gradually eliminate both the tariffs and TRQs. NAFTA also benefits U.S. exporters by providing them with a formal mechanism for resolving disputes. Under the agreement, disputes that cannot be resolved through consultations between member countries may be brought before impartial, independent panels. Since both the United States and Mexico are members of the WTO as well as NAFTA, the United States can file trade grievances under the dispute settlement mechanism provided by either agreement. According to United States Trade Representative (USTR) officials, the United States generally would utilize the NAFTA dispute settlement mechanism if it determined that Mexico is in violation of a provision that is specific to NAFTA and is not covered under the WTO. These officials explained that the United States would rely on the WTO’s dispute settlement process if the matter also affected WTO members that are not members of NAFTA.
According to information provided by USTR, to date, the United States has brought only one agricultural dispute settlement case against Mexico under NAFTA, compared to four under the WTO process. According to a U.S. Department of Agriculture (USDA) report, most trade disputes are resolved through informal discussions or consultations involving government and private sector representatives, rather than formal dispute settlement procedures. For example, through government-to-industry negotiations, a minimum price agreement was established for U.S. apples, and through government-to-government negotiations, an agreement was reached to modify Mexico’s dry bean quota auctions. In addition, through industry negotiations, a dispute involving U.S. and Mexican grape industry labeling regulation was resolved. The use of industry negotiations also deterred the Mexican cattle industry from filing an antidumping petition against imports of U.S. cattle. Another alternative dispute settlement mechanism is the NAFTA Advisory Committee on Private Commercial Disputes Regarding Agricultural Goods, which recommends less adversarial resolutions to agricultural contract or commercial disputes. Since NAFTA’s implementation, total U.S. agricultural exports to Mexico have nearly doubled, rising from $4.1 billion in 1993—the last year prior to NAFTA’s implementation—to $7.9 billion in 2003 (adjusted for inflation). Between 1993 and 2003, the value of U.S. exports to Mexico grew on average by 17.4 percent annually. By comparison, U.S. agricultural exports to the world grew at an average annual rate of 2.3 percent over the same time period. U.S. exports to Mexico have comprised an increasingly larger share of the United States’ total agricultural exports; Mexico’s share grew from about 8 percent in 1993 to about 13 percent in 2003.
Moreover, according to USDA’s export strategy for Mexico, the full implementation of NAFTA, a growing urban population, increasing per capita income, and lack of arable land make Mexico an excellent long-term prospect for U.S. agricultural products. U.S. agricultural exports to Mexico already underwent significant growth after Mexico joined GATT in 1986 and began opening its market to foreign trade. By the early 1990s, Mexico attained its position as the third largest importer of U.S. agricultural products, after Canada and Japan. The overall increases in agricultural exports to Mexico since NAFTA began came about despite the collapse of the Mexican peso in late 1994, which harmed Mexican purchasing power for foreign goods and triggered an economic downturn. Beginning in about 1996, Mexico’s economy began a recovery, and U.S. exports to Mexico expanded accordingly. Not all increases in exports to Mexico can be attributed to NAFTA because factors such as economic growth, weather, exchange rates, domestic supply, and population growth also affect Mexico’s demand for U.S. products. U.S. imports of agricultural products from Mexico have also increased since NAFTA, rising from about $2.9 billion in 1993 to $6.3 billion in 2003 (adjusted for inflation). Agricultural imports from Mexico increased at an average annual rate of 8.5 percent over the same time period. In 2003, agricultural imports from Mexico accounted for about 13 percent of the total value of U.S. agricultural imports from the rest of the world. Figure 1 shows the total value of U.S.–Mexico agricultural trade. Notwithstanding the potential effects of external factors on trade, NAFTA’s impact on U.S. exports, particularly for certain key commodities, generally appears to have been positive. Earlier studies generally concluded that the agreement would increase U.S. export opportunities for grains, oilseeds, dairy products, tree nuts, and meats. Trends in the trade of the largest groupings of U.S. 
agricultural products have been generally consistent with these predictions. For example, the United States has increased exports of animal products, grains and feeds, fruits and vegetables, and oilseeds to Mexico since NAFTA. From NAFTA’s implementation in 1994 until 2003, the value of exports of these key groups of products underwent average annual increases of between 3.2 percent (oilseeds) and 16 percent (grains and feeds) (see fig. 2). Some U.S. agricultural products continue to experience difficulties gaining access to the Mexican market due to the application of nontariff trade measures. Although Mexico removed import licensing requirements, a key nontariff trade barrier prior to NAFTA, it still applies several nontariff measures that affect imports from the United States. According to USDA, the nontariff measures that present the most significant barriers to market access for U.S. agricultural exports have been Mexico’s application of antidumping duties, SPS requirements, and safeguards. In addition to these trade measures, Mexico has put in place a product tax on all beverages containing sweeteners other than sugar, which has effectively eliminated the Mexican market for high-fructose corn syrup (HFCS). However, these impediments are not unlike market access challenges experienced by U.S. agricultural exports to other major trade partners, including Canada, Japan, and the European Union. The following section presents information on the key nontariff barriers and examples of U.S. agricultural commodities that have encountered market access challenges in Mexico. The information is based, in part, on our analysis of market access issues related to seven selected agricultural commodities: apples, beef, corn, HFCS, pork, poultry, and rice. Our analysis of each of these commodities is presented in greater detail in appendix II. The use of antidumping duties continues to pose a barrier to U.S. agricultural exports to Mexico.
The United States has raised complaints in the WTO regarding Mexico’s application of its antidumping laws on commodities such as hogs, rice, and beef. The United States requested a WTO panel with respect to rice and has argued that Mexico’s imposition of antidumping duties is inconsistent with the WTO Antidumping Agreement. Mexican officials at the Ministry of the Economy (Secretaría de Economía) stated that Mexico’s application of antidumping measures to U.S. agricultural imports was based on an objective and intensive investigation that determined harm. According to representatives from some U.S. producer groups and a former senior Mexican government official, however, there may also be other considerations that affect Mexico’s antidumping decisions. For example, U.S. apple producers question the timing of Mexico’s imposition of antidumping duties on apples in August 2002, only a few months before NAFTA’s tariff rate quota on apples was scheduled to be lifted on January 1, 2003. Additionally, these observers told us that Mexico’s antidumping actions against certain U.S. agricultural imports are, to some extent, a response to U.S. restrictions on Mexican exports to the United States. NAFTA establishes a number of general requirements to ensure that SPS measures are only used to the extent necessary to protect plant, animal, and human health and not as a means to protect domestic producers from competition. As mentioned earlier, NAFTA calls for these measures to be science-based, nondiscriminatory, and transparent and requires that the measures be applied only to the extent necessary to achieve an appropriate level of protection. Mexican officials responsible for plant and animal health protection maintain that Mexico’s SPS measures are based on sound science. However, USDA officials and industry group representatives have raised concerns about the legitimacy of some SPS measures imposed by Mexico on U.S.
agricultural imports as it eliminates tariffs and tariff-rate quotas. U.S. producer groups told us that they believe Mexico sometimes uses SPS measures as a means to retaliate for U.S. policies against its agricultural exports to the United States. For example, some U.S. producer groups contend that in order to protest U.S. phytosanitary controls on imports of avocados from Mexico, Mexico’s agricultural authorities initiated a new policy against U.S. cherries requiring cherry exports to Mexico to undergo a much more rigorous inspection process at the border than is warranted. As a result, U.S. exports of cherries to Mexico dropped significantly because U.S. exporters wanted to avoid delays at the border that would pose risks with such a perishable commodity. Moreover, the 2004 proposed work-plan of phytosanitary measures was not signed. Table 2 illustrates examples of SPS controversies between the United States and Mexico. U.S. officials explained that SPS measures are the most commonly used nontariff measure affecting U.S. market access and may indeed, at times, be applied to protect domestic producers. According to U.S. and Mexican officials, determining when SPS measures are justified can be difficult for several reasons, including different country standards and different conclusions based on scientific data. Officials from USDA’s Animal and Plant Health Inspection Service (APHIS) and its Mexican counterpart SENASICA (Servicio Nacional de Sanidad, Inocuidad y Calidad Agroalimentaria) informed us that they are working to harmonize U.S. and Mexican SPS standards to minimize disagreements. In addition, they are collaborating to lift Mexico’s ban on imports of citrus from Arizona and areas in Texas due to concerns over fruit fly infestation, as well as to design and implement a more satisfactory inspection process for U.S. apple exports to Mexico. 
SPS disputes stemming from differing interpretations of scientific data or differences in regulatory standards illustrate the technical complexity of plant and animal health protection regulations and their impact on trade. U.S. officials told us that working through SPS issues with Mexican authorities under NAFTA provided lessons for later negotiations. They explained that as developing countries liberalize their markets and begin to develop mechanisms to address health risks associated with increased agricultural trade, they often need technical assistance. Thus, the United States provided trade capacity building assistance to address SPS issues for some Central American countries and the Dominican Republic in connection with free trade agreement negotiations with those countries. The USDA Unified Export Strategy for Mexico notes that beyond addressing individual SPS issues there must be broader cooperation with Mexico on technical issues, such as the harmonization of standards, equivalency of regulatory processes, and transparency in light of the increasing market integration of the two countries. U.S. government officials and U.S. agricultural producer groups told us that Mexico’s application of certain safeguards to U.S. agricultural products has been a trade nuisance. In the years following NAFTA, Mexico has applied special agricultural safeguard provisions on imports of U.S. live swine, pork, potato products, and fresh apples in the form of TRQs as provided for in NAFTA. Mexico also applied a safeguard under Chapter 8 of NAFTA on certain U.S. poultry products. Specifically, under NAFTA, Mexico’s TRQ on poultry products was to be eliminated on January 1, 2003. However, in late 2002, Mexico’s poultry industry petitioned the Mexican government to impose a safeguard on U.S. chicken leg quarters. The Mexican industry argued that the elimination of Mexico’s TRQ would result in a surge in imports from the United States that would injure Mexican producers.
USTR officials said the safeguard on poultry was a unique situation and questioned whether a similar arrangement could be achieved in other industries. For more information on U.S. poultry exports to Mexico, see appendix II. The poultry case also highlights difficulties encountered in the implementation of a safeguard due to trade data discrepancies. The United States and Mexico did not agree on the quantity of U.S. chicken leg quarters that were exported to Mexico in the first half of 2003. Mexican data showed a much larger surge than U.S. data. One U.S. official told us that the main reason for the large discrepancy was the way Mexico records its initial import statistics, which is based on notifications of intended imports filed by Mexican importers, rather than actual imports. After the TRQ on poultry expired on January 1, 2003, Mexican importers filed a large number of entries, but some never crossed the border. In response to these difficulties, Mexican officials informed us they have taken steps to clear notices of intended imports from their database when imports do not actually occur within a specified time frame. In addition to the trade measures discussed above, Mexico has imposed a tax on beverages made with sweeteners other than sugar, which has led to a strongly contested dispute between the United States and Mexico regarding market access for U.S. HFCS exports. Specifically, in January 2002, the Mexican Congress imposed a 20 percent product tax on soft drinks and other beverages that use any sweetener other than cane sugar. This action meant that Mexico taxes any beverage containing HFCS, no matter the amount of HFCS present, at a rate of 20 percent, in addition to any other taxes already imposed. U.S. importers and producers of HFCS were affected immediately as Mexican beverage manufacturers switched to the use of domestically produced sugar instead of HFCS imported primarily from the United States.
Although the tax was temporarily suspended by presidential decision for a 4-month period, Mexico’s Supreme Court of Justice unanimously voted to nullify this decision in July 2002. As a result, the tax was imposed once again. In December 2002, the Mexican Congress voted to extend the tax. In 2004, the United States filed a dispute case in the WTO against Mexico’s product tax on HFCS. The case is still pending resolution. See appendix II for more information on the HFCS case. Since the early 1990s, the Mexican government has enacted several agricultural assistance programs to help farmers adjust to the changes brought by trade liberalization, including NAFTA. Rapid urbanization has also created political urgency to provide low-cost food by promoting greater efficiency in domestic food production. The three main programs had a total budget of over $2 billion in 2003, and their objectives range from income support to improving agricultural productivity. However, deep-seated structural problems, notably tenuous land ownership and lack of rural credit, continue to hinder growth and rural development. Opponents of NAFTA have sought to link lagging rural development and rural poverty in Mexico to growing imports of U.S. agricultural products. They oppose further tariff eliminations as called for under NAFTA and demand a renegotiation of the agricultural provisions of the agreement. This opposition presents challenges to Mexico’s successful transition to liberalized agricultural trade under NAFTA. In response to the changes that market reforms and free trade would bring to its agricultural sector, Mexico has enacted various agricultural programs and policies since the early 1990s to help farmers adjust to changing economic conditions.
Three of the most significant agricultural assistance programs have been (1) a major cash transfer program, PROCAMPO (Programa de Apoyos Directos al Campo); (2) an investment program, Alianza (Alianza para el Campo); and (3) a marketing support program (Programa de Apoyos Directos al Productor por Excedentes de Comercialización para Reconversión Productiva, Integración de Cadenas Agroalimentarias y Atención a Factores Críticos, formerly Programa de Apoyos a la Comercialización y Desarrollo de Mercados Regionales). Besides these three programs, there are other support programs in rural Mexico, such as Progresa, which was introduced in 1997 to alleviate poverty through monetary and in-kind benefits, as well as to invest in education, health, and nutrition. The three major agricultural assistance programs have different budget levels and distinct objectives. Appendix III provides a detailed description of each program. PROCAMPO is the largest program in terms of annual budget, amounting to over $1.2 billion in 2003. It provides direct payments, on a per-hectare basis, to producers of oilseeds and grains (including corn). In 2001, it supported 2.7 million producers on 13.4 million hectares. Its objectives are to compensate farmers for expected losses under trade liberalization and the elimination of price subsidies, to make the free trade agreement acceptable to farmers, to alleviate poverty, and to reduce migration from rural areas. Alianza has an annual budget of around $570 million and supports about 2 million farmers. The program provides matching grants to finance productive investments and support services. The overall objective of the program is to improve agricultural productivity by promoting a transition to higher value crops, improving livestock health, facilitating technology transfers, and attracting investment in infrastructure. The marketing support program had an annual budget of about $580 million in 2003 and benefits 240,000 producers.
It provides payments to producers of grains and oilseeds in certain areas, usually on a per-ton basis. The Mexican government’s evaluation suggests that the program provides certainty to farmers’ income and is an important factor in mitigating migration from the countryside. Notwithstanding various farm support programs including the ones discussed above, some researchers and Mexican and U.S. government officials noted that Mexico still needs to address structural impediments that hinder rural development. Some of these problems are related to Mexico’s tenuous land ownership, known as the ejido system. Some economists argue that the small size of farm plots under the ejido system does not make for economically viable production units. In addition, the ejido system limits farmers’ ability to obtain credit using land as collateral because the farmers do not have clear ownership of the land. Without access to credit, farmers cannot shift to new technologies and increase productivity. According to experts, the lack of rural credit has been a key impediment to Mexican agricultural development. Mexico’s financial crisis of 1995 exacerbated the problem of rural development by severely limiting the Mexican government’s budget available to carry out programs to invest in rural areas. In addition, according to USDA, other challenges identified by experts that contribute to the lack of rural development include: low education level, poor rural infrastructure, environmental problems related to land use, and low levels of technology. While U.S. officials note that NAFTA has greatly benefited Mexican agriculture overall, they express concern about the challenges posed by lagging rural development to the long-term successful implementation of the agreement. U.S. 
officials caution that lagging rural development fuels the arguments made by opponents of NAFTA that cheap imports from the United States have depressed Mexican agricultural product prices, hurting small farmers and deepening rural poverty. In its fiscal year 2005 Unified Export Strategy for Mexico, USDA acknowledged the need for efforts to highlight the benefits of NAFTA for Mexico’s economy while seeking ways to help Mexico address its rural development issues. The implementation of NAFTA became a major political issue as Mexico prepared to eliminate tariffs and tariff rate quotas in January 2003. Elimination of these tariffs provided U.S. agricultural exports even greater access to the Mexican market. In order to respond to intense criticism by opponents of NAFTA at that time, USDA officials had to engage in extensive dialogue with Mexican legislative and executive officials, and they mounted a public information drive to explain the benefits of NAFTA for Mexican agriculture. Ultimately, Mexico eliminated the tariffs, but the administration of Mexican President Vicente Fox found it necessary to negotiate a national agreement on agriculture with various domestic constituencies. Fox intended the agreement—referred to as Acuerdo Nacional para el Campo—to address concerns about perceived negative effects of trade liberalization on Mexico’s rural poor. As part of this agreement, the Mexican government commissioned several Mexican academic institutions to study the impacts of NAFTA on Mexican agriculture. This research generally confirmed that structural problems confronting Mexican agriculture preceded the implementation of NAFTA. However, certain Mexican producer groups continue to pressure the government, and a number of members of Mexico’s Congress have strong ties to groups that oppose NAFTA. U.S. and Mexican government officials and agricultural experts warned that there may be considerable opposition to the next round of tariff elimination in 2008. 
These officials cited the experience in the months leading up to the latest round of agricultural tariff elimination in 2003. In addition, they note that corn, one of the three remaining commodities scheduled to have tariffs lifted in 2008, is a commodity of particular concern in Mexico. Corn cultivation has ancient roots in Mexican rural culture; is central to the Mexican diet, accounting for about one-third of total calories; and remains the principal crop of subsistence farmers. For these reasons, eliminating tariffs on corn will be a sensitive cultural issue, as well as a matter of economic concern. Certain farm groups in Mexico have argued that allowing cheap imports of U.S. corn will drive Mexican agriculture into ruin. Mexican politicians who oppose NAFTA note the continuing economic distress in rural areas of Mexico and insist on renegotiation of the agricultural provisions of the agreement to improve the conditions of Mexican farmers. Although the total elimination of already low Mexican tariffs on corn may not have much economic significance for U.S. producers, failure to comply with the final phase of tariff elimination may undercut support for NAFTA among U.S. producers who favored the agreement in the expectation that it would lead to genuinely free trade. Additionally, U.S. trade officials have expressed serious reservations about any attempt to renegotiate the agricultural provisions of NAFTA, because it could lead to demands to renegotiate other aspects of the agreement and undermine the agreement as a model for trade liberalization throughout the Western Hemisphere. Over the last 10 years, U.S. agencies, primarily led by USDA, have carried out numerous activities that benefit both U.S. and Mexican agricultural interests. However, these activities have not been intended to address the challenges presented by lagging rural development to Mexico’s transition to liberalized trade under NAFTA. 
While the United States provides technical assistance to more recent free trade partners to facilitate their adjustment to trade liberalization, no such assistance was arranged for Mexico under NAFTA. Nevertheless, since 2001 the United States has supported collaborative efforts to promote economic development in the parts of Mexico where growth has lagged under the Partnership for Prosperity (P4P) initiative. Officials from both countries are working on a broader approach to Mexican rural development under the initiative, but they recognize that much still needs to be done in this area. In an effort to support rural development through P4P, the United States has provided some limited technical assistance to the Mexican government’s new rural lending institution. Recognizing the importance of rural development to the successful implementation of NAFTA, State Department and USDA strategies for Mexico call for building on collaborative activities under P4P to pursue the related goals of rural development and trade liberalization under NAFTA; however, the P4P action plans do not set forth specific strategies and activities that could be used to achieve these goals. Historically, U.S. agencies have undertaken numerous collaborative agricultural efforts of mutual interest with their Mexican counterparts; however, the agencies have not intended those efforts to address the challenges presented by lagging rural development. USDA, in conjunction with its Mexican counterparts, has led most of these efforts as part of its traditional mission of supporting U.S. agricultural production and exports. With the exception of pest eradication efforts sponsored by the Animal and Plant Health Inspection Service (APHIS)—approximately $280 million over the past 10 years—all USDA activities have involved modest funding of less than $8 million combined since NAFTA was implemented. Some U.S. 
agencies have been involved in collaborative efforts with Mexico in pursuit of plant, animal, and human health objectives. USDA’s APHIS and Food Safety and Inspection Service and the Food and Drug Administration have implemented several programs in Mexico to protect U.S. agriculture and consumers while also facilitating the export of Mexican agricultural products. For example, APHIS programs are working with the Mexican government and growers to eradicate the Mediterranean fruit fly. Eradicating the fruit fly is of great interest to U.S. fruit farmers. However, eliminating the fly would also allow Mexican farmers to eventually export fruit crops from formerly infested areas. Over the past 10 years, APHIS has used almost all of its funds in Mexico to finance various collaborative pest eradication efforts. USDA’s research, data collection, and marketing agencies, such as the Economic Research Service (ERS), National Agricultural Statistics Service, and Agricultural Marketing Service, have worked with their Mexican counterparts to enhance Mexico’s capacity to collect, analyze, and disseminate agricultural information. According to ERS officials, these efforts have improved and facilitated agricultural trade transactions through the Emerging Markets Program. ERS officials said that while the focus of the Emerging Markets Program is to improve Mexico’s data gathering and reporting systems, USDA has also benefited from Mexico’s improved capabilities because having reliable information facilitates public and private decision making for both the United States and Mexico. The Agricultural Research Service and the International Cooperation and Development area of USDA’s Foreign Agricultural Service have participated in extensive scientific and academic research to improve Mexico’s agricultural production. 
According to the Agricultural Research Service, there are several concerns over agricultural trade, including food safety, use and consumption of transgenic products, and control of plant and animal pests and diseases. For a list and description of collaborative activities with Mexico implemented by USDA agencies, see appendix IV. While the United States has provided technical assistance and support to more recent free trade partners through trade capacity building (TCB), no such assistance was arranged for Mexico when NAFTA was concluded in 1994. TCB became an element of U.S. trade policy after it was introduced under the WTO Doha Development Agenda in 2001. While it was recognized during the NAFTA negotiations that some agricultural sectors in Mexico would find it challenging to adjust to free market conditions, the agreement did not require that Mexico receive any assistance to facilitate the transition of its farmers to a more open market. One senior Mexican government official noted that in hindsight TCB or some similar type of assistance would have been beneficial as Mexico entered into a free trade environment with two very strong economies (the United States and Canada). However, this official stressed that Mexico has done very well under NAFTA overall, although small farmers have not typically benefited from economic opportunities provided by the agreement. Even though the United States does not have a comprehensive effort to provide TCB assistance to Mexico, some U.S. agencies have undertaken limited activities in Mexico, which they have characterized as TCB. In 2001, U.S. President George W. Bush and Mexican President Vicente Fox launched the P4P initiative, a new model for bilateral cooperation involving a public–private approach to collaborative development efforts. This new initiative is aimed at assisting those economically depressed regions of Mexico that are the primary sources of migration. These areas tend to be rural regions in Mexico. 
While P4P seeks to create a new model for collaborating on economic development in Mexico, officials from both countries recognize that few activities have been implemented under P4P that directly affect poor rural areas and that much still needs to be done in the area of rural development. P4P seeks to create a public–private alliance and develop a new model for U.S.–Mexican bilateral collaboration to promote development, particularly in regions of Mexico where economic growth has lagged and has fueled migration. No new funds were specifically allocated to P4P by either government; instead, the U.S. government sought to refocus resources already devoted to Mexico to create a more efficient collaborative network. According to State Department and USDA officials, since its establishment, P4P has become the umbrella for bilateral development collaboration, providing a broader approach to Mexico’s rural development needs that includes occupational and economic alternatives for people in the countryside. While this broader approach to rural development has been embraced by both the United States and Mexico, few activities have been implemented under P4P that directly affect poor rural areas. At the most recent P4P conference in Guadalajara, Mexico, a high-level State Department official responsible for P4P noted that many rural areas throughout central and southern Mexico have not yet been touched by P4P. Similarly, Mexican government officials commented that even though the P4P concept holds much promise, only a few new activities have been undertaken in rural development. For example, Mexican government officials told us, and U.S. government documents confirm, that approximately $10 million allocated for USAID rural development activities in Mexico under P4P has not yet been used to fund any new projects. Nevertheless, since the initiation of P4P, there have been several first-time achievements that benefit Mexico’s overall economic development. 
For example, under an arrangement worked out by the U.S. and Mexican governments in cooperation with private sector financial institutions, the cost of remittances from the United States to Mexico has dropped by more than 50 percent over the last 3 years. Remittances from Mexican laborers living in the United States reached a record $16.6 billion in 2004. In addition, in 2003 a bilateral agreement was reached through P4P to allow the U.S. Overseas Private Investment Corporation (OPIC) to operate in Mexico for the first time. The agency’s mission is to help U.S. businesses invest overseas to foster economic development in new and emerging markets. According to OPIC officials, for over 30 years the Mexican government had resisted allowing the agency to operate in Mexico because of concerns over sovereignty. Since the bilateral agreement was signed, OPIC has provided financing for five projects in Mexico, including one related to agriculture. For a description of this and other activities related to rural development by U.S. agencies under P4P, see appendix V. One of the few P4P activities to target rural communities is the U.S. technical assistance provided to the Mexican government’s new rural lending institution, Financiera Rural. Financiera Rural supports agricultural and other economic activities in Mexico’s rural sector with the goal of raising productivity as well as improving the standard of living of rural populations by facilitating access to credit. Through the USDA Cochran Fellowship Program, several Financiera Rural officials were trained in the United States on how to operate a rural credit program. These officials will serve as trainers for credit managers at Financiera Rural. In addition, through a USAID fellowship, USDA arranged for a U.S. expert to assist Financiera Rural in developing a strategic plan. 
This strategic plan calls for the development of rural financial lending intermediaries in Mexico, which will be fostered using a model that complies with Mexico’s legal framework, to be determined by a study conducted jointly by Financiera Rural and the Inter-American Development Bank. The new strategic plan also proposes that Financiera Rural fund any productive endeavor in the countryside, not only agricultural production. Activities could include eco-tourism, rural gas stations, transportation services, and so on. According to senior Financiera Rural officials, U.S. technical assistance under P4P has been instrumental in helping them roll out their rural credit program. Financiera Rural officials told us that while the assistance they have received under P4P has had a positive impact, it has been limited. They said that Financiera Rural faces a great challenge in its efforts to address limited credit availability in the countryside, which, as noted earlier in this report, is a key factor in Mexico’s lagging rural development. These officials explained that in order to establish an effective rural lending system for small- and medium-size farmers in Mexico, they need to shift from primarily short-term to long-term credit, develop a network of regional and local intermediary lending institutions, and provide financing for alternative rural economic activities beyond direct agricultural production. Mexican and U.S. officials told us that in order to accomplish these goals, Financiera Rural needs to develop expertise in a number of areas, such as risk assessment, project management, and loan evaluation. These officials stated that the expertise in the field of rural credit that exists in the United States would be helpful in ensuring that Financiera Rural is successful in providing credit to small farmers and other entrepreneurs in the Mexican countryside. 
P4P offers an avenue for the United States to provide technical assistance and support to Mexico similar to what it has provided to more recent free trade partners through TCB, according to a senior USDA official. Similarly, Mexican officials said P4P provides the opportunity to make technical assistance available in areas such as rural development, which have not yet benefited from NAFTA. Recognizing the importance of rural development to the full and successful implementation of NAFTA, the State Department’s Mission Performance Plan and USDA’s Unified Export Strategy for Mexico call for building on collaborative activities under the P4P to pursue rural development and support trade liberalization. However, P4P documents generally have little to say about furthering Mexico’s successful transition to liberalized agricultural trade under NAFTA, and P4P action plans do not set forth specific strategies and activities that could be used to advance rural development in support of free trade. The lack of specific plans under P4P to pursue rural development in support of NAFTA is particularly noteworthy because USDA officials expressed concerns that Mexico’s lagging rural development presents a challenge to the successful transition to liberalized trade under NAFTA, including the elimination of remaining tariffs in 2008. USDA officials noted that the underlying factors in Mexico’s lagging rural development are structural and need to be addressed internally by Mexico. Nevertheless, USDA’s Unified Export Strategy for Mexico calls for coordination with the U.S. Agency for International Development to pursue a rural development strategy under the rubric of the P4P initiative. This document also acknowledges the need to continue to underscore the benefits of free trade for Mexico under NAFTA while seeking ways to help Mexico address its rural development issues. USDA officials stressed that it is critical to change the debate from the need for protection from U.S. 
imports to promoting rural development in Mexico so that small and medium farmers can take advantage of the opportunities provided by free trade. As tariffs and tariff-rate quotas have been reduced or eliminated under the provisions of NAFTA, Mexican authorities have come under pressure to put in place technical barriers to protect producers from perceived harm from growing U.S. imports. Moreover, while Mexico has taken the steps called for under NAFTA to liberalize trade, lagging rural development fuels opposition to further implementation of the agreement. Yet the full and successful implementation of NAFTA is an important factor in assuring market access for U.S. agricultural exports to Mexico, and it is critical to broader U.S. trade interests because NAFTA is a model for trade liberalization in the Western Hemisphere. While the strategies of U.S. agencies in Mexico see an opportunity to build on the P4P initiative to pursue the related goals of rural development and trade liberalization under NAFTA, P4P documents generally have little to say about NAFTA. More specifically, the P4P action plans do not set forth specific strategies and activities that could be used to advance rural development in support of free trade. P4P offers an opportunity for the United States to design a comprehensive, multi-agency strategy to address the challenges presented by lagging rural development to Mexico’s transition to liberalized agricultural trade under NAFTA, rather than providing assistance through individual measures. Mexico’s experience adjusting to the challenges of trade liberalization, including difficulties associated with the application of SPS measures, problems raised by trade data discrepancies with the United States, and lagging rural development, illustrates the importance of technical assistance. While Mexico did not seek assistance under NAFTA to adjust to trade liberalization, the U.S. 
government has acknowledged the usefulness of technical assistance in addressing such challenges by providing TCB assistance in later trade agreements with developing countries. In Mexico, P4P offers an avenue for the United States to provide such technical assistance. A key impediment to Mexican rural development is the lack of credit in the countryside, and the United States, with its significant experience in rural lending, has the technical expertise Mexico seeks. Moreover, most of Mexico’s structural impediments must be dealt with internally, but facilitating rural credit is one area in which the United States, through P4P, is in a position to collaborate with Mexico. Improving the rural economy through credit facilitation increases the opportunities for Mexican importers of U.S. agricultural commodities and begins to counter negative perceptions of NAFTA’s impact. To aid the full and successful implementation of NAFTA, we recommend that the Secretary of State, as the head of one of the lead agencies for the P4P initiative, work with USDA and other relevant agencies to develop an action plan under P4P laying out specific collaborative efforts on rural development that would support the successful implementation of NAFTA. Such a plan could include a comprehensive strategy that outlines specific activities that are intended to address the challenges presented by lagging rural development to Mexico’s successful transition to liberalized agricultural trade under NAFTA, and sets time frames and performance measures for these activities. 
To promote rural development in Mexico and enhance Mexican small farmers’ ability to benefit from trade opportunities under NAFTA, which would also help shape a more positive perception of the agreement, we recommend that the Secretary of State, as the head of one of the lead agencies for the P4P initiative, work with USDA and other relevant agencies to expand collaborative efforts with the Mexican government to facilitate credit availability in the countryside. This would include providing Mexico with expertise in the area of rural financing, such as risk assessment, project management, and loan evaluation. We provided a draft of this report to the Department of State, USDA, USTR, USAID, FDA, and OPIC for their review. We received formal written comments from the Department of State and from USDA, which are reprinted in appendixes VI and VII, respectively, along with our responses to specific points. In its written comments, the Department of State agreed with the need to develop a P4P action plan on rural development and noted that on February 17, 2005, the U.S. and Mexican governments agreed to create a new structure under P4P establishing seven permanent working groups, including one on rural development. Each of these working groups has been asked to develop an action plan for 2005 activities. The Department of State also emphasized that the broader goal of P4P is to spur economic growth and development in parts of Mexico that have benefited less from NAFTA (i.e., not limited to rural development) and noted that the P4P initiative must work within existing resources. The Department of State raised concerns that the report generally overstates the strength of opposition to NAFTA in Mexico. However, we do not believe we have overstated the opposition to NAFTA in Mexico. As noted in the report, U.S. and Mexican officials expressed concerns about how negative perceptions of NAFTA may impact successful implementation of the agreement. 
In addition, the report recalls the difficulties experienced in Mexico in anticipation of tariff elimination under NAFTA in 2003. In its letter, USDA expressed readiness to work with the Department of State and with other agencies, under P4P, to develop collaborative efforts to support Mexican rural development and facilitate the continued and successful implementation of NAFTA. The Department of State, USDA, USTR, OPIC, and FDA also suggested clarifications, technical corrections, and elaboration of certain points, which we have incorporated into this report as appropriate. USAID comments were incorporated in the formal letter from the Department of State. We also obtained comments on key sections of the report from the Mexican Ministry of the Economy (SE), the Ministry of Agriculture (SAGARPA), and Mexico’s rural lending institution for small and medium farmers (Financiera Rural). SE and SAGARPA submitted joint comments. While commending the overall positive portrayal of the U.S.–Mexican agricultural trade relationship, SE and SAGARPA expressed concern that the report did not sufficiently underscore the importance of the Mexican market for U.S. exports under NAFTA. They cited U.S. trade data to illustrate the dramatic growth in certain U.S. commodity exports to Mexico since NAFTA has been in effect. They noted that Mexico is the largest foreign market for U.S. beef and rice and the second largest foreign market for U.S. corn, pork, poultry, and apples, some of the commodities our report highlights to illustrate the effects of Mexican trade measures. Additionally, SE and SAGARPA commented that our report did not provide a sufficiently detailed and objective analysis regarding the nature and validity of various Mexican trade measures. 
These agencies expressed concern that the report unfairly portrays various Mexican trade measures without an adequate evaluation of the facts behind Mexico’s implementation of these measures, such as the scientific support for certain SPS requirements and the legitimate findings of antidumping investigations. SE and SAGARPA also objected to the report’s reliance on the testimony of parties directly impacted by these measures. Similarly, SE and SAGARPA expressed disappointment that the report does not examine U.S. trade measures that impact Mexican agricultural exports to the United States, which parallel many of the difficulties faced by U.S. agricultural exports to Mexico. Finally, SE and SAGARPA also stressed that the debate over the impact of NAFTA on the Mexican rural economy does not have any substantive implications for the implementation of Mexico’s obligations under the agreement. GAO fully recognizes, and our report documents, the vital importance of the Mexican market for U.S. agricultural exports. We note that the value of U.S. agricultural exports to Mexico grew rapidly, increasing on average 17.4 percent annually and almost doubling from 1993 to 2003. We also point out that Mexico is the third largest market for U.S. agricultural exports and that its share of the U.S. agricultural export market has risen from 8 percent in 1993 to about 13 percent in 2003. Regarding the concerns raised by SE and SAGARPA about the nature of GAO’s analysis, we believe the report presents a balanced and objective description of key Mexican trade measures that affect U.S. agricultural exports to Mexico. Consistent with GAO’s overarching mission to help improve the performance and accountability of U.S. government programs and activities, our report provides recommendations to the Department of State and USDA to help ensure the successful implementation of NAFTA. 
Since it is outside GAO’s jurisdiction to audit foreign government programs and procedures, our treatment of Mexican trade measures is descriptive, not evaluative. We include testimonial as well as other evidence in our report in order to illustrate the positions of various parties. Throughout the report we have included the views of responsible Mexican officials and have added clarifications to the report in response to specific comments made by these Mexican agencies. For example, we added language to the report to clarify that the existence of a case under dispute settlement proceedings does not necessarily mean a trade partner’s actions violate the provisions of NAFTA or other trade agreements. Similarly, we eliminated references to difficulties related to labeling requirements and import permits, which, as USDA officials have acknowledged, have not been used frequently by Mexico. Instead, we focused only on Mexico’s tax on beverages containing nonsugar sweeteners. In addition, our report covered a number of areas, including collaborative activities of U.S. agencies in Mexico and concerns about the long-term success of NAFTA, as well as Mexican trade measures that impact U.S. agricultural exports to Mexico. While we are aware that Mexican agricultural exports to the United States also encounter challenges meeting U.S. import requirements, these issues were outside the scope of this project. We have included language clarifying the scope of our work in this report. Regarding the point raised by SE and SAGARPA on Mexico’s determination to proceed with the implementation of NAFTA, our report does not question the commitment of Mexican authorities to fulfill their obligations under the agreement. However, both U.S. and Mexican officials have expressed concerns about how negative perceptions of NAFTA may impact successful implementation of the agreement. 
Some of these officials recalled the difficulties experienced at the time of the 2003 tariff eliminations, including mass demonstrations against NAFTA, calls for a moratorium on implementation of the agreement, and pressure to renegotiate the agricultural provisions of NAFTA. We believe that in accordance with U.S. government pronouncements regarding the importance of NAFTA for U.S. farm interests, it is appropriate for U.S. agencies to actively plan to support the successful implementation of the agreement. In addition to these broader comments on the report’s presentation and approach, SE and SAGARPA provided technical comments and clarifications on Mexican agricultural programs, such as clarification on PROCAMPO payments, and on the crops included under the Direct Payments for Target Income subprograms. We have made a number of changes in the report to reflect their comments. Financiera Rural had only one technical comment on our representation of that agency’s strategic plan, which we have incorporated into our report. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to appropriate congressional committees and to the U.S. Trade Representative and the Secretaries of the Departments of Agriculture and State. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4347 or at [email protected]. Other GAO contacts and staff acknowledgments are listed in appendix VIII. To obtain information about the progress made, as well as difficulties encountered, in gaining market access for U.S. 
agricultural exports to Mexico, we reviewed the commitments in NAFTA, including the tariff elimination schedules for agricultural products. We reviewed official documents related to various phases in the implementation of NAFTA and met with USDA and USTR officials to document progress made on each phase of tariff elimination. We studied trade flows to track changes in U.S. agricultural exports to Mexico, both at the aggregate level and at the product level, using USDA’s Foreign Agricultural Trade of the United States database. We discussed the limitations and reliability of the trade data with USDA officials and determined that the trade data reported by USDA are sufficiently reliable for the purpose of this report. We used various price indexes to adjust trade values for inflation, converting them to constant 2003 dollars. We reviewed USDA publications on the Mexican market for U.S. agricultural products, and we reviewed studies by U.S. government and academic sources on the impact of NAFTA on U.S. exports to Mexico. We met with officials from USTR, USDA, and various producer groups to ascertain the progress and the difficulties in market access for U.S. agricultural exports to Mexico. We obtained from USTR a list of trade disputes with Mexico since NAFTA and reviewed WTO and NAFTA documentation on these agricultural trade dispute settlement cases. While we describe Mexico’s use of trade measures, we did not evaluate the validity of their application. To illustrate the scope and type of market access issues faced by U.S. agricultural exports to Mexico, we selected seven commodities to analyze and present as case studies. Our analysis and criteria for selecting the commodities are presented in appendix II. 
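The inflation adjustment described in this methodology discussion can be sketched in a few lines; the price index levels and the export value in the example below are assumptions for illustration only, not figures from this report or from the indexes we actually used.

```python
# Minimal sketch of deflating nominal trade values to constant 2003 dollars
# with a price index. All numbers here are hypothetical.

def to_constant_dollars(nominal_value, index_that_year, index_base_year):
    """Deflate: real value = nominal value * (base-year index / same-year index)."""
    return nominal_value * (index_base_year / index_that_year)

# Assumed price index levels (2003 = 100); illustrative values only.
price_index = {1993: 78.5, 2003: 100.0}

nominal_1993_exports = 3.6  # billions of nominal dollars (hypothetical)
real_1993 = to_constant_dollars(
    nominal_1993_exports, price_index[1993], price_index[2003]
)
print(round(real_1993, 2))  # prints 4.59, i.e., billions of constant 2003 dollars
```

Because prices rose between 1993 and 2003, the same nominal value represents more constant 2003 dollars, which is why trend comparisons across years use the deflated series.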
In order to review how Mexico has responded to the challenges and opportunities presented by free trade in agriculture and explore remaining challenges to the successful implementation of NAFTA, we reviewed relevant studies and research prepared by the Mexican Ministry of Agriculture (Secretaría de Agricultura, Ganadería, Pesca y Alimentación–SAGARPA), the World Bank, the United Nations Food and Agriculture Organization, and USDA. We conducted an extensive literature search, screening the results to identify the most appropriate research and studies. We considered various screening criteria, including source, timing, and venue of publication. We cross-checked key conclusions in various studies to assess their credibility. We reviewed the methodologies described for the studies we report on to determine their limitations. We also interviewed several authors of key studies we used in our report to clarify our understanding of their methodology and conclusions. Finally, we discussed the conclusions of these studies with other experts, including agricultural researchers and U.S. and Mexican government officials with expertise in the area of Mexican agriculture. We obtained data from SAGARPA and the Mexican National Institute of Statistics, Geography, and Information Technology (Instituto Nacional de Estadística, Geografía e Informática) on agricultural production. We did not assess the reliability of the production data; however, the general trend of production is consistent with what is widely reported in other studies. We reviewed official Mexican government documents and other studies, which describe the major agricultural policies in Mexico since the early 1990s. We interviewed current and past SAGARPA officials and officials from the Ministry of the Economy (Secretaría de Economía–SE) who are familiar with current agricultural programs and the evolution of these programs under NAFTA.
We obtained information from USDA agencies (FAS, APHIS, ERS, NASS, ARS, FSIS, and AMS) and from FDA on agriculture-related collaborative activities they have undertaken in Mexico for the 10 years that NAFTA has been in effect (1994 through 2004). This information included activity descriptions and funding by agency. To assess the quality and reliability of the data submitted by each agency, we interviewed the agency officials responsible for the data and reviewed the data provided. When we noted discrepancies or gaps in the data, we discussed these with the agency officials and obtained corrections or clarifications. Based on our work, we determined that the data were sufficiently reliable to portray overall levels of expenditures and the nature of these activities. For USDA agencies, we compiled these data into a set of tables presented in appendix IV. These tables reflect funding for activities implemented by these agencies from 1994 through 2004; however, some of the agency activities started before 1994, while others were concluded before 2004. For FDA, we present a summary description of agency activities in the same appendix. We met with State Department officials in Washington, D.C., and U.S. embassy officials in Mexico to discuss U.S. efforts under the Partnership for Prosperity (P4P). We reviewed documents from the Department of State on P4P, including the 2002 and 2003 P4P reports to Presidents Bush and Fox, the P4P Action Plan, testimonies by State officials, and press releases on P4P activities. In order to report on P4P activities related to agriculture or rural development, we discussed agency plans and ongoing activities with USDA, U.S. Agency for International Development, and Overseas Private Investment Corporation officials.
We also discussed the impact of P4P with Mexican government officials from SAGARPA, the Mexican Ministry of the Economy (SE), the Mexican Ministry of Foreign Affairs (Secretaría de Relaciones Exteriores), and Mexico’s rural lending institution for small and medium size farmers (Financiera Rural). We conducted our review from February 2004 through February 2005 in accordance with generally accepted government auditing standards. To illustrate the range of market access barriers faced by certain U.S. agricultural exports to Mexico, we selected seven products to analyze and present as case studies: apples, beef, corn, high-fructose corn syrup (HFCS), pork, poultry, and rice. Each of the case studies includes a brief background and history of the exported product’s experience accessing the Mexican market, a description of the types of market access barriers each product faces, and a summary of the current status of market access issues. We selected commodities as representative of (1) products at various stages of the tariff elimination schedule; (2) different agricultural sectors— for example, grains (rice), horticultural products (apples), and meat (pork); (3) products that face varying types of tariff and nontariff barriers; (4) the range of mechanisms used in attempting to settle market access disputes; and (5) varying levels of export volume and value. Information presented in the case studies is based on our analysis of trade data, review of U.S., Mexican, WTO, and NAFTA official documents, and interviews with U.S. and Mexican government officials and various private sector representatives. Prior to NAFTA, Mexico restricted access to its fresh apple market through import licensing requirements and the application of a 20 percent tariff. In 1991, Mexico eliminated the licensing requirements. As part of its NAFTA commitments, Mexico established TRQs on apples, which were to be phased out over a 9-year period and result in duty-free access for U.S. 
apple imports by 2003. USDA reports that U.S. apple exports to Mexico have exceeded these specified TRQ amounts in each of the years following NAFTA's implementation. The United States is the world's leading apple producer, and apples comprised the largest portion of fruit exports to Mexico in 2003. U.S. apple exports to Mexico accounted for nearly 23 percent of U.S. worldwide apple exports. Between 1994 and 2003, the total quantity of fresh apple exports to Mexico increased by an average of 4.7 percent annually, and the value of exports totaled nearly $71 million in 2003 (see fig. 3). A key market access issue for U.S. apple exporters is the way Mexico has sought to oversee the application of its phytosanitary requirements. Mexico requires phytosanitary certificates for U.S. apples due to concerns about apple maggots in shipments. According to USDA's Economic Research Service, most countries accept U.S. systems approaches for pest management as adequate protection against the threat of apple maggot. Mexico, however, requires that apples undergo a process called "cold treatment" before U.S. apple shipments can be imported into Mexico. Additionally, Mexico required that the Mexican government inspect and certify U.S. storage and treatment facilities. The treatment and inspection process increased U.S. producers' cost of exporting apples to Mexico. In 1998, Mexico turned over supervision of the inspection program to USDA. Nevertheless, according to the U.S. Apple Association, some apple-producing states have been effectively shut out of the Mexican apple market because of the prohibitive treatment and certification costs. For example, the association representative noted that producing states like Pennsylvania, the fourth largest apple-producing state in the country, cannot recoup the "hundreds of thousands of dollars" of costs incurred through these inspections.
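Average annual growth figures like those cited throughout these case studies (for example, 4.7 percent annually for apples) are often computed as a compound annual growth rate between the first and last years. A minimal sketch of one common way to compute such a figure, with hypothetical endpoint values rather than actual export data:

```python
# Sketch of a compound annual growth rate (CAGR) calculation of the
# kind that can underlie "increased by an average of X percent
# annually." The start/end quantities below are hypothetical.

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values, in percent."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical: exports double from 100 to 200 (thousand metric tons)
# over the 9 years between 1994 and 2003
growth = cagr(100.0, 200.0, 9)
print(round(growth, 1))  # 8.0
```

Note that a compound rate depends only on the endpoint years, so large swings in intermediate years (such as the 1995 peso-crisis dip) do not affect it.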
In addition to Mexico’s phytosanitary treatment and certification requirements, Mexico initiated an antidumping investigation against U.S. apples in 1997 and imposed a preliminary import duty of more than 100 percent on Red and Golden Delicious apples. In 1998, the U.S. apple industry and the Mexican government signed an agreement suspending this duty, and the U.S. industry agreed to comply with a minimum-price scheme. U.S. apple exports to Mexico declined in 1998 (when the antidumping duty was in place) but experienced large, successive increases in 1999, 2000, and 2001 under the price agreement. However, in August 2002, the minimum price scheme was dropped at the request of Mexican growers, and Mexico resumed the dumping case and imposed antidumping duties of more than 45 percent on U.S. apples. As a result, U.S. exports decreased in 2002 and 2003. According to the U.S. Apple Association, the timing of the Mexican imposition of the dumping duty was notable, since NAFTA’s tariff rate quota and duty on apples were to be lifted on January 1, 2003. For this reason, the association noted that many U.S. apple exporters question the merits of the dumping allegations and maintain that Mexico is inappropriately restricting market access in order to protect its domestic industry. U.S. apple industry representatives note that Mexico’s policies restrict U.S. producers’ access to Mexico’s market. The U.S. apple industry notes that the treatment certification process takes several years and can be prohibitively costly in U.S. states where there are fewer producers to share costs. Furthermore, the U.S. apple industry is very fragmented, which is a significant challenge in dealing with market access problems in Mexico. For example, even though producers find the certification process burdensome, the industry does not have a joint strategy on how to address this problem. 
In 1992, 2 years prior to NAFTA’s implementation, Mexico raised tariffs on imported beef from zero to 20 percent. Per NAFTA, Mexico immediately eliminated these tariffs on imports of most U.S. beef products, and U.S. beef exports to Mexico increased. The recession that followed the 1994 peso crisis caused U.S. beef exports to Mexico to drop sharply by 1995, and exports did not recover fully until 1997. U.S. beef exports have grown steadily since 1995, and USDA notes that this increase is linked partially to the continuing improvements in the Mexican economy. Between 1994 and 2003, the volume of U.S. beef exports to Mexico increased by an average of 21 percent annually, and beef exports to Mexico accounted for 22.4 percent of the volume of U.S. beef exports worldwide (see fig. 4). The value of exports to Mexico in 2003 totaled $604 million. Although the volume of U.S. exports to Mexico has been increasing steadily over the past 10 years, market access for U.S. producers has been affected by antidumping actions and a ban on U.S. beef following the discovery in the United States of one cow (originally imported from Canada) with bovine spongiform encephalopathy (BSE) or “mad cow disease.” First, in 1994, the Mexican National Livestock Association initiated an antidumping case against certain types of beef imports by claiming discriminatory pricing on the part of U.S. exporters. Following industry-to-industry negotiations, the U.S. National Cattlemen’s Beef Association and the Mexican National Livestock Association signed a memorandum of understanding that formalized an agreement to (1) share U.S. technologies with Mexican producers and (2) coordinate both groups’ efforts to promote beef consumption in Mexico. As a result, the Mexican National Livestock Association dropped the dumping petition. However, in 1998 charges were made once again that the United States was dumping beef in Mexico. On August 1, 1999, Mexico announced antidumping tariffs that varied by company. 
Individual U.S. beef exporters appealed these tariffs, and on October 10, 2000, Mexico published a set of revised antidumping tariffs for certain beef exporters. These duties range from zero to 80 cents per kilogram, depending on the company and the type of beef. On June 16, 2003, the United States requested WTO consultations on Mexico's antidumping measures on rice and beef, as well as certain provisions of Mexico's Foreign Trade Act and its Federal Code of Civil Procedure. In addition, a NAFTA Chapter 19 panel is expected to rule shortly on whether these duties were applied in accordance with Mexican law. According to the National Cattlemen's Beef Association, the root of the beef trade dispute in Mexico lies in the lack of differentiation between the values for various cuts of meat. In Mexico, the different cuts of beef generally all have the same value, whereas in the United States different cuts of beef have different values. These differing values have led to antidumping cases against the United States because any commodity that sells abroad for less than its value in the home country is considered to be dumped. According to the National Cattlemen's Beef Association representative, demand for variety meats (such as tripe and liver) is significantly higher in Mexico than it is in the United States. Because of these demand conditions, U.S. exporters can sell variety meats at a lower price, which leads Mexico's industry to believe the United States is dumping these products on the Mexican market. In addition to facing dumping duties, the detection of one case of BSE in the United States in December 2003 led Mexico to impose a ban on all U.S. beef products. In March 2004, Mexico was the first country to reopen its market to certain types of U.S. beef products (boxed beef from cattle under 30 months of age), expanding the list of allowable beef products in April 2004, and USTR reports that the U.S.
government is working to re-open the remainder of the market as soon as possible. According to producer group officials, market access for U.S. beef exports to Mexico has generally been very good, as evidenced by overall increases in trade. Both U.S. and Mexican industries plan to continue working together to resolve any potential trade disputes through industry negotiations. USTR notes that U.S. and Mexican beef and cattle industries are increasingly integrated, with benefits to producers, processors, and consumers in both countries. Corn is an important commodity in Mexico; in addition to being a dietary staple, white corn is the principal crop for many Mexican small farmers, and corn production has historically been a fundamental feature of Mexican rural culture. Consequently, NAFTA negotiations regarding the phase-out of import barriers for corn were particularly sensitive. Prior to NAFTA, Mexico restricted access to its corn market through import licensing requirements, and there was no guaranteed level of access for U.S. imports. During NAFTA negotiations, it was widely believed in Mexico that immediate increases in imports of U.S. corn would displace Mexican corn producers. As a result, NAFTA negotiators agreed to allow Mexico to replace its import licensing requirements with transitional TRQs that would be phased out over a 14-year period—the longest transition period set forth in the agreement. The United States has been one of the major foreign suppliers of yellow (feed) corn to Mexico, and U.S. exports to Mexico comprised 13 percent of all U.S. corn exports worldwide in 2003. Between 1994 and 2003, the volume of U.S. corn exports to Mexico increased by an average of 18.5 percent annually (see fig. 5). The value of exports to Mexico in 2003 totaled $651 million. Although Mexico's removal of restrictive import licensing requirements did away with a significant barrier to U.S. access to Mexico's corn market, a number of other factors have affected U.S.
exports before and after NAFTA's implementation. For example, in the early 1990s, Mexico lifted a ban on using corn to feed livestock, which immediately increased demand for imports of yellow corn from the United States; those imports had been declining for several years. In 2003, yellow feed corn exports comprised more than 80 percent of U.S. corn exports to Mexico. Additionally, in the years following NAFTA, Mexico has usually allowed higher levels of imports than are required under the NAFTA TRQs in order to ensure that domestic demand for corn is fully met. Thus, Mexico has generally applied much lower tariffs on these additional quantities than those set forth under the agreement. These more liberal market access policies for yellow (feed) corn imports are driven in part by a need to provide feed for Mexico's expanding livestock industries. Notwithstanding these policies toward feed corn imports, a USDA analysis of Mexico's corn market notes that imports of white corn (i.e., corn generally used directly for human consumption) from the United States have declined since 2000, partly because the Mexican government has provided marketing funds to domestic producers of white corn. Additionally, USDA reports that in a significant departure from past practice, Mexico levied the NAFTA-specified above-quota tariff rate of 72.6 percent on white corn in 2004. Mexico's tax on beverages sweetened with HFCS has also contributed to the decline in U.S. corn exports to Mexico. The tax has depressed Mexican production of HFCS, which is made from imported corn. U.S. exports of corn to Mexico are expected to increase significantly as Mexico eliminates the transitional TRQs in 2008. However, some industry groups noted concern about Mexico taking other steps to protect its sensitive domestic corn market. For example, one U.S. industry representative noted that it will be important for the U.S. government to ensure that Mexico does not use SPS requirements as a barrier to U.S. imports.
On the other hand, other observers note that an expanding economy in Mexico will increase consumer demand for meat and, in turn, continue to increase demand for U.S. corn imports as feed for Mexican livestock production. Additionally, certain farm groups in Mexico have argued that allowing duty-free imports of U.S. corn will lead to a total collapse of Mexican agriculture, and they have vowed to mount an unprecedented campaign to stop the last round of tariff eliminations. Mexican politicians who oppose NAFTA note the continuing economic distress in rural areas of Mexico and insist on renegotiating the agricultural provisions of the agreement to improve the conditions of Mexican farmers. Although the total elimination of already low Mexican tariffs on corn may not have much economic significance for U.S. producers, failure to comply with the final phase of tariff elimination may undercut support for NAFTA among U.S. producers who favored the agreement with the expectation that it would lead to genuinely free trade. Furthermore, U.S. trade officials have expressed serious reservations about any attempt to renegotiate the agricultural provisions of NAFTA because it could lead to demands to renegotiate other aspects of the agreement and undermine the agreement as a model for trade liberalization throughout the Western Hemisphere. Impediments confronted by U.S. HFCS exports to Mexico are related to difficulties encountered by Mexican cane sugar exports to the United States. Trade friction between the United States and Mexico over HFCS came to a head in 1997, when Mexico initiated an antidumping investigation of U.S. exports of this product. Based on the results of this investigation, Mexico imposed antidumping duties beginning in 1998. This triggered a lengthy WTO dispute settlement proceeding, in which the United States eventually prevailed in 2001.
Thereafter, Mexico eliminated its antidumping duties but imposed a tax on beverages made with any sweetener other than cane sugar, including HFCS. The United States has challenged Mexico’s beverage tax in the WTO, and that dispute is still pending. Mexico defends its beverage tax, noting that the United States has not complied with its market access commitments with respect to Mexican cane sugar. However, the U.S. government has rejected Mexico’s arguments linking these two issues. As shown in figure 6, U.S. exports of HFCS began to decline in 1999 after Mexico imposed the antidumping duties, and dropped to nearly zero after Mexico imposed the beverage tax in 2002. Market access issues began in 1997 when Mexico imposed preliminary antidumping duties on U.S. exports of HFCS. In 1997, Mexico’s National Chamber of Sugar and Alcohol Industries, the association of Mexico’s sugar producers, filed a petition in which it claimed that U.S. HFCS was being sold in Mexico at less than fair value and that these imports constituted a threat of material injury to Mexico’s sugar industry. As a result of these claims, the Mexican Ministry of the Economy responded by imposing antidumping duties on U.S. HFCS. In 1998, USTR invoked a WTO dispute proceeding to challenge Mexico’s action, and in 2000, a WTO panel ruled that Mexico’s imposition of antidumping duties on U.S. imports of HFCS was inconsistent with the requirements of the WTO Antidumping Agreement. At that time, Mexico agreed to implement the panel recommendation by September 22, 2000. However, on September 20, 2000, Mexico issued a new determination and concluded that there was a threat of material injury to the Mexican sugar industry and that it would maintain the antidumping duties. The United States maintained that Mexico’s new determination did not conform to the WTO panel’s recommendations and challenged this new determination before a WTO compliance panel. The WTO compliance panel agreed with the U.S. position. 
Mexico appealed this ruling. The WTO Appellate Body agreed with the compliance panel's conclusions and recommended that Mexico comply with its obligations under the WTO Antidumping Agreement. Mexico revoked its antidumping duties on HFCS in April 2002, but in January of that year the Mexican Congress had imposed a 20 percent tax on soft drinks and other beverages that use any sweetener other than cane sugar, which effectively shut U.S. HFCS out of the Mexican market. The Fox administration acted to suspend the beverage tax from March 6 through September 2002. Mexico's Supreme Court, however, ruled the suspension to be unconstitutional and reinstated the tax effective July 16, 2002. The United States argues that the HFCS beverage tax is inconsistent with Mexico's obligations under the WTO, which calls for treating imported products no less favorably than comparable domestic products. The United States considers the beverage tax inconsistent because it applies to beverages sweetened with imported HFCS but not to products sweetened with Mexican cane sugar. In June 2004, the United States challenged Mexico's beverage tax in the WTO. The dispute over Mexico's beverage tax is pending before a WTO panel. The sugar industry would like to negotiate a resolution to the sweetener dispute. At this time, private meetings have taken place between sugar producer groups in the United States and Mexico, and the industries are working to reach a resolution before 2008. Prior to 1994, Mexico levied a duty of 20 percent on U.S. pork, but under NAFTA, Mexico agreed to establish TRQs to be phased out over a 9-year period that ended on January 1, 2003. For several categories of pork products, U.S. pork exports to Mexico greatly exceeded the quantitative limits of the TRQs, and Mexico generally allowed the additional product to enter without applying the over-quota tariff.
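The two-tier duty structure of a TRQ, as described above, applies a low (often zero) rate to volumes within the quota and a higher rate to volumes above it. The sketch below illustrates that mechanic; the quota level, rates, and unit value are hypothetical, not actual NAFTA schedule figures.

```python
# Sketch of how a tariff-rate quota (TRQ) applies two duty rates:
# a low within-quota rate up to the quota volume and a higher
# over-quota rate on the remainder. All numbers are hypothetical.

def trq_duty(import_volume, quota, within_rate, over_rate, unit_value):
    """Total ad valorem duty owed on a shipment under a two-tier TRQ."""
    within = min(import_volume, quota)          # volume taxed at the low rate
    over = max(import_volume - quota, 0.0)      # volume taxed at the high rate
    return (within * within_rate + over * over_rate) * unit_value

# Hypothetical: 120 tons against a 100-ton quota, 0% within-quota,
# 20% over-quota, at $1,000 per ton
duty = trq_duty(120.0, 100.0, 0.0, 0.20, 1000.0)
print(duty)  # 4000.0
```

When an importing government waives the over-quota rate, as Mexico often did for pork and feed corn, the effective duty collapses to the within-quota rate regardless of volume.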
Additionally, NAFTA permitted Mexico to establish a special agricultural safeguard tariff rate quota for certain cuts of pork, under which Mexico can apply higher tariffs if imports of that product exceed specified levels. If imports rise above that level, the duty reverts to the lower of the current Most Favored Nation or pre-NAFTA levels. The safeguard levels expanded 3 percent each year until the provision expired on January 1, 2003. U.S. pork exports to Mexico have increased significantly since NAFTA, with the total volume of U.S. exports rising by an average of 18.5 percent annually between 1994 and 2003 (see fig. 7). Exports to Mexico accounted for 22.3 percent of U.S. pork exports worldwide, and U.S. exports to Mexico totaled about $217 million in 2003. In November 2002, Mexican producers submitted a dumping complaint to the Mexican government, alleging that U.S. exporters were engaging in price discrimination by selling pork to Mexican buyers at lower prices than they would sell to buyers in other countries. On January 7, 2003, Mexico initiated the antidumping investigation against U.S. pork. According to U.S. pork producers, the Mexican association that requested the investigation does not represent the Mexican pork industry and, therefore, did not have a legal right to make the request. The producers of pork in Mexico—the slaughterhouses and the packers—stated that they do not want the investigation to proceed and asked that it be terminated. On May 28, 2004, the Mexican government terminated the January 2003 investigation and initiated a more limited antidumping investigation on hams only. Even after the antidumping case was filed against U.S. pork, Mexico continued to be the second-largest market for U.S. pork exports. Furthermore, USDA officials stated that any decreases in pork exports due to the case were more than offset by the increase in demand for pork following Mexico's ban on U.S. beef products after a case of BSE was discovered in the United States.
In addition, USDA noted that demand for U.S. pork exports to Mexico correlates closely with income growth in that country (i.e., the rise of the middle class). Thus, while Mexico's tariff reductions have been an important contributing factor to the growth of U.S. pork exports to Mexico, the far more significant drivers of export growth have been the rapid recovery of the Mexican economy following its recession in 1995 and continuing income and economic growth since then. The U.S. government has questioned the basis of the May 2004 ham antidumping investigation. Furthermore, USTR asserts that the United States is actively working to prevent potential actions that Mexico may take on exports of U.S. pork. USTR officials believe that Mexico's January 2003 initiation of a pork dumping investigation and May 2004 initiation of a ham dumping investigation may violate WTO rules, and they question the statistics being used by the Mexican government to determine the level of imports. USTR has engaged the Mexican government to terminate the ham-dumping investigation, to resolve differences on trade statistics, and to seek alternatives to trade-restrictive measures. Despite the antidumping dispute, Mexico and the United States have pledged to build on their long history of cooperation regarding swine and pork bilateral trade on the basis of equal and mutual benefit. Prior to NAFTA, Mexico restricted access to its poultry market through import licensing requirements and 10 percent tariffs on imports. As with other products subject to import licensing, Mexico replaced these barriers with TRQs as part of its NAFTA commitments. NAFTA called for the TRQs to be phased out over a 9-year period, with duty-free access for U.S. poultry by 2003.
Per NAFTA, the larger portion of the tariff cuts was to be implemented in the latter half of the phase-out period—a process referred to as "backloading." Mechanically deboned meat, which is used by Mexican sausage manufacturers, comprises the most significant portion of U.S. poultry exports to Mexico. Since NAFTA, the Mexican government has chosen not to impose the above-quota tariff on this commodity due to the Mexican sausage industry's high demand for the product, and, as a result, U.S. exports have routinely exceeded the TRQ levels set forth in the agreement. Between 1994 and 2003, imports of U.S. dark meat chicken parts have also generally exceeded the transitional TRQ levels. The United States is the major foreign poultry supplier to Mexico's market, and Mexico is typically among the top three markets worldwide for U.S. poultry exports. From 1994 to 2003, the volume of U.S. poultry meat exports to Mexico increased by an average of 5.7 percent annually (see fig. 8). U.S. exports to Mexico accounted for 11.4 percent of U.S. poultry meat exports worldwide, and the value of U.S. poultry exports to Mexico totaled about $260 million in 2003. Demand for certain U.S. poultry products in Mexico was driven, in part, by insufficient domestic poultry production in Mexico. Additionally, because U.S. domestic demand for dark meat is low relative to Mexico's consumer demand, U.S. producers have been able to keep dark poultry meat prices relatively low and thus attractive to Mexican buyers. Over the years since NAFTA's implementation, Mexico's domestic poultry industry has expanded, and concern about U.S. competition among Mexican producers has increased commensurately. As the end of Mexico's transitional TRQ on poultry products drew near in 2002, the Mexican poultry industry petitioned the Mexican government to apply a safeguard on imports of U.S. chicken leg quarters.
The petitioners argued that the end of the TRQ would result in an import surge from the United States and injury to Mexico’s domestic industry. Article 703 of NAFTA would have permitted Mexico to impose duties of up to 240 percent on U.S. poultry imports, if NAFTA’s conditions for a safeguard were met. Rather than face such potentially high tariffs and a disruption to U.S. exports, U.S. producers, in industry-to-industry negotiations with the Mexican petitioners, agreed to a more favorable regime. In July 2003, Mexico issued a final safeguard determination that imposed a TRQ which allows the quota to expand each calendar year through 2007, at which point the duties will be eliminated. The within-quota duty is zero, and the initial over-quota duty was 98.8 percent, which declines each year until reaching zero on January 1, 2008. The U.S. and Mexican governments agreed on a package of compensation measures in response to the safeguard. In particular, Mexico agreed not to impose any other restrictions on U.S. poultry products and to eliminate certain SPS restrictions. The U.S. government also agreed, following consultations with U.S. industry, to consent to Mexico’s application of the safeguard past the expiration of the transition period. Some poultry industry representatives noted that settlement of the poultry safeguard issue brought some initial criticism from other U.S. producer groups, who maintained that the settlement set a precedent for Mexico to force renegotiation of its NAFTA commitments. However, USTR officials stated that the United States will not consider any renegotiation or rescission of Mexico’s NAFTA commitments and views the poultry settlement as a unique workable solution that forestalled possible significant disruption to U.S. exports. They doubted a similar outcome could be achieved in other industries. USDA reports that domestic poultry production in Mexico continues to expand. 
USDA and industry representatives said that the additional protection established under the safeguard settlement will give Mexican producers time to prepare for free trade. USDA also notes that demand for poultry, combined with an expanding Mexican economy and the removal of the ban on some U.S. poultry exports, will continue to increase demand for U.S. poultry products. Nevertheless, some U.S. industry representatives remain concerned and noted that once the TRQ expires, Mexican authorities may employ other measures, such as sanitary restrictions, as a means to constrain U.S. access to Mexico's market. The United States is the primary supplier of rice to Mexico, largely because, since the early 1990s, Mexico has either banned imports of rice from Asian countries or subjected them to strict phytosanitary standards. The United States exports both rough (i.e., unprocessed) rice and milled (i.e., processed) rice to Mexico, although demand for rough rice is much higher. As a result of the lack of supply from Asian producers and the high demand for rough rice, rough rice accounted for about 90 percent of the total volume of U.S. rice exports to Mexico in 2003. Prior to NAFTA's implementation, Mexico levied duties of 20 percent on brown and milled rice and 10 percent on rough rice. Under NAFTA, Mexico agreed to phase out rice tariffs over a 9-year period, with all tariffs to be eliminated by 2003. With the phasing out of tariffs on rice, the volume of U.S. exports has increased by an average of 14.4 percent annually from 1994 to 2003 (see fig. 9). U.S. rice exports to Mexico accounted for 17.7 percent of U.S. rice exports worldwide, and exports to Mexico totaled about $140 million in 2003. In December 2000, Mexico initiated an antidumping investigation on imports of long-grain milled rice from the United States. Mexican rice millers (who process rice that competes with U.S.
milled rice imports) alleged that U.S. milled rice was being sold in Mexico at prices less than its fair market value. The Mexican government subsequently levied antidumping duties in April 2000 and June 2002 on specific U.S. rice imports. A U.S. rice industry representative told us that the U.S. rice industry attempted to resolve the issue through industry-to-industry negotiations but that the negotiations were unsuccessful. Following the industry negotiations, the United States formally requested WTO consultations with Mexico in June 2003. These consultations were held from July 31 through August 1, 2003, on the basis of concerns regarding Mexico’s methodology for determining injury to the domestic market and for calculating dumping margins. WTO consultations failed to resolve the issue, and in February 2004 a WTO dispute panel was formed to resolve the case. The U.S. rice industry representative said that several other U.S. commodity groups were supporting this case in the WTO because it deals with broad issues related to Mexico’s application of its antidumping law that could affect their exports as well. A ruling on the WTO dispute is expected in April 2005. Notwithstanding the outcome of the case, U.S. rice exporters generally benefit from preferential access under NAFTA and Asian exporters’ restricted access to the Mexican market. USDA reports indicate that U.S. exporters could face increased competition in the milled rice market in Mexico should Asian exporters satisfactorily address Mexico’s phytosanitary concerns. Recognizing the challenges and anticipating the opportunities that market reforms and free trade posed for its farm sector, the Mexican government has implemented several programs to help its farmers adjust to changing economic conditions. The three main support programs implemented since the early 1990s are PROCAMPO, marketing support, and Alianza.
PROCAMPO (Programa de Apoyos Directos al Campo) Budget: PROCAMPO is the largest agricultural support program, accounting for 35 percent of the budget of Mexico’s Agriculture Ministry (SAGARPA) in 2003, around $1.27 billion. Goal: PROCAMPO is a 15-year program that provides transitional income support to Mexican agriculture as it undergoes structural changes in response to market conditions and the phasing out of trade barriers under NAFTA. The political objective is to manage the acceptability of the free trade agreement among farmers and to prevent extensive levels of poverty and out-migration. How it operates: The program makes payments on a per-hectare basis to any producer who cultivates a licit crop on eligible land or utilizes that land for livestock or forestry production or some ecological project. Eligible land is defined as that which was cultivated with corn, sorghum, beans, wheat, barley, cotton, safflower, soybeans, or rice in any of the three agricultural cycles before August 1993. There are three types of PROCAMPO payments: preferential, traditional, and capitalized. The preferential payment is for producers with fewer than 5 hectares of nonirrigated land who produce only in the spring-summer cycle. For the spring-summer 2003 agricultural cycle, the payment level equaled 1,050 Mexican pesos ($100) per hectare. The traditional payment is for the rest of the producers. It was 905 pesos ($86) per hectare in 2003. The capitalized payment is made under certain conditions to producers who request the sum of their future PROCAMPO payments. Beneficiaries: During 2001, 2.7 million producers with a total of 13.4 million hectares received PROCAMPO payments. Around 75 percent of farmers in the PROCAMPO database have less than 5 hectares of land. Changes in the program: There was a proposal in November 2002, as part of a broader Mexican government initiative for rural support, to update the payments according to yields.
However, this action was never put into practice. Another program will be created for producers who are not currently registered in PROCAMPO, who also may be considered for assistance to smooth out income fluctuations. Also, the National Agreement’s emergency spending proposal contains 650 million pesos ($62 million) for the inclusion of additional land on the PROCAMPO roster. According to Mexican officials, even though new producers are enrolling, the total area benefiting has not changed because new producers fill the place left by former producers whose lands are no longer eligible for support. Impact: PROCAMPO has become an important source of income for some rural households, and it may have income multiplier effects when recipients put the money they receive to work to generate further income. The Mexican government reported that between 1989 and 2002 income from agricultural businesses lost importance, while other sources, such as government support programs, remittances, salaries, and wages, increased their share of rural households’ income. Scholars have found that PROCAMPO payments forestalled the income decline of subsistence farmers. In addition, scholars found that PROCAMPO payments generated an income multiplier effect, meaning the payments were used productively and generated additional income for rural households. However, scholars believe that the level of PROCAMPO payments was not large enough to offset the risks of switching to more profitable crops, which is one of the goals of the marketing support program (discussed below).
Marketing Support and Regional Market Development Program (Programa de Apoyos Directos al Productor por Excedentes de Comercialización para Reconversión Productiva, Integración de Cadenas Agroalimentarias y Atención a Factores Críticos, formerly Programa de Apoyos a la Comercialización y Desarrollo de Mercados Regionales) Budget: The marketing support program is the second largest agricultural program, accounting for about 16 percent of SAGARPA’s budget. For 2003, the budget was around $580 million. Goal: The program supports various aspects of agro-marketing and commerce. The Agricultural Marketing Board (ASERCA) was created to replace the direct intervention that the government formerly conducted through a parastatal state trading enterprise for sorghum and wheat. How it operates: The program has seven subprograms: (1) direct payment to producers, (2) price supports, (3) collateral loans, (4) crop conversion, (5) other types of support, (6) slaughterhouse certification, and (7) special support for corn. The major subprogram is the direct payment to producers. This subprogram provides payments to producers of rice, corn, wheat, sorghum, barley, canola, copra, peanuts, cotton, and safflower in certain areas, usually on a per-ton basis. Beneficiaries: Beneficiaries of the marketing support program on average have more land than PROCAMPO payment recipients. According to Mexican government documents, around 22 percent of the respondents to its annual survey of the marketing support program have fewer than 5 hectares, while almost half have more than 15 hectares. In 2004, the program supported 240,000 producers. Changes in the program: In 2003, Mexican farmers asked for support that would “mirror” what was provided to U.S. farmers under the U.S. Farm Bill, which led the Mexican government to establish “target income” support.
The new program has seven subprograms, including direct payments for (1) target income, (2) slaughtering in certified slaughterhouses, (3) accessing domestic forages, (4) crop conversion, (5) price hedging, (6) pledging, and (7) other specified activities. Additionally, barley, copra, and peanuts are no longer on the support list. For a period of 5 years, the government plans to guarantee a target income, expressed per ton, for producers of certain grains and oilseeds. Nearly 17 billion Mexican pesos ($1.6 billion) have been designated for this program. In determining whether a producer has reached the target income, the government evaluates the producer’s income from selling on the market; if that income falls short of the target, the government provides additional support to ensure that the farmer’s income reaches the set target. Under the former program, just a few states were able to request support, while the new program makes payments to producers with commercial surpluses in all states. Impact: The program has had an impact on crop patterns and migration. The “target price” program has led to concentration in basic crop production instead of crop diversification. Mexican officials hope the new “target income” approach will help farmers be more responsive to market conditions. A Mexican official document points out that the program is an important factor in mitigating migration from the countryside, but the document also recognizes that the program did not succeed in integrating farmers into the marketing chain. Thirty percent of the respondents to the program’s annual survey said they would have sought employment elsewhere if they had not received this assistance. A USDA study of grain production finds that the marketing supports, along with the constitutional reforms that allow the rental of ejidal lands, have facilitated the emergence of large-scale farms of corn and dried beans.
Alianza (Alianza para el Campo) Budget: Alianza accounts for about 15 percent of SAGARPA’s budget, about $570 million in 2003. Goal: The goals of the programs are to boost agricultural productivity and promote the transition to higher value crops. The objectives include increasing producer income, improving the balance of trade, achieving an agricultural production growth rate higher than the population growth rate, and supporting the overall development of rural communities. How it operates: The programs were grouped under four categories: agriculture, livestock, phytosanitary measures, and technology transfer. Activities include better use of water and fertilizer, adoption of improved seeds, better disease and pest control practices, improved genetic quality of crops and livestock, improved cattle stocks, better health and sanitation practices, and pasture development and related infrastructure development for increased production. These programs are decentralized and are financed jointly by federal and state governments and producers. Beneficiaries: An evaluation by the United Nations Food and Agriculture Organization (FAO) found that the program serves farmers of various socioeconomic backgrounds, educational levels, ages, farm sizes, and income levels. The FAO evaluation also found that medium-size producers have benefited the most from the agriculture program, and 24 percent of small farmers have benefited. Changes: In 2002, for the first time, general objectives were established for all the subprograms. These objectives are to (1) increase income, (2) diversify employment options, (3) increase investment in rural development, (4) strengthen producer group organizations, and (5) advance sanitary standards. To achieve these objectives, strategies were established to integrate standards, bring together regional producer groups, and discuss important issues such as land and water use.
Also in 2002, there was recognition by the government of a need to transfer technology and investment to the rural sector. Impact: The FAO evaluation pointed out some benefits from Alianza. For example, technology helped certain areas gain access to water. Alianza also created a forum to consolidate processes of participation and implementation of different policies for the agricultural sector, allowing the participation of the state and producers in the conversation. The same evaluation pointed out that the additional employment generated by the program was modest. While U.S. development assistance to Mexico has been limited, U.S. agencies have undertaken numerous collaborative efforts that benefit both U.S. and Mexican agricultural interests. Most of these efforts have been led by the United States Department of Agriculture (USDA), in conjunction with its Mexican counterparts, in support of overall agricultural production and trade objectives. USDA’s Foreign Agricultural Service officials noted that historically USDA has had a very strong collaborative relationship with Mexico’s Ministry of Agriculture. USDA’s Animal and Plant Health Inspection Service (APHIS) has invested more funds in collaborative efforts with Mexico than any other USDA agency, about $280 million, since NAFTA was implemented. Besides APHIS’s collaborative activities, six other USDA agencies—the Economic Research Service (ERS), the Agricultural Research Service (ARS), the Foreign Agricultural Service/International Cooperation and Development (FAS/ICD), the Agricultural Marketing Service (AMS), the Food Safety and Inspection Service (FSIS), and the National Agricultural Statistics Service (NASS)—have participated in agricultural collaborative projects in Mexico. However, funding for collaborative activities in Mexico from these agencies has been very modest, about $7.5 million combined over the past 10 years.
In addition to collaborative efforts implemented by USDA agencies, the Food and Drug Administration (FDA) has also had a role in activities that benefit Mexican agriculture. In the course of fulfilling its responsibilities of protecting and promoting U.S. agricultural health, APHIS has collaborated with Mexico for over 50 years (see table 3). APHIS has also implemented programs that facilitate agricultural trade from Mexico, such as its preclearance programs. Furthermore, APHIS has been by far the U.S. agency that has invested the most money in agricultural collaborative efforts with Mexico, the bulk of it on its Medfly and Screwworm eradication programs. APHIS reported spending a total of about $286 million on its plant and animal health activities in Mexico since the implementation of NAFTA. Since 1996, ERS has spent $2.5 million in funding to implement the Emerging Markets Program to enhance Mexico’s capacity to collect, analyze, and disseminate agricultural information. ERS officials said that Mexico’s enhanced data-gathering and reporting capability also benefits the USDA because reliable information allows the agency to make better informed decisions on bilateral agricultural trade. For a full list and descriptions of ERS activities, see table 4. In June 1998, ARS and Mexico’s Agriculture Research Institute, Instituto Nacional de Investigaciones Forestales, Agrícolas y Pecuarias (INIFAP), signed a Letter of Intent to promote U.S.–Mexico collaboration in agricultural research programs. Since then, ARS has spent about $2.3 million on several collaborative projects involving ARS and Mexican scientists. According to ARS officials, it is important for the United States that scientists in Mexico have academic backgrounds similar to their American counterparts’ in order to reach common solutions to problems that affect agriculture in both countries. For a full list and descriptions of ARS activities, see table 5.
Over the past 10 years, FAS/ICD has spent a total of $1.8 million on its Scientific Cooperation Research Program (SCRP) and Cochran Fellowship Program (CFP). Under SCRP, U.S. and Mexican scientists have conducted joint research and scientific exchanges for over 20 years to help solve mutual food, agricultural, and environmental problems. Since NAFTA was enacted, SCRP has sponsored 32 joint agricultural research projects among U.S. and Mexican universities and other research institutions, of which about half have been related to trade. In addition, FAS administers CFP, which provides U.S.-based agricultural training opportunities for senior and midlevel specialists and administrators from the Mexican public and private sectors who are concerned with agricultural trade, agribusiness development, management, policy, and marketing. For a full list and descriptions of FAS/ICD activities, see table 6. AMS has spent about $548,200 since 1994 on collaborative activities with Mexico. Most AMS activities consist of providing training to Mexican fresh fruit and vegetable inspectors to help them meet U.S. inspection standards. For a full list and descriptions of AMS agricultural collaborative activities, see table 7. NASS has been involved in a few collaborative activities in Mexico since 1997. Using the Emerging Markets Program, NASS has spent $361,000 to help improve the agricultural statistics system and methodology in Mexico. As part of this assistance, NASS provided training to analysts from Mexico’s agricultural statistics service, Servicio de Información y Estadística Agroalimentaria y Pesquera (SIAP). This training focused on methodology for preparing official agricultural statistics. For a full list and descriptions of NASS activities, see table 8. Since 2001, FSIS has implemented a small number of activities valued at $298,412 under the Emerging Markets Program in Mexico.
Most of these activities consist of providing training and technical assistance to Mexican meat and poultry exporters to help them meet U.S. import regulations. For a full list and descriptions of FSIS activities, see table 9. In its efforts to protect U.S. consumers, FDA has also undertaken activities that benefit Mexican agricultural producers. FDA’s approach has been to work with Mexican government agencies to help them establish effective food safety regulatory, inspection, and enforcement infrastructure, focusing particularly on microbiological hazards. For example, if a food-borne disease outbreak resulting from a Mexican import occurs, FDA determines the cause and works with the Mexican government to try to resolve the problem and develop a system to prevent future outbreaks. FDA officials explained that in 1997 their agency launched its Food Safety Initiative (FSI) to improve the safety of the U.S. food supply, which includes imported foods. Because Mexico exports around $3 billion in fruits and vegetables to the United States each year, an important FSI component has been to help Mexican commodity exporters become more familiar with FDA regulatory requirements and to improve their ability to comply with U.S. food safety regulations. FDA activities under FSI have primarily involved a series of training programs since 2002 for Mexican fruit and vegetable exporters, academics, and government officials. In addition to activities under FSI, FDA established the Southwest Import District Office in 1999 to enhance food inspection activities along the Mexican border. The Southwest Import District inspects imported goods entering the United States through the Mexican border from Brownsville, Texas, to San Diego, California.
During the last 4 years, FDA’s Center for Veterinary Medicine has also participated in training and assisted in the establishment of a program in four agricultural states of Mexico to monitor pathogens that are transmitted via contaminated food. FDA reported it has spent about $1.8 million for its activities related to agricultural production in Mexico since NAFTA went into effect. The Partnership for Prosperity (P4P) initiative has a few collaborative programs that are oriented towards agriculture. On the U.S. side, USDA’s FAS, OPIC, and USAID have played key roles in implementing the programs. Overall, P4P seeks to create a public-private alliance and develop a new model for U.S.–Mexican bilateral collaboration to promote development, particularly in regions of Mexico where economic growth has lagged and is fueling migration. No new funds were specifically allocated to P4P by either government since the program’s inception; instead, the U.S. government has sought to refocus resources already devoted to Mexico to create a more efficient collaborative network. According to State Department and USDA officials, since its establishment, P4P has become the “umbrella” under which development collaboration between the United States and Mexico takes place. USDA’s FAS has worked closely with several Mexican government agencies, including Mexico’s new rural lending institution, Financiera Rural, to incorporate P4P’s broader approach to rural development and assistance to small farmers. For example, FAS arranged for USAID to use its U.S. fellowship program to place one of its participants at Financiera Rural. Through this fellowship, Financiera Rural hosted a professor from the University of Minnesota who assisted the agency in developing a strategic plan to incorporate the new paradigm for rural development proposed in the P4P conferences, acknowledging that Financiera Rural is better suited to operate as a second-tier lender. 
This strategic plan calls for the development of rural financial lending intermediaries in Mexico, which will be fostered using a model that complies with Mexico’s legal framework, determined by a study to be conducted jointly by the Financiera Rural and the International Development Bank. The new strategic plan also calls for the agency to fund any productive endeavor in the countryside, not only agricultural production. Activities could include such things as eco-tourism, rural gas stations, and transportation services. According to Financiera Rural officials, the guidance provided by the USAID fellow has positively contributed to Financiera Rural operations because funding and access to these types of resources and knowledge are not otherwise available in Mexico. Furthermore, the fellowship has provided support in trying to resolve the issue of limited credit availability—one of Mexico’s most significant structural problems. According to U.S. Embassy officials in Mexico, one of the most significant accomplishments under P4P has been the bilateral agreement to allow the Overseas Private Investment Corporation (OPIC) to operate and provide financing in Mexico. OPIC’s mission is to help U.S. businesses invest overseas, to foster economic development in new and emerging markets, and to complement the private sector in managing the risks associated with foreign direct investment. According to OPIC officials, for over 30 years there had been resistance by the Mexican government to allow the agency to operate in Mexico because of concerns over sovereignty. Mexico did not want a U.S. government agency to provide loans in Mexico because that would mean that the agency could ask for collateral and possibly own Mexican property in the case of default on a loan. However, in 2003, an agreement was reached through P4P to allow OPIC to operate in Mexico. 
Since the bilateral agreement was signed, OPIC has begun to provide financing for five projects in Mexico, including one related to agriculture. For the agriculture-related project, OPIC approved a $3.3 million loan to Southern Valley Fruit and Vegetable, Inc., of Georgia to develop a new farming project in Mexico that will serve as a winter division of the company and will grow, package, and ship cucumbers, squash, eggplant, and zucchini. The project will employ approximately 300 laborers and professionals in an area of high unemployment. Southern Valley has committed over $2.2 million in equity to the project. OPIC officials indicated that they expect their lending portfolio to grow in Mexico. USAID plans to expand its activities in Mexico to support rural development. USAID officials explained that, overall, USAID has not had a large presence in Mexico, and historically funding for activities in Mexico has been limited. Furthermore, USAID activities in Mexico have typically been in the areas of population, democracy, governance, health, and micro-financing, instead of agriculture. However, in 2004 USAID received an added $10.2 million specifically for rural development in Mexico, which brought its budget to $32 million. USAID is now working with other U.S. and Mexican agencies to develop projects to assist rural areas of Mexico. In recent months USAID has initiated several activities targeting rural development, including: Small Farmer Support/Rural Business Development: Through this activity, USAID is providing targeted business development and marketing services to agricultural producer organizations and cooperatives in the southern rural states of Oaxaca and Chiapas.
Connecting Small Producers with Market Opportunities: In partnership with Michigan State University and USDA, USAID launched this activity in late 2004, designed to allow small and medium producers to better compete for opportunities in the growing domestic market for food and produce. Rural Finance: In late 2004, USAID expanded what had been an urban-focused micro-enterprise finance program to include rural finance as a priority activity. University Partnerships: In 2004, USAID focused the ongoing Training, Internships, Exchanges, and Scholarships annual partnership competition on proposals that would spur agribusiness and other issues tied to rural economic growth. In August 2004, USAID awarded five new partnerships directly related to rural development. The following are GAO’s comments on the State Department’s letter dated March 16, 2005. 1. We revised the title to make clear that we are not suggesting that Mexico has failed to implement its obligations under NAFTA’s agricultural provisions. 2. We do not believe that we overstate the opposition to NAFTA in Mexico. As noted in the report, U.S. and Mexican officials have expressed concerns about how negative perceptions of NAFTA may affect successful implementation of the agreement. In addition, the report recalls the difficulties experienced in Mexico at the time of tariff eliminations under NAFTA in 2003. 3. We changed language in the two locations of the report cited by the State Department to clarify that as a matter of course the United States has not committed to providing technical assistance to its post-NAFTA free trade partners. The report now states simply that the United States has recently provided such assistance. 4. The points about the P4P Initiative noted by the State Department are also mentioned in our report. We did not consider it necessary to make revisions to address these points. 5.
In our recommendations we identify the Secretary of State as the head of one of the agencies taking the lead on P4P activities. We have added a footnote in appendix V on P4P activities to clarify the roles of the Departments of Commerce and Treasury. While these departments also have a leading role in P4P activities, they are not directly involved in activities related to rural development or the agricultural sector, and therefore our recommendation is not addressed to these agencies. 6. Our review was concluded before the Partnership for Prosperity working groups cited by the State Department took place. These developments may represent the first steps in addressing our recommendation. 7. We revised appendix V of the report to include key elements of the information provided on recent USAID activities. In addition to those listed above, Ming Chen, Francisco Enriquez, Matthew Helm, Sona Kalapura, Jamie McDonald, Marisela Perez, and Jonathan Rose made key contributions to this report.

In 1994, the North American Free Trade Agreement (NAFTA) created the world's largest free trade area and, among other things, reduced or eliminated barriers for U.S. agricultural exports to Mexico's vast and growing markets. As part of a body of GAO work on NAFTA issues, this report (1) identifies progress made and difficulties encountered in gaining market access for U.S. agricultural exports to Mexico; (2) describes Mexico's response to changes brought by agricultural trade liberalization and challenges to the successful implementation of NAFTA; and (3) examines collaborative activities and assesses strategies to support Mexico's transition to liberalized agricultural trade under NAFTA. U.S. agricultural exports have made progress in gaining greater access to Mexico's market as Mexico has phased out barriers to most U.S. agricultural products, and only a handful of tariffs remain to be eliminated in 2008. Total U.S.
agricultural exports to Mexico grew from $4.1 billion in 1993 to $7.9 billion in 2003. Despite progress, some commodities still have difficulties gaining access to the Mexican market. GAO found that Mexico's use of antidumping measures, plant and animal health requirements, safeguards, and other nontariff trade barriers, such as consumption taxes, presented the most significant market access issues for U.S. agricultural exports to Mexico. Mexico has put in place several programs to help farmers adjust to trade liberalization, but structural problems, such as lack of rural credit, continue to impede growth in rural areas, presenting challenges to full implementation of NAFTA. Lagging rural development fuels arguments that NAFTA has hurt small farmers, although studies, including some Mexican studies, do not support this conclusion. Opponents of NAFTA want to block further tariff eliminations and are demanding renegotiation of NAFTA's agricultural provisions. Concerned about such opposition, U.S. officials acknowledged the need to promote the benefits of NAFTA, while seeking ways to help Mexico address its rural development issues. Historically, U.S. agencies have undertaken many agriculture-related collaborative efforts with Mexico. Since 2001, U.S.-Mexico development activities have taken place under the Partnership for Prosperity (P4P) Initiative to promote development in parts of Mexico where economic growth has lagged. Recognizing the importance of rural development to the success of NAFTA, Department of State and USDA strategies for Mexico call for building on collaborative activities under P4P to pursue the related goals of rural development and trade liberalization under NAFTA; however, the P4P action plans do not set forth specific strategies and activities that could be used to achieve these goals.
Global disease eradication and elimination campaigns are initiated, primarily by WHO, to concentrate and mobilize resources from both affected and donor countries. WHO provides recommendations for disease eradication and elimination to its governing body, the World Health Assembly, based on two general criteria—scientific feasibility and the level of political support by endemic and donor countries. Formal campaigns were initiated against dracunculiasis and leprosy in 1991, and against polio and lymphatic filariasis in 1988 and 1997, respectively. Regional or subregional campaigns are also underway against measles, onchocerciasis, and Chagas’ disease. Disease eradication and elimination efforts are normally implemented by national governments of the affected countries. Developing countries typically receive assistance for these efforts from bilateral and multilateral donors, nongovernmental organizations, and the private sector. In April 1997, WHO provided the House International Relations Committee with estimated costs and target dates for eradicating or eliminating the seven diseases. Subsequently, WHO revised some of the costs and time frames based on more recent information. We also made some adjustments for consistency among the figures. Our review focuses on the estimates that WHO provided to us as of December 1997. WHO officials estimated that about $7.5 billion would be needed to eradicate or eliminate the seven targeted diseases. Developing costs and time frames for these efforts is difficult due to challenges in gathering and verifying data from countries with minimal health infrastructure. Unpredictable and unstable country conditions, such as civil unrest, further complicate efforts to project how much these efforts will cost and how much time is needed. Table 1 provides a breakdown of costs and time frames for eradicating or eliminating each disease. 
To assess the soundness of WHO’s estimated costs and time frames, we met with the WHO officials responsible for preparing them and with other international health experts who discussed the factors that should be considered when estimating how much disease eradication or elimination will cost and how time frames are established. Following consultation with WHO and other experts, we determined five overall factors to be considered for estimating costs. These experts also provided information on how targets are developed and the variable circumstances that may affect time frames. We used this information to assess whether the data underlying WHO’s estimates were sound. In addition to WHO, the experts we consulted included officials from the Pan American Health Organization (PAHO), the U.S. Agency for International Development (USAID), the U.S. Centers for Disease Control and Prevention (CDC), the Carter Center’s Global 2000 health program, the Johns Hopkins University, and Emory University to obtain their views on WHO’s estimates. Appendix IX contains a detailed description of our scope and methodology. WHO officials and other experts identified the following as the key factors to consider in estimating direct costs for eradicating or eliminating diseases: (1) the funds needed to purchase the required intervention products, such as vaccines, drugs, insecticides, or water filters; (2) the prevalence and incidence of the disease and the population targeted for intervention; (3) the administrative costs for delivering products to the target population (for example, transportation, setting up local infrastructure, administering vaccines or treatment, spraying, and technical assistance); (4) the costs for surveillance activities, such as diagnosing the disease, testing blood or other specimens at laboratories, and monitoring and reporting disease incidence; and (5) for eradication, the costs of certifying that each country is free of the disease. 
We focused our assessment primarily on these five factors. WHO addressed all five factors in developing its cost estimates, except for the measles estimate, which did not include certification costs. The completeness of the data underlying the estimates varies by disease. Estimates for those diseases with long-standing campaigns that are closest to eradication or elimination—dracunculiasis, polio, and leprosy—are more complete, and costs are based on actual experience in endemic countries. For the other diseases, WHO is still gathering data and refining its assumptions. For several diseases, products are donated and are not included in projected costs. Examples include nylon filters donated by Dupont Corporation and Precision Fabrics Group for controlling dracunculiasis, donations of ivermectin by the Merck Company for the onchocerciasis program, and donations of albendazole by SmithKline Beecham for treating lymphatic filariasis. The Nippon Foundation of Japan also funds the drugs used for leprosy treatment. WHO establishes time frames primarily to gain commitment and mobilize resources from endemic and donor countries. WHO bases time frame estimates on the technical feasibility of reaching target populations over a period of time and an assessment of the commitment of endemic and donor countries. As part of that assessment WHO considers the economic and political conditions in endemic countries that could affect their ability to carry out disease campaigns. As with costs, time frames for diseases expected to be eradicated or eliminated within 5 to 10 years are considered more accurate than for those with later target dates because of the unavailability of data and the difficulty of predicting commitment levels and country conditions over time. The following sections describe in more detail WHO’s cost and time frame estimates for eradicating or eliminating each of the seven diseases. 
WHO’s cost estimate for eradicating dracunculiasis included data on each of the five key factors and appears to be sound. The cost data associated with each element are based on historical data from community-based control programs underway since 1980. WHO had previously set target dates of 1995 and the year 2000 for eradication, but continuing civil unrest in some endemic areas precluded meeting those dates. WHO now expects that all countries except Nigeria and Sudan will be free of dracunculiasis by 2005 at the latest; assuming safe access to endemic areas and appropriate funding, WHO officials said this goal could be reached by 2002. WHO expects that transmission of the disease will be interrupted in Nigeria and Sudan by 2010, provided that safe access and funding conditions can be met. WHO has prepared a biennial estimate of the funds needed through 2011, including certification costs. Experts we interviewed agreed that eradicating dracunculiasis is generally feasible within the time frame and cost estimate established by WHO. In fact, officials from CDC and the Carter Center’s Global 2000 program believe that dracunculiasis will be eradicated in some countries even sooner than WHO estimated and costs will therefore be lower than WHO’s projections. However, one expert cautioned that continuing instability in the region could extend the projected time frame. WHO’s cost estimate for eradicating polio is generally sound and included well-developed cost data on each of the five key factors based on historical experience in controlling the disease. The global effort to eradicate polio was formally launched in 1988, although many countries began polio vaccinations as part of the Expanded Programme on Immunization during the 1970s and 1980s. WHO relies on UNICEF for estimates of vaccine costs and uses its own estimates for the cost of vaccine delivery based on actual experience in countries around the world. 
While the World Health Assembly originally targeted polio for eradication by the year 2000, most experts we consulted said that polio is on track for eradication by 2002 and certification by 2005. However, some experts raised concern about whether less developed countries will maintain the required level of commitment to polio vaccinations and surveillance until eradication is achieved. In addition, a 1997 WHO report raised concerns about some countries’ progress in meeting performance indicators for detecting and reporting acute flaccid paralysis, a key component of polio surveillance. According to WHO, unless sufficient resources are mobilized to improve detection capability, eradication cannot be certified. WHO’s cost estimate for eliminating leprosy as a public health problem included well-defined data on all key cost elements and appears to be sound. The current elimination strategy is based on the multidrug therapy program begun in 1981, so cost information is well developed. Endemic countries have made significant progress toward eliminating leprosy since the 1980s. However, WHO officials noted that it is possible that some countries with concentrated pockets of leprosy might need to continue campaigns beyond the target date of the year 2000 to reach the global leprosy elimination target of less than 1 case per 10,000 people. Despite this caution, experts generally agreed that WHO’s cost and time frame estimates for leprosy are reasonable. WHO’s measles eradication estimates are speculative. While vaccine costs are well known and based on UNICEF data, WHO officials told us that their estimates did not include the costs of certifying measles eradication and that cost estimates for other factors were low or incomplete. 
Specifically, WHO officials noted that (1) information on the number of children to be vaccinated is incomplete; (2) administrative costs may be underestimated and are in need of further refinement, and assumptions regarding the efficacy of mass campaigns may be overstated; and (3) assumptions regarding the costs of surveillance and monitoring are low because WHO did not account for inadequate health systems in some countries. Despite these limitations, WHO noted that the measles eradication estimates benefit from the experience of previous eradication efforts. The vaccine administration, surveillance, and certification costs utilize estimates from the polio eradication experience and are adjusted upward to account for difficulties in administering an injectable rather than an oral vaccine. Experts we consulted, including WHO officials, noted that there are unique challenges to eradicating measles within the estimated time frames. Measles is highly contagious, requiring even higher routine vaccination coverage than smallpox and polio. Special campaigns in varying age groups are also necessary to catch those still susceptible after vaccination because the vaccine is not 100 percent effective. Outbreaks can occur even in areas with high routine vaccination coverage. Injection safety is also a concern in the large-scale campaigns required for eradication, particularly in areas where the risk of infection with human immunodeficiency virus and hepatitis is high. In addition, diagnosis is difficult because the symptoms can mimic other, less severe infections, and surveillance is difficult because the disease can spread rapidly while laboratory analysis and confirmation are undertaken. Finally, while measles is a major cause of mortality and morbidity for children in poorer countries, according to some experts we consulted, it is not perceived to be a major public health problem by some industrialized countries.
As a result, unlike polio, some developed countries have not initiated the measles elimination efforts necessary to prepare for global eradication. More than half of the estimated cost of measles eradication is expected to be incurred by developed countries. WHO estimates that the lowest income countries will require up to $1.8 billion in external funding for measles eradication. At a February 1998 meeting in Atlanta, Georgia, over 200 disease eradication experts concluded that it is biologically plausible to eradicate measles with the current vaccine, noting that measles transmission appears to have been interrupted for variable time intervals in the Americas. According to a CDC summary of the meeting, participants recommended, among other things, that (1) developed countries proceed with measles elimination efforts as a step toward eradication; (2) less developed countries accelerate control efforts, particularly in areas with high mortality; and (3) experience from regional and country level interventions be used to refine the strategies for eventual eradication. Participants ranked measles as the disease most likely to be the next candidate for a global eradication effort. USAID officials told us that many participants, while agreeing on the technical feasibility of eradicating measles, also cautioned that further study should be undertaken to fully understand the magnitude of the effort and resources required for eradication. According to WHO and CDC, some areas are beginning to set regional elimination goals. In addition to the PAHO elimination goal for the year 2000, over 50 countries encompassing Europe and the Newly Independent States are in the final stages of adopting a goal of regional elimination by 2007, and the Eastern Mediterranean region has adopted an elimination goal of 2010. WHO’s estimate for eliminating onchocerciasis is somewhat speculative. 
It incorporates data on all key cost elements—including the costs for larvicides and drug treatment, delivery, and surveillance—but data on the size of the target population are incomplete, which could affect the cost and time frame estimates. A control program covering 11 countries in West Africa has been in place for 24 years and has almost reached its elimination goal, and a program covering 6 countries in Latin America has been ongoing since 1991. Thus, the costs for these countries are well defined. However, WHO officials told us that the amount estimated for the other 19 endemic African countries of the African Programme for Onchocerciasis Control (APOC) is more speculative because WHO is still mapping the prevalence of the disease in this area. WHO’s early estimates of the population eligible for treatment, upon which the APOC cost estimate was based, are low for some areas. The latest estimate for the population eligible for treatment in the APOC program is 42 million compared to the original estimate of 35 million. Due to the political unrest in the Democratic Republic of the Congo (formerly Zaire), WHO does not have a reliable estimate of the number of people to be treated. However, according to WHO officials, this region is probably the first or second most infected area in the world. Experts generally agreed that the ongoing West Africa and Latin America programs are on schedule and onchocerciasis is likely to be eliminated as a public health problem within the cost and time frames estimated by WHO. The APOC program started its operations in 1996 and, according to WHO, it is too early to judge whether it will achieve elimination goals within the set time frame. Although WHO included data on all five cost factors, the estimates for eliminating Chagas’ disease are understated because (1) not all countries have submitted estimates and (2) countries that are targeted for elimination of Chagas’ disease by 2010 only submitted estimates through 2005. 
Like onchocerciasis, the cost and time frame estimates vary among several regional efforts. The program for the southern portion of South America has been underway since 1991, so data from this region are more complete and based on actual experience. However, the efforts in the Central American and Andean countries only began in 1997. Costs and time frames in these countries are less certain because three countries have not submitted cost estimates, and three countries have not submitted prevalence and incidence data. Experts generally agreed that the first program in South America is on track and will probably meet elimination goals by the target date of 2005. However, they believed that the estimates for some of the other countries are likely to increase. Costs for eliminating lymphatic filariasis are very speculative. While all five direct cost factors were addressed in the estimates, WHO officials said that the data are very preliminary. Unlike its information for some of the other diseases, WHO has limited historical data on costs because formal campaigns have only recently begun in some of the 73 countries in which lymphatic filariasis is known to be present. WHO extrapolated actual program costs from the first four country programs to other countries and is continuing to develop more accurate estimates of costs based on further experience. In addition, WHO officials said that they have not completed country assessments to establish the number of people who must be treated in identified countries and to determine whether there are other endemic countries. Quantitative targets for defining elimination have not yet been established, but WHO plans to prepare a draft document with elimination definitions to be reviewed by an expert working group by the end of 1998. 
According to WHO, initial control programs show such dramatic results in reducing disease transmission that WHO believes that elimination may occur in a number of endemic areas (particularly island populations) after 5 to 6 years of effective control efforts. Experts generally agreed that the disease was a good candidate for elimination but that the costs and time frames were speculative at best. The United States currently spends about $391 million a year on these diseases. This amount includes $300 million a year on polio and measles prevention programs and leprosy treatment in the United States, and about another $91 million abroad for all seven diseases (see table 2). Most of this amount would be saved if eradication and elimination goals were met and efforts to combat them ceased or were reduced. The United States does not currently track domestic costs related to Chagas’ disease, but there have been discussions about implementing routine blood screening for it. An American Red Cross official estimated this screening could cost $25 million a year. The overall savings to the United States as a result of polio eradication are estimated to be at least $304 million a year, including about $230 million in public and private expenditures for controlling polio within U.S. borders and about $74 million for the global eradication effort. This estimate does not include the costs of caring for about eight or nine vaccine-associated polio cases that occur in the United States each year. As a donor, the United States currently funds the global polio eradication effort through CDC and USAID and indirectly through support of the Expanded Programme on Immunization. According to CDC, about 48 percent of domestic expenditures is for the cost of the oral polio vaccine and about 52 percent is for administrative costs. The U.S. polio schedule is four vaccine doses; until recently, most children received only the oral vaccine. 
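The annual expenditure figures above can be tallied directly. The following minimal sketch simply reproduces the report's arithmetic; the variable names are ours, introduced for illustration.

```python
# Tallying the annual U.S. figures cited above, in millions of dollars a year.
# The component numbers are the report's own; the variable names are ours.
domestic_spending = 300.0  # polio and measles prevention and leprosy treatment in the U.S.
overseas_spending = 91.0   # U.S. funding abroad for all seven diseases
total_spending = domestic_spending + overseas_spending
print(total_spending)      # 391.0, the "about $391 million a year" figure

# Estimated U.S. savings from polio eradication alone.
polio_savings = 230.0 + 74.0  # domestic control plus the global eradication effort
print(polio_savings)          # 304.0, the "at least $304 million a year" figure
```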
For purposes of estimating savings to the United States with eradication, CDC estimates an additional $20 million a year may be incurred due to a 1996 CDC recommendation to administer two doses of the more expensive injectable vaccine before administering two doses of oral vaccine. Unlike the injectable polio vaccine, the oral vaccine is a live, attenuated vaccine that causes disease in several people each year in the United States. Providing the injectable vaccine first in the vaccine schedule will lessen the possibility of provoking disease from the oral vaccine. However, the oral vaccine is the vaccine of choice for eradication because, unlike the injectable vaccine, it prevents the wild poliovirus from readily multiplying in the gut and thus stops person-to-person transmission. The overall savings to the United States as a result of eradicating measles are estimated at a minimum of $61.7 million a year, including about $50 million for domestic vaccine costs and about $11.7 million for global measles control efforts. CDC estimates that it spent an additional $1.3 million on domestic measles research in 1997. The $50 million spent in the United States only includes the cost of the vaccine and not administration costs because immunization against measles is included in the vaccine for mumps and rubella, and the United States would continue administering mumps and rubella vaccines even if measles were eradicated. Therefore, projected savings are not as large as for the eradication of polio. Additional savings would be realized from preventing periodic measles epidemics in the United States; the last measles epidemic of 1989-91 cost $150 million, not including costs associated with lost productivity. For the other tropical diseases we reviewed, U.S. savings from eradication or elimination are estimated at about $25 million. The U.S. 
Department of Health and Human Services spends approximately $20 million a year to treat a small number of leprosy patients in the United States. However, without eradication of the disease, it is likely that the United States would continue to have a small number of cases. USAID funds the dracunculiasis eradication effort at $500,000 a year and the onchocerciasis effort at $3.5 million a year. CDC spends about $1 million for overseas efforts against dracunculiasis, Chagas’ disease, and onchocerciasis. Eradicating dracunculiasis and eliminating onchocerciasis, Chagas’ disease, and lymphatic filariasis will remove or reduce the need for U.S. assistance. In addition, as previously discussed, U.S. blood banks may begin screening donated blood for Chagas’ disease due to a significant number of infected Latin American immigrants in certain areas of the United States. Screening requirements might be reduced or unnecessary at some point if a successful elimination effort diminished the threat to the U.S. blood supply. International public health experts at CDC and Johns Hopkins University and a 1993 report by the International Task Force for Disease Eradication (ITFDE) identified a number of diseases that pose threats to the United States and that are technically possible to eradicate. Diseases commonly mentioned include rubella, mumps, hepatitis B, and Hib. The ITFDE concluded that mumps and rubella could probably be eradicated and that the transmission of hepatitis B could be eliminated by universal vaccination. While these diseases generally meet the technical criteria for eradication, we discuss in the following paragraphs some of the challenges to initiating campaigns at this time and WHO’s position on eradicating these diseases. CDC officials suggested that rubella and mumps could be considered candidates for eradication as part of a measles eradication effort, since they are often included as part of a trivalent vaccine against measles, mumps, and rubella.
Their inclusion would result in significant increased savings to the United States because, without the eradication of rubella and mumps, most of the cost of the measles vaccination—vaccine administration—would continue to be incurred after measles eradication. CDC estimated U.S. savings from eradicating measles, mumps, and rubella at about $255.5 million a year. According to WHO and CDC officials, rubella constitutes a significant health burden in the form of birth defects and is being discussed as an elimination initiative for the Americas. As with polio and measles, a successful strategy in the Western Hemisphere would likely be a model for global eradication. Challenges to eradication are difficulties in diagnosis and the additional costs, particularly for developing countries. WHO said that, because the global burden of mumps is relatively low or unknown in some areas, the costs of an eradication effort would be difficult to justify. According to WHO and CDC officials, the viral disease hepatitis B may be a candidate for eventual eradication because the vaccine is effective and relatively inexpensive—about 50 to 75 cents per dose. In addition, a good diagnostic tool is available and it appears that humans are the only reservoir for the disease. Hepatitis B is considered a major public health threat because it often progresses to cancer. Almost 1.2 million deaths result each year from hepatitis B, usually from liver cancer or chronic liver disease. The National Science and Technology Council and the National Institutes of Health estimate that the United States spends about $720 million each year in direct and indirect costs related to hepatitis B. CDC estimates that U.S. public and private sectors spend from $308 million to $383 million a year for hepatitis B vaccines alone. 
According to CDC officials and the ITFDE report, the major barrier to eradication is that it would take decades to achieve because some people are chronic carriers and would have to die before the disease could be considered eradicated. Hib is a bacterial infection that is the most common cause of childhood meningitis and, like hepatitis B, poses a serious global disease burden, including 400,000 to 700,000 deaths each year among children in developing countries. The U.S. public and private sectors spend about $162 million a year on Hib vaccines. According to CDC officials, this disease has potential for eradication but more needs to be known about the vaccine before it could be an eradication candidate. WHO has made Hib a priority for introduction to routine childhood immunization, but cost is a barrier. The vaccine costs $1 to $2 per dose, which would substantially increase the vaccine costs of the Expanded Programme on Immunization. According to WHO officials, due to the public health burden associated with rubella, hepatitis B, and Hib and the success in controlling the diseases in some parts of the world, these three diseases could be eventual candidates for eradication. However, WHO officials noted that, due to the high costs associated with eradication efforts, political will and popular support are as critical to any eradication effort as the technical ability to achieve success. As a result, they said that it is important to limit the number of ongoing efforts and that they do not support adding campaigns at this time. They noted that other diseases could be considered as eradication candidates after success with the currently targeted diseases is achieved. Other infectious diseases pose a growing threat to the United States but do not have characteristics that make them amenable to eradication. 
During congressional testimony last year, a WHO official noted several other diseases—in addition to human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS)—that continue to be major public health problems, globally and in the United States. For example, malaria, which causes about 500 million infections and 2 million to 3 million deaths outside the United States each year, is imported into the United States about 1,000 times each year. In some instances, malaria is then transmitted locally by mosquitoes present in the United States. During 1996, a tourist to Latin America returned to Tennessee with yellow fever. According to the WHO witness, if mosquitoes in Tennessee had become infected with yellow fever from this patient, they could have caused an epidemic in the United States similar to the one that caused high mortality in the southern United States at the beginning of the 20th century. Outbreaks of dengue fever, another mosquito-borne disease, have occurred in more than 100 tropical and subtropical countries, including recent epidemics in Central America. WHO reported 138,000 deaths from dengue in 1996. Tuberculosis accounts for about 8 million new cases worldwide each year, with a new infection occurring every second, and caused 3 million deaths in 1996. Finally, influenza, a viral disease, causes between 10,000 and 40,000 deaths each year in the United States alone. These diseases are not likely candidates for eradication over the next generation for a variety of reasons, although it is possible to control disease transmission in some instances. According to the ITFDE, eradicating malaria has proven difficult due to the lack of an effective vaccine, resistance of some mosquitoes to insecticides, and resistance of some malaria parasites to treatment.
Although an effective vaccine for yellow fever has been available for more than 50 years, it has only recently been standardized in freeze-dried form so that its stability, both in the freeze-dried and reconstituted form, resembles measles vaccine. According to WHO officials, the additional cost is proving a major constraint to having endemic countries include it in their routine childhood immunization programs. Yellow fever cannot be eradicated because humans are not the only reservoir for infection—an animal reservoir also exists. No effective treatment is available for dengue fever; the primary intervention is mosquito control, and a possible monkey reservoir for dengue infection is suspected. The need for improved diagnostic tests, chemotherapy, and vaccines is cited as an obstacle to eradicating tuberculosis; emerging drug-resistant strains of the bacterium causing tuberculosis have complicated control programs. Finally, influenza reemerges worldwide each year in a new form and is highly infectious; the yearly vaccines are only partially effective. The ITFDE reported that an animal reservoir is also suspected for influenza. According to the literature and experts with whom we met, the primary lesson learned from the smallpox initiative was that disease eradication can be technically feasible. The smallpox campaign provided valuable institutional knowledge on the role of community, national, and international mobilization. Eradicating smallpox also meant that costly programs for immunizations and treatment of infected cases were no longer needed. However, unlike most of the diseases that are currently candidates for eradication, smallpox had unique characteristics that made it particularly vulnerable to eradication and therefore has limitations as a model for current efforts. As the first and only disease to be eradicated through human intervention, smallpox is used as evidence that disease eradication is technically feasible.
According to some experts, the smallpox effort yielded lessons that have since been applied to other disease control and health care efforts, such as the role of surveillance and the ability to garner resources for massive campaigns. The considerable amounts spent on smallpox prevention and treatment ceased after eradication, resulting in considerable savings. Using 1967 estimated smallpox costs as a baseline measure for savings from smallpox eradication and adjusting for annual birth rates, we estimated the cumulative present value global savings in 1997 dollars for the post-eradication period 1978-97 at $168 billion. This amount included vaccinations, treatment, and loss of economic productivity for developing countries. For the United States, cumulative savings from smallpox eradication are estimated at $17 billion. The United States spent about $610 million in 1997 dollars for domestic smallpox control in 1968 and about $130 million in 1997 dollars during 1968-77 on the overseas eradication effort. We estimated the annual real rate of return for the United States at about 46 percent per year since smallpox was eradicated. Smallpox had the characteristics that experts consider desirable for eradication. The disease was easily diagnosed, and all infections resulted in visible symptoms. The smallpox vaccine was effective with only one dose, stable in heat, and inexpensive. Polio and measles share many of the desirable eradication characteristics of smallpox, including being viral agents with human-only reservoirs, having effective interventions available to interrupt transmission, and providing long-lasting immunity after vaccination. However, certain differences exist. For example, smallpox was less infectious than either polio or measles. Polio is difficult to diagnose without laboratory confirmation because the vast majority of infections show no symptoms, and the paralytic manifestations of polio can be due to other causes. 
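The cumulative present-value savings figure for smallpox described above compounds each post-eradication year's avoided costs forward to 1997 and sums them. The sketch below shows only the general form of such a calculation; the annual savings amount and real rate used here are hypothetical placeholders, not the inputs underlying our $168 billion estimate.

```python
# General form of a cumulative present-value savings calculation, expressed
# in end-year dollars. All inputs below are hypothetical, for illustration only.

def cumulative_savings_in_end_year_dollars(annual_savings, real_rate, start_year, end_year):
    """Compound each year's avoided cost forward to end_year and sum."""
    total = 0.0
    for year in range(start_year, end_year + 1):
        total += annual_savings * (1.0 + real_rate) ** (end_year - year)
    return total

# Hypothetical inputs: $1 billion a year avoided, 3 percent real rate, 1978-97.
total = cumulative_savings_in_end_year_dollars(1.0e9, 0.03, 1978, 1997)
print(round(total / 1e9, 1))  # about 26.9 billion under these hypothetical inputs
```

An analogous compounding of the actual avoided-cost streams, adjusted for annual birth rates as described above, underlies the savings figures in this section.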
In addition, while the oral vaccine is easy to administer and does not always require trained health workers, up to four doses are recommended, and the vaccine is sensitive to heat, requiring refrigeration until administered. Similarly, measles is not as easily diagnosed as smallpox and is much more infectious. Because the measles virus spreads so easily and the diagnosis may present difficulties, the surveillance and containment strategies used for the smallpox eradication campaign are not as effective for measles, and a surveillance strategy uniquely tailored to measles is required. Even in the United States, where transmission of the measles virus has essentially been interrupted since 1993, occasional outbreaks still occur due to imported virus. Dracunculiasis is very different from smallpox since it is a parasitic disease and not vaccine preventable. However, like smallpox, it is vulnerable to eradication efforts primarily because the interventions are inexpensive and effective, and the infection is easily diagnosed. Simply using a water filter and keeping infected persons out of the water supply can stop transmission of the disease. The main barriers to eradication within the time frames set by WHO are ongoing civil strife in the endemic regions of Africa and a potential lag in national and donor support for a disease that is found mostly in isolated rural areas. The soundness of WHO’s cost and time frame estimates for eradicating and eliminating these seven diseases varies for each disease. The estimates are most sound for diseases where eradication or elimination campaigns have been underway for several years. For the other diseases, complete data are unavailable so the estimates are more speculative. WHO officials acknowledge their estimates are a snapshot in time, based on the information then available. They also pointed out that they are continuously revising their assumptions and the data underlying cost factors to refine the estimates. 
For some of the diseases, WHO indicated that obtaining good data will be difficult because many developing countries do not have good disease surveillance systems or the health infrastructure to collect and report the information. Moreover, WHO indicated that external factors, such as civil strife and government commitment to disease eradication and elimination, can influence the cost and time frame estimates. The United States is spending a significant amount to combat these diseases domestically and overseas, most of which could be saved if eradication and elimination efforts are successful. In addition, other diseases posing significant public health problems and costs for the United States may be potential candidates for eradication and possible U.S. savings if the current strategies prove successful. WHO, the State Department, CDC, and USAID provided written comments on a draft of this report. Their responses and our evaluation, where appropriate, are printed in appendixes X through XIII. WHO, CDC, and USAID also provided technical comments, which we incorporated as appropriate. WHO stated that the report fairly reflects the processes it is using to estimate the costs and time frames associated with global eradication or elimination of the seven diseases. WHO pointed out that, as we state in our report, such estimates are most complete for those diseases with long-standing campaigns and closer target dates and that all estimates are refined as new information becomes available. WHO noted that successful campaigns against a disease must build on and build up strong national and international health infrastructure, such as routine immunization, disease reporting systems, trained health workers, and laboratory capacity. WHO stated that the explanations in the report appendixes about the unique challenges faced by each campaign should prove useful to decisionmakers in focusing on these important contextual dimensions. 
The State Department stated that our report provides a comprehensive analysis of WHO's estimates. State noted that estimates are inexact and should not become an unrealistic yardstick for measuring costs. State also said that the value of investments in eradication and control should provide support for U.S. investment in bilateral and multilateral programs associated with campaigns against diseases. However, State pointed out that it is important to maintain a balance between eradication and elimination programs and other vital health care programs. State indicated that resources should not necessarily be diverted to eradication programs from other important health activities because, while the results of those activities may not be as dramatic, they are nonetheless essential.

CDC discussed the benefits of eradication programs, citing the 46 percent annual return on investment we estimated for smallpox and the $300 million that could be saved by the United States as a result of polio eradication. CDC added that these costs will be saved in perpetuity. CDC also noted that it appreciated our "recognition of the value of disease eradication and elimination programs." However, we did not assess the value of eradication or elimination programs. Rather, our work focused on WHO's estimates of program costs and potential U.S. savings based on current expenditures.

USAID commented that in general our report was comprehensive and informative. However, USAID expressed concern that we did not fully consider the costs and concerns regarding disease eradication and as a result we imply that there is global consensus on the eradication potential of the seven diseases reviewed. In particular, USAID said that we did not consider the financial and opportunity costs to health systems of eradication campaigns and that we implied a consensus on the feasibility and soundness of measles eradication.
USAID said that eradication campaigns can be disruptive to primary health care systems and may result in an unfortunate reduction in efforts to prevent other diseases. As recognized by USAID, our report clearly states that our objective was to assess the soundness of WHO's estimates. We did not assess the potential impacts of eradication or elimination campaigns on national health care systems. In addition, we do not imply that there is a global consensus on measles. In fact, our report specifically discusses many of the experts' views and the challenges facing eradication and elimination campaigns, particularly for measles.

Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Director General of WHO, the Secretary of State, the Director of CDC, the Administrator of USAID, and other interested congressional committees. Copies will be provided to others upon request.

Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are Lynne Holloway, Audrey Solis, Ann Baker, and Bruce Kutnick.

The seven diseases: transmission, effects, burden, interventions, progress, and obstacles

Dracunculiasis (guinea worm disease)
Mode of transmission: Drinking water contaminated with water fleas that carry the larvae of the parasite.
Symptoms and effects: Adult worm (up to 1 meter in length) migrates through the body, usually emerging painfully through the foot and causing illness and incapacitation for weeks or months.
Estimated global health burden (selected data): Temporary illness and incapacitation in every case.
Interventions: Water filters or other water safety measures to prevent ingestion of parasite; prevention of persons with emerging worms from entering drinking water supply.
Progress: Global prevalence reduced by 97% between 1986 and 1996. Eradication certified in Pakistan in 1997.
Obstacles: Civil unrest in Sudan, where about 75% of cases now occur.

Polio
Mode of transmission: Human to human, via contact with feces of an infected person.
Symptoms and effects: Usually no or mild symptoms; attacks the central nervous system and may cause aseptic meningitis (in 5%-10% of cases), paralysis or reduced breathing capacity (in less than 1% of cases), or death.
Estimated global health burden: Deaths: 1,750 (1997). Paralysis: 10 million-20 million total cases.
Progress: Elimination of the wild virus in the Americas certified in 1994. Global prevalence reduced by over 90% since 1988.
Obstacles: Need to maintain vaccination coverage of 90% in all countries until eradication effort is complete. Inadequate surveillance of acute flaccid paralysis in some countries.

Leprosy
Mode of transmission: Believed to be primarily human to human, via droplets from respiratory tract of a severely infected person, but exact mode of transmission is not fully understood.
Symptoms and effects: Slowly affects skin, nerves, and mucous membranes; can lead to permanent damage to nerves, bones, eyes, and other organs and deformities of face and extremities after many years.
Estimated global health burden: Deaths: 2,000 (1996). Disabilities: 1 million-2 million total cases.
Progress: Global prevalence reduced by 84% since 1985 with the introduction of multidrug therapy.
Obstacles: Need to detect hidden cases and reach patients in remote and underserved areas.

Measles
Mode of transmission: Human to human, via droplets from respiratory tract of an infected person.
Symptoms and effects: High fever, malaise, conjunctivitis, congestion, and cough, followed by rash; may lead to serious complications or death, especially from secondary infections.
Estimated global health burden: Deaths: 961,000 children (1997).
Progress: Incidence reduced 99% since 1990 in the Americas. Transmission interrupted briefly in some countries, including the United States.
Obstacles: High infectiousness requires very high vaccination coverage (95% or higher). Measles is not perceived as a major burden by many developed countries, which results in poor surveillance and lack of willingness to improve control.

Onchocerciasis (river blindness)
Mode of transmission: Bite of blackflies that carry the larvae from human to human.
Symptoms and effects: Adult worms lodge in nodules under the skin; immature worms move through the body, causing intense itching, skin disease, swollen genitals, and visual impairment or blindness.
Where found: 36 countries in Africa and the Americas, plus Yemen (99% of cases are in Africa).
Estimated global health burden: Deaths: 47,000 (1996). Blindness: 270,000 cases. Other visual impairment: 500,000 cases. Skin disease: 6 million cases. (Above are totals.)
Interventions: Drug treatment; insecticide spraying to control blackflies.
Progress: In West Africa, near elimination in original program area (seven countries), 1.5 million cured, and blindness prevented in 185,000.
Obstacles: Need to sustain implementation of long-term, community-based drug treatment. Possibility of development of resistance to drug.

Chagas disease
Mode of transmission: Contact with feces of certain parasite-carrying insects that bite humans; also transmitted through blood transfusions and congenitally.
Symptoms and effects: Initial acute phase may cause illness or, rarely, death; possibly fatal damage to heart and digestive tract may occur in chronic phase many years after infection.
Estimated global health burden: Deaths: 45,000 per year. Chronic complications: 2 million-3 million total cases.
Interventions: Insecticide treatment of houses to control insects; blood screening to prevent transmission through blood supply; drug treatment for acute and congenital cases.
Progress: Transmission interrupted in Uruguay in 1997. Significant reductions in house infestation and prevalence of human infection in Argentina, Brazil, and Chile.
Obstacles: Insect carriers in Andean and Central American countries cannot be controlled by household insecticides and will require development of new strategies.

Lymphatic filariasis
Mode of transmission: Bite of mosquitoes that carry the larvae from human to human.
Symptoms and effects: Adult and immature worms damage the lymphatic ducts, causing gross swelling and sores on limbs, genital areas, and breasts and damage to lymphatic and renal systems.
Estimated global health burden: Swollen limbs and genitals and lung disease: 44 million total cases. Preclinical damage to organs: 76 million total cases.
Interventions: Drug treatment or regular use of drug-fortified table salt to kill immature worms; limited control of mosquito populations; hygiene measures, antibiotics, and antifungal agents to treat effects of the disease.
Progress: A few national control programs are underway. SmithKline Beecham recently agreed to donate one drug (albendazole) to all endemic countries.
Obstacles: National and international funding commitments are uncertain.

Note: The number of reported disease cases is generally less than the number of actual cases. For dracunculiasis, the World Bank estimated that the total number of cases in 1996 was 330,000.

Dracunculiasis is caused by the parasite Dracunculus medinensis, or guinea worm. Infection occurs by drinking water contaminated with the intermediate hosts (water fleas) of the parasite. Once a person is infected, the worm migrates throughout the body, growing to a length of up to 1 meter. About a year after infection, the worm emerges from the body, normally through the foot, causing an intensely painful swelling and blister. Perforation of the skin is accompanied by fever, nausea, and vomiting. Secondary infections are common and can cause permanent deformity of the joints. Although the infection rarely kills, it inflicts intense suffering and sickness for at least several months, and a small percentage of victims may become permanently disabled.

The diagnostic tools for dracunculiasis are visual and testimonial. Health workers and trained villagers can see the emerging worms or the scars from previous infection and take the testimony of the victim. In endemic countries, the disease typically appears during the agricultural season, with farmers in particular being affected. A United Nations Children's Fund (UNICEF) study of an area in Nigeria with 1.6 million people found that rice farmers lost about $20 million a year due to the effects of the disease on their ability to harvest.
A World Bank study showed an economic rate of return of 29 percent for the eradication program for 1987-98, based on what it acknowledged was a conservative assumption of 5 weeks for the average disability period caused by infection. According to the World Bank study and a Carter Center expert on dracunculiasis, the average period of disability is about 8 weeks.

Dracunculiasis is present in Yemen and 16 countries in Africa, 10 of which are considered least developed countries. Last year, Pakistan was the first endemic country to be certified free of dracunculiasis; India and Kenya recently reached zero cases. The number of endemic villages decreased from about 23,000 in 1992 to 9,900 in 1996; reported cases during the same period fell from 422,555 to 152,814, according to the World Health Organization (WHO).

Dracunculiasis eradication has been divided into three major phases—interruption of transmission in endemic countries, surveillance in formerly endemic countries, and certification that countries are free of the disease. Because no vaccine or drugs exist to prevent dracunculiasis or to kill the worm inside the body, interrupting transmission of the disease is the basis of eradication. The strategy promoted in endemic countries combines several approaches, including community-based surveillance, case containment measures, and targeted interventions such as provision of safe water, health education, community mobilization, distribution of filters, and treatment of selected water sources.

According to WHO, the most powerful tools in monitoring eradication of dracunculiasis are village-based surveillance and case containment strategies. For effective surveillance, cases should be identified prior to worm emergence or within 24 hours after the worm appears. Due to the intense pain as the worm emerges, victims often put their foot in the nearest water source, thereby releasing the larvae back into the water to reproduce and continue the contamination.
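The scale of the declines reported above follows directly from the WHO counts just cited. A quick arithmetic check, sketched in Python using only the figures quoted in this appendix:

```python
def percent_decline(start, end):
    """Percentage decrease from a starting count to an ending count."""
    return (start - end) / start * 100

# Endemic villages, 1992-96 (WHO figures quoted above)
villages = percent_decline(23_000, 9_900)

# Reported cases, 1992-96 (WHO figures quoted above)
cases = percent_decline(422_555, 152_814)

print(f"Endemic villages down {villages:.1f}%; reported cases down {cases:.1f}%")
```

Reported cases fell by roughly 64 percent and endemic villages by roughly 57 percent over the period; the larger 97 percent figure in the table covers the longer 1986-96 span.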
Once a case is identified, containment measures are initiated, the wound is bandaged to help prevent further transmission, and the patient is advised to avoid contact with stagnant water. The community is educated regarding prevention and containment and encouraged to filter or boil drinking water. According to WHO, this strategy has proven very effective and has been implemented in almost all endemic villages, except in Sudan. Other methods to provide safe drinking water include digging bore-hole wells and treating water sources with larvicide. Wells are considered the best option because they provide protection against diarrheal diseases. However, such interventions are more expensive.

Experts agree that eradication of dracunculiasis is feasible and no technical obstacles exist. The relatively simple interventions for interrupting transmission and the community-based surveillance network are effective. Potential obstacles to achieving eradication within the time frames set by WHO include ongoing civil unrest and unanticipated upheavals in health, communications, and transportation infrastructure. Some experts are concerned about sustaining donor and national support for eradicating a disease rarely seen outside rural and often remote areas; they caution that such support must be maintained to achieve eradication.

Polio is an infectious disease caused by any of three related types of poliovirus that mostly affect children under three. The virus usually enters through the nose or mouth and multiplies in the throat and intestines. Poliovirus can enter the bloodstream and invade the central nervous system. As it multiplies, the virus destroys the motor neurons that activate muscles. These nerve cells cannot be regenerated, and the affected muscles no longer function. Muscle pain, spasms, and fever are associated with the rapid onset of acute flaccid paralysis.
In the most severe cases, poliovirus attacks the motor neurons of the brain stem, reducing breathing capacity and causing difficulty in swallowing and speaking. Without adequate respiratory support, this type of polio can result in death by asphyxiation. Although paralysis is the most visible sign of polio infection, less than 1 percent of polio infections result in paralysis. About 90 percent of cases produce either no or mild symptoms and usually go unrecognized. The remaining cases involve mild, flu-like symptoms common to other viral infections but do not result in paralysis. About 5 to 10 percent of all polio infections result in aseptic meningitis, a viral inflammation of the outer covering of the brain. There are no animal or insect reservoirs or long-term human carriers. Once deprived of its human host, poliovirus will rapidly die out. While most people are unaware of their infection, they can shed the virus intermittently in feces for several weeks. This enables the rapid spread of poliovirus, especially in areas with poor sanitation and hygiene, but also in any environment in which young children, not yet fully toilet trained, are a ready source of poliovirus transmission. Poliovirus circulates “silently” at first—possibly infecting up to 200 people before the first case of polio paralysis emerges. Due to this silent transmission and the rapid spread of the virus, WHO considers a single confirmed case of polio paralysis to be evidence of an outbreak. Protective immunity against polio is established through immunization or as a result of natural infection with the virus. Polio infection provides lifelong immunity to the disease but the protection is largely limited to the particular type of poliovirus involved and may fail to protect against the other two types. Immunization provides protection against all three types of poliovirus. 
The last case of indigenous polio in the Western Hemisphere was reported in Peru in August 1991; the Pan American Health Organization (PAHO) certified the eradication of polio from the Americas in 1994. In 1996, 155 countries and territories reported zero cases of polio. Polio is still considered endemic in 61 countries, mostly in Africa and Asia. Before 1996, India accounted for over half the world's polio cases every year; however, India's polio eradication strategy has recently reduced its share to about 25 percent of cases worldwide. It is estimated that about 10 million to 20 million people of all ages are living with paralysis due to polio. The number of reported cases was 4,074 in 1996—a decline from 35,251 reported in 1988. However, due to incomplete epidemiological surveillance in many countries, WHO estimates that approximately 35,000 to 40,000 cases of paralytic polio occurred in 1996. Before the development of polio vaccines, it is estimated that about 500,000 people a year were paralyzed or died after contracting the disease.

WHO's strategy for polio eradication has four components: routine immunization coverage, supplemental immunization in the form of mass campaigns or national immunization days, effective surveillance, and door-to-door campaigns ("mop-ups") in the final stages in areas where the virus persists. According to WHO, routine coverage with four doses of oral vaccine is needed among infants to reduce the incidence of polio and make eradication feasible. Unless high routine coverage is maintained, pockets of nonimmunized children accumulate, creating ideal conditions for the spread of the virus. National immunization days are intended to supplement routine immunization. In polio endemic countries, this usually means organizing two rounds of national immunization days a year, 1 month apart, over at least 3 years or until circulation of the virus is interrupted in the country.
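The gap between the 4,074 reported cases and WHO's estimate of 35,000 to 40,000 actual cases gives a rough sense of how incomplete surveillance was in 1996. A small illustrative sketch in Python; "completeness" here is simply reported cases divided by estimated cases, a measure constructed for this example rather than one defined in the report:

```python
reported_1996 = 4_074                       # reported paralytic polio cases, 1996
estimated_low, estimated_high = 35_000, 40_000  # WHO's estimated range of actual cases

# Reported cases as a share of the estimated true burden
share_vs_low = reported_1996 / estimated_low * 100    # against the low-end estimate
share_vs_high = reported_1996 / estimated_high * 100  # against the high-end estimate

print(f"Surveillance captured roughly {share_vs_high:.0f}%-{share_vs_low:.0f}% of cases")
```

On these figures, reporting systems captured only about one case in ten, which underlines why WHO treats acute flaccid paralysis surveillance as a prerequisite for certifying eradication.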
For the poorest endemic countries, where health, communications, and transportation systems are most deficient, WHO estimates that 5 years of national immunization days may be necessary. Surveillance is needed to pinpoint where and how the wild poliovirus is still circulating and to verify when it has been eradicated. Health care workers are asked to report every case of acute flaccid paralysis in any child under 15. The number of cases reported each year is used as an indicator of the effectiveness of a country’s surveillance system. Because it is often difficult to tell whether a case of acute flaccid paralysis is caused by polio, WHO recommends laboratory-based surveillance in addition to collecting clinical and epidemiological information. Early detection and testing are essential because the highest concentrations of the virus are found during the first 2 weeks after the onset of paralysis. Precise information on the patterns of poliovirus spread is considered essential in developing strategies for global eradication. Finally, following up on surveillance data, mop-up campaigns are conducted door to door to provide two doses at 1-month intervals to immunize all children under 5 in high-risk districts regardless of the child’s immunization status. As the more developed countries reach eradication goals, the least developed countries are just beginning to conduct national immunization days and increase routine coverage. The poorest countries are least able to support vaccine programs. In the countries of the Americas, national funding averaged 80 percent of the costs, and campaigns were started in countries with generally higher routine vaccine coverage than in most African countries. WHO estimates that the poorest countries fund about 25 to 75 percent of the costs and, in countries affected by conflict, 100 percent of the costs may need to be funded from external resources. 
Many of the least developed and most unstable countries are unable to reach the majority of their population with even the most basic health services. Some academic experts also state that, while local mobilization for supplemental campaigns can be sustained for 2 or 3 years, the volunteer spirit dissipates as the disease appears to be under control. At that point, supplemental campaigns tend to become more expensive. At the same time, WHO fears that "donor fatigue" may set in and the competing needs for funds to combat other infectious diseases—some more widespread and life-threatening than polio—will slow the eradication momentum. According to U.S. Agency for International Development (USAID) officials and several academic experts, eradicating polio is not a priority for developing countries compared to controlling malaria, tuberculosis, acquired immunodeficiency syndrome (AIDS), and diarrheal and respiratory diseases. These experts assert that, if eradication is to be achieved, industrialized countries, which will enjoy greater benefits from eradication, need to assume a substantial part of the cost.

Developing a surveillance system is a long-term process that must be maintained until eradication is certified. Surveillance of acute flaccid paralysis poses special difficulties in countries with inadequate health, transportation, and communication infrastructures. According to WHO, of the 61 countries where polio is endemic, less than 10 percent are meeting the essential criterion of reporting at least 1 case of acute flaccid paralysis for every 100,000 children under 15. Moreover, by the end of 1996, 25 polio-endemic countries had not officially established a surveillance system for acute flaccid paralysis, a crucial requirement for certifying eradication.
In some countries, infrastructures have been destroyed by war and neglect, vaccine supply lines cut off, and immunization programs suspended, setting the stage for an upsurge in polio and other vaccine-preventable diseases. War-related outbreaks of polio occurred in Chechnya in the Russian Federation in 1995, in Iraq during 1992 and 1993, and in Sudan in 1993. Today, emerging polio-free areas are threatened by continuing unrest in Afghanistan, Angola, Iraq, Liberia, Somalia, Sudan, and the Democratic Republic of the Congo (formerly Zaire). However, as some officials have pointed out, unrest existed in several countries near the end of the smallpox eradication effort, yet political pressure and massive, military-style campaigns allowed health workers to deliver the vaccine.

Leprosy is a chronic infection caused by a bacillus that multiplies very slowly and mainly affects the skin, nerves, and mucous membranes; infection may lead to permanent disfigurement, disability, and deformity. Humans are the primary reservoir for leprosy, although some wild animals, such as the armadillo in the southwestern United States, may also serve as reservoirs. The transmission cycle of the disease is not fully defined, but it is generally accepted that infected humans serve as the source for all human infections, most likely through droplets spread from more severe cases.

Leprosy cases are diagnosed through existing health facilities. Minimum diagnostic procedures include clinical examination and a skin smear. Detection of leprosy remains a challenge because leprosy patients are often ostracized from society or they are ashamed of the disease and hide themselves from public view. Leprosy remains a public health problem in 55 countries, but only 16 of these are considered seriously endemic, accounting for 91 percent of the cases.
At the beginning of 1997, there were about 1.15 million leprosy cases, a significant decrease from the 10 million to 12 million estimated cases in 122 countries in 1985. The overall strategy for eliminating leprosy is to ensure cases are identified and patients have access to treatment. Leprosy cases are divided into two general categories. Paucibacillary cases are those that have fewer bacteria—normally fewer than 1 million bacilli in a gram of skin tissue. Multibacillary cases—the most serious and infectious cases—may have more than 100 billion bacilli.

Leprosy is curable with a combination of drugs—dapsone, rifampin, and clofazimine—known as multidrug therapy. This combination has prevented the bacillus from becoming resistant to any one of the three drugs. According to the Centers for Disease Control and Prevention (CDC), for paucibacillary patients, the treatment is six doses of rifampin within a 6-month period plus daily dapsone. Until recently, multibacillary patients received 24 doses within a 24- to 36-month period. In June 1997, however, the Expert Committee on Leprosy recommended reducing treatment for multibacillary patients to monthly doses of rifampin for 12 to 18 months plus daily dapsone.

In most countries, multidrug therapy services have reached patients who have easy access to the health care system. However, certain areas in some endemic countries have patients who have not been reached because there is no health infrastructure to deliver multidrug therapy, the present geographical coverage is poor, or the health services for delivering multidrug therapy are not operating properly. To reach these patients, leprosy elimination campaigns and special action projects have been established so that elimination goals can be achieved. Campaigns are based on three elements: diagnosing and treating patients, increasing community awareness and participation, and establishing capacity-building measures for health workers.
While WHO and other experts agreed that the elimination program has been largely successful, they noted several factors that may affect achieving elimination by the year 2000. In densely populated countries with significant numbers of infected people, large declines in cases, even as much as 95 percent, may not be enough to reach the elimination target. Civil unrest and difficult conditions in countries such as Sudan, Nigeria, Sierra Leone, and the Democratic Republic of the Congo (formerly Zaire) may delay detection, treatment, and surveillance. Complacency may also become a problem as some countries believe they have done a good job and cease conducting campaigns. Finally, leprosy patients are often ostracized and hidden, making case identification difficult and possibly slowing progress toward elimination of leprosy.

Measles is a highly contagious viral disease that mostly affects children. Before vaccines were available, almost everyone eventually acquired measles, usually as a young child. The virus is transmitted by droplets or airborne spray from the respiratory tract of infected individuals to mucous membranes in the upper respiratory tract or eyes of susceptible persons. Secondary attack rates among susceptible household members are reported to be more than 80 percent. Humans are the only known reservoir for measles infection, although some primates can be infected.

Protective immunity against measles is established either through immunization or as a result of natural infection with the virus. Global immunization coverage of infants is estimated at about 80 percent; in WHO's Africa region, the rate is only about 56 percent. The virus is not expected to develop a resistance to the vaccine. The clinical diagnosis of measles can be difficult, particularly as incidence decreases, making surveillance a challenge. Measles symptoms develop approximately 10 days after exposure.
The early symptoms of high fever, malaise, conjunctivitis, upper respiratory congestion, and cough are followed after 2 to 4 days by a rash that lasts several days. The patient is most infectious during the earlier phase but can transmit the virus during the first 3 to 4 days after the rash appears. Communicability generally decreases rapidly after the appearance of the rash. Rashes due to other causes, such as other viruses and drug reactions, and accompanied by similar symptoms, are easily confused with measles.

About 1 million deaths each year are attributed to measles, the vast majority of them children under age 5 in developing countries. About 30 million more nonfatal cases occurred in 1997. Complications such as ear infections, pneumonia, croup, and diarrhea are common in young children, and acute encephalitis occurs in about 1 of every 1,000 cases. Measles is more severe among malnourished children in developing countries.

For the most part, measles transmission has been interrupted in the Americas and the United Kingdom. According to CDC, measles reached record low levels in the United States during 1997, with a provisional total of 135 cases reported. However, measles outbreaks may still occur in the United States and other developed countries that have maintained high immunization coverage.

Measles elimination refers to the interruption of transmission of the virus in a sizable geographic area in which vaccination would nevertheless need to continue because reintroduction of the virus is an ongoing threat. Eradication is the global interruption of measles transmission, representing the sum of successful elimination efforts in all countries. Once eradication is achieved, vaccinations could be stopped without risk of future measles outbreaks. Estimates of the appropriate level of population immunity needed to stop transmission of the virus vary.
Many variables affect transmission, such as population density, living patterns, and temperature and humidity, but the consensus is that transmission is very efficient. Outbreaks have been reported in populations in which as few as 3 to 7 percent of individuals were susceptible. Current estimates of the routine coverage needed range from 90 to 95 percent or higher, and some experts suggest that 97 percent may not be enough under certain conditions.

WHO is using PAHO's measles elimination strategy as guidance in developing a possible global measles eradication initiative. This strategy aims to (1) rapidly interrupt measles transmission by initially conducting mass campaigns and (2) maintain interruption of transmission by sustaining high population immunity through vaccination of infants at routine health services facilities supplemented by periodic mass campaigns. Surveillance of both symptoms and virus transmission is to be a key part of this strategy.

Many countries have made significant progress in decreasing the transmission of the measles virus; in the Americas, measles incidence decreased by 99 percent between 1990 and 1996, to 2,109 cases. However, the nature of measles presents several challenges to an elimination or eradication campaign. It is highly contagious and requires high immunization coverage rates that are difficult to achieve, even in the most developed countries. The accumulation of susceptible persons over time is considered the most serious impediment to the elimination or eradication of measles. However, experts at WHO, PAHO, and CDC believe that strategies that provide at least two doses of vaccine to each child can overcome this challenge. The timing of immunization also presents special difficulties. Vaccinating infants under 12 months is less effective due to the presence of maternal antibodies and hastens the accumulation of susceptible preschool-aged children.
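The coverage figures quoted above are consistent with the standard herd-immunity threshold from epidemiology, 1 - 1/R0, where R0 is the basic reproduction number (the average number of secondary infections caused by one case in a fully susceptible population). A minimal sketch in Python; the R0 range of 12 to 18 used below is a commonly cited estimate for measles in the epidemiological literature, not a figure taken from this report:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to interrupt transmission."""
    return 1 - 1 / r0

# Commonly cited R0 range for measles (an assumption for illustration,
# not a figure stated in the report)
for r0 in (12, 18):
    print(f"R0 = {r0}: immunity threshold = {herd_immunity_threshold(r0):.1%}")
```

An R0 of 12 to 18 implies thresholds of roughly 92 to 94 percent immunity, which is why the routine coverage targets discussed above sit at 90 to 95 percent or higher.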
The PAHO strategy and experience in the United States demonstrate that vaccinating at 12 to 15 months or switching to a two-dose schedule provides immunity more effectively. However, vaccinating those under 12 months has substantially reduced measles incidence in this group, in which mortality from this disease is the highest. Some experts express concern that use of the PAHO strategy as a model may not work globally or will require modifications to allow for less favorable country conditions. They point out that high immunization coverage and surveillance have been successful in the Americas due to the relatively advanced state of the health, transportation, and communications infrastructure in these countries compared with the infrastructure of the least developed countries. Good surveillance systems allow PAHO countries to calculate the number of susceptible children and target campaigns accordingly. Some experts remain doubtful that such high coverage and good surveillance can be achieved in the least developed countries with much weaker infrastructure. WHO officials agreed that sustaining a measles eradication campaign in the poorest countries will be a challenge. In addition to technical challenges, political commitment in selected industrialized countries and adequate donor support for low-income countries remain uncertain. While measles is a major childhood killer among the poor, it is often perceived as a mild illness, and many industrialized countries do not consider the disease a major public health threat. This perception can inhibit the public and political support for allocating the resources needed for a successful eradication effort. Accordingly, immunization coverage and surveillance systems in many areas, including industrialized countries, are inadequate to interrupt transmission. The measles strains that enter the United States, for example, largely do not originate in less developed countries. 
Most measles strains imported into the United States come from France, Germany, Japan, and Italy, according to CDC. However, according to WHO and CDC officials, support for measles eradication is increasing. For example, the more than 50 countries encompassing WHO’s region for Europe and the former Soviet Union are in the final stages of adopting a goal of regional elimination by 2007, and WHO’s Eastern Mediterranean region has adopted a goal of elimination by 2010. Despite the challenges to measles eradication, WHO and CDC officials believe that a global measles eradication strategy should be pursued based on the burden of the disease and the technical feasibility of eradication. They point out that similar skepticism existed before and during the early years of the smallpox and polio eradication initiatives. Several global meetings on measles, sponsored primarily by WHO, PAHO, UNICEF, and CDC, have been held in recent years to discuss challenges and build consensus on eradication. At the most recent meeting of about 200 public health experts in February 1998, measles was identified as the leading candidate for the next global eradication initiative due to its biological feasibility, high mortality and complications among children, effective interventions, demonstrated feasibility in the Americas, increasing global support, and potential cost benefits. According to USAID, participants also agreed that further study should be undertaken regarding operational feasibility and possible costs to the development of sustainable primary health care systems before a global campaign is launched.

Onchocerciasis, also known as river blindness, is a chronic parasitic disease that causes blindness and severe skin conditions. The clinical manifestations of the disease include formation of nodules under the skin, changes in skin pigmentation, loss of skin elasticity, debilitation, severe itching, visual loss, and blindness. 
A World Bank study that calculated the net benefits of the Onchocerciasis Control Programme in West Africa assumed that people who become blind due to the disease live another 8 years with blindness and die 12 years prematurely, thus indicating that preventing one case of blindness can add 20 years of productive life. Humans are the only known host for the disease. The parasite is transmitted between humans by the bite of blackflies, which breed in streams and rivers. When a fly bites an infected human host, the fly becomes infected with the larvae of Onchocerca volvulus. When the infected fly bites another human, the larvae may develop into adult worms (macrofilariae) in the human, producing offspring, or microfilariae. These microfilariae may in turn be ingested by other blackflies, thus continuing the transmission. A human is infectious to the blackfly only when microfilariae are present; the adult worm is not transmitted. However, the adult worms usually live about 12 to 15 years inside the body and generally keep reproducing microfilariae for much of that time if not treated. Although onchocerciasis is considered nonfatal, it is the second leading cause of infectious blindness and the source of enormously debilitating skin disease. WHO estimates that 120 million people are at risk and that 18 million are infected. Blindness afflicts about 270,000 persons, and about 500,000 suffer visual impairment. Severe itching and dermatitis affect about 6 million. Onchocerciasis is suspected to be endemic in 30 countries of sub-Saharan Africa, in Yemen, and in 6 countries in Latin America. Because the disease is endemic in fertile river valleys, it has had significant socioeconomic impact over the years as residents have abandoned villages with arable land and moved to more arid areas. 
The first onchocerciasis control program in West Africa has resulted in people beginning to resettle in lands that had been deserted for as long as 50 to 100 years, increasing income levels. Twenty-five million hectares have been opened for resettlement and cultivation, an area that can feed a population of about 17 million people. Two specific elimination strategies have been implemented: controlling the vector (blackfly) in endemic areas and treating infected persons with ivermectin. Vector control is accomplished through the use of larvicide in rivers and streams, mostly by helicopter spraying, and aims at interrupting disease transmission. The drug ivermectin kills the microfilariae, thus arresting further development of the disease. It has a very limited effect, if any, on killing the adult worms. Treatment with ivermectin once a year is considered sufficient to prevent blindness. Ivermectin treatment reduces transmission of the parasite but does not appear to halt it. Annual, large-scale treatment will therefore have to continue for a long time. Current predictions based on a simulation model indicate that annual treatment at the current level of coverage may have to continue for about 1-1/2 to 2 decades, although elimination of the disease as a public health problem is likely to occur before the full treatment regimen is complete. A third treatment option, not widely used, is removing the nodules under the skin in which the microfilariae are lodged. Sustainability of community-directed ivermectin distribution systems is a potential concern. Cost estimates assume that community-based programs will be independent within 5 years, but this may be modified as these systems are evaluated. One issue is whether community volunteers will continue to work without compensation. Another unknown is whether people will continue to come for treatment after their condition improves, but WHO officials do not see this as a problem at this time. 
It is also uncertain whether the parasite will develop resistance to ivermectin. A final challenge to eliminating onchocerciasis within estimated costs and time frames is the fact that WHO is still mapping the prevalence of the disease in the area of the African Programme for Onchocerciasis Control, where the population to be treated appears to be greater than originally estimated.

Chagas’ disease is a parasitic disease with both acute and chronic complications. It is caused by a parasite, Trypanosoma cruzi, contained in the feces of reduviid insects. More than 100 species of mammals have been found infected. Normally, humans become infected following the insect’s bite, but the contaminated feces may also enter through the mucous membrane when a child rubs or scratches a bite and then touches his or her eyes or mouth. The parasite may also be transmitted from human to human through transfusions of contaminated blood or through congenital transmission from an infected mother to the fetus. The insect favors poverty conditions, normally living in the cracks of poorly built or decaying housing. The acute phase of Chagas’ disease appears shortly after infection and often has no distinctive symptoms. It can be characterized by inflammation at the site of the infection and flu-like symptoms. If the parasite is introduced into the eye, conjunctivitis and swelling of the eye area develop. A characteristic lesion may also develop, but often the disease goes unnoticed and undiagnosed during this period. However, it is during the early phase of the infection—lasting only a few weeks—that the parasite can be seen in the blood and that the disease may be curable with the drugs nifurtimox or benznidazole. Once the acute phase has passed, the parasite moves into tissue and cannot be treated. About one-third of those infected will develop chronic conditions, especially heart disease. 
Chronic cardiopathy occurs in 27 percent of those infected, chronic digestive lesions in 6 percent, and neurological disorders in 3 percent. Patients with severe chronic disease become progressively sick and ultimately die, usually from heart failure. Prevalence of Chagas’ disease is limited to the Americas. WHO estimates that about 100 million people in 18 countries are at risk in Latin America. The Caribbean region has not reported any cases. Up to 18 million are currently infected, with about 2 million to 3 million of these suffering from chronic complications. Various estimates place the number of infected persons in the United States at up to 100,000, due mostly to immigration. The World Bank has characterized Chagas’ disease as a major public health burden in Latin America.

Control and eventual elimination of Chagas’ disease center on two overall strategies to interrupt transmission of the parasite—vector control and blood bank screening. Vector control includes insecticide spraying, insecticidal paints, fumigant canisters, housing improvement, and health education. The blood screening strategy aims to screen all blood donors in and from endemic countries for antibodies and to strengthen existing health service infrastructure for multiple blood screening. Serological testing is also conducted to treat the disease in its acute phase and for surveillance purposes. Distribution of Chagas’ disease may be divided into two areas: the Southern Cone countries of Argentina, Bolivia, Brazil, Chile, Paraguay, and Uruguay; and the areas of northern South America and Central America. The insects that transmit Chagas’ disease differ in these two areas; this has implications for disease control strategies. In the Southern Cone countries, the insect mainly lives in the cracks of poorly constructed housing and not outside the home. In these countries, the use of insecticides and other vector control measures is reducing infection significantly. 
In northern South America and in Central America, the insect can live in housing and outside in other diverse habitats. Because vector control measures have limited effectiveness, the initial strategy in these countries is to interrupt transmission through blood screening measures. As noted, the vectors carrying the parasite that transmits Chagas’ disease differ between the Southern Cone countries and the endemic areas in the Andes and Central America. Because the vector in the latter areas is less easily controlled, the elimination strategy currently relies on blood screening to interrupt transmission. The Andean and Central American elimination initiatives were launched only last year, and serological testing of donated blood has not yet been undertaken in all countries. Moreover, it is not yet clear that this strategy will eliminate Chagas’ disease as a public health problem because humans will still be vulnerable to being bitten by the vector.

Lymphatic filariasis, a parasitic disease transmitted by mosquitoes, is the world’s second leading cause of permanent and long-term disability. As with onchocerciasis, the infected vector takes blood from a human and passes on the infection. The adult worms, or macrofilariae, settle into the lymphatic system and mature over a period of 3 to 15 months. When fertilized, female adults produce large numbers of larvae known as microfilariae, which invade the blood stream. Mosquitoes can then ingest them when they bite an infected human and transmit the microfilariae to other people, in whom they pass through a larval sequence to become new adults. The vast majority of microfilariae remain in the body as immature forms for 6 months to 2 years, growing up to a third of a millimeter in length and doing immense damage. The adult macrofilariae can grow to several centimeters long, damaging the lymphatic ducts. Humans are the only hosts of the most common forms of filariasis. 
The infection causes a very severe pathology of the lymph system. This can result in elephantiasis, a condition in which one or more limbs become grossly swollen and covered with sores; in hydrocele, a grotesque enlargement of the male scrotum; or in lymphoedema in women, in which their breasts or genitals are grossly swollen. Other internal damage and related infections can also occur, but the effects are often hidden. The disease can have serious social and psychological consequences, including sexual dysfunction and social exclusion. Diagnosis of lymphatic filariasis used to be difficult: blood samples had to be taken between 9:00 p.m. and 3:00 a.m. because the parasite remained in the organs during the day and entered the bloodstream at night. Diagnostic tools have since improved, and now a test of a drop of blood on cardboard can detect the infection from blood taken at any hour because the test detects a specific antigen, not the parasite itself. Another new diagnostic tool detects deoxyribonucleic acid of the parasite in infected mosquitoes or in human blood. WHO estimates that at least 120 million people in 73 endemic countries worldwide are infected with filarial parasites. Of those infected, about 49 percent are in Southeast Asia, 34 percent in Africa, and 16 percent in the western Pacific. There is some, but very little, incidence of the disease in Europe and the Americas. The prevalence of the disease is growing in some endemic areas, due in large part to rapid unplanned urbanization. The mosquitoes carrying this parasite tend to breed in dirty urban water, making this disease more prevalent in dense urban slums.

The strategy for eliminating lymphatic filariasis is to interrupt the transmission between mosquitoes and humans. In the past, the strategy was to control the mosquito population, but this proved difficult, expensive, and ineffective, according to WHO. 
While limited vector control activities may continue, the recent development of treatment options based on drugs that are inexpensive (diethylcarbamazine, or DEC) or donated (ivermectin and albendazole), safe, easily administered, and broadly effective has changed the strategy to mass distribution of medication to entire at-risk populations. The optimal treatment regimens that result in almost complete elimination of microfilaria-stage parasites from the blood (thus blocking transmission by vector mosquitoes) involve two drugs administered concurrently (either albendazole or DEC plus ivermectin) given once yearly over a period of 4 to 6 years. According to WHO, experimental observations in the field indicate that such yearly regimens are effective in interrupting transmission. An alternative treatment is the substitution of regular table salt with DEC-fortified salt for 1 to 2 years. This strategy also decreases blood microfilaria numbers to very low levels and has been shown in large-scale control programs to be effective in interrupting transmission. The treatment programs are largely community based. Techniques for identifying communities in need of treatment include estimating infection rates from existing health records, assessing the presence of hydrocele in adult men, examining mosquito vectors for infection, and evaluating daytime finger-prick blood samples from selected groups. Geographical information systems for mapping public health resources and disease patterns are now available for use in planning and monitoring lymphatic filariasis control programs. National and international funding commitments through 2030 are uncertain. Although there is some possibility that the parasites will develop resistance to the drugs, this is less likely because the drugs are being used in combination and taken only once a year, according to WHO officials. 
Our objectives were to examine (1) the soundness of WHO’s cost and time frame estimates for eradicating or eliminating seven infectious diseases, (2) U.S. spending related to these diseases in fiscal year 1997 and any potential U.S. savings as a result of eradication or elimination, (3) other diseases that may pose a risk to Americans and that could be candidates for eradication, and (4) historical information on U.S. costs and savings from smallpox eradication and whether experts view smallpox eradication as a model for other diseases. To assess the soundness of the WHO’s cost and time frame estimates for the seven diseases, we met with epidemiologists and health economists to understand the key elements of estimates and with cognizant WHO officials to understand the information on which their estimates were based. We also reviewed the criteria that WHO set forth to identify candidates for eradication or elimination and assessed how the diseases fit the criteria. We conducted a search of the medical and scientific literature on these diseases to identify studies and research by other experts on the costs and time frames associated with disease control efforts and other factors relevant to eradication or elimination. We also met with epidemiologists at the PAHO, CDC, and the Carter Center and with epidemiologists, economists, and other experts at the Johns Hopkins University, Emory University, USAID, and Abt Associates (a USAID health project contractor that conducted a cost study for child survival initiatives) to discuss the characteristics of the diseases and the bases for cost and time frame estimates developed by WHO. We used the information to assess whether the data underlying WHO’s estimates were sound. We did not develop independent estimates of the costs and time frames for eradicating or eliminating these diseases nor did we verify the accuracy of the data underlying the estimates. 
However, we adjusted some of the numbers to ensure consistency across diseases, particularly to express all estimates as cumulative totals in 1997 dollars. For dracunculiasis, measles, and Chagas’ disease, no adjustments were necessary because WHO’s estimates had been calculated in 1997 dollars with no annual inflation adjustments. For polio and onchocerciasis, we took out WHO’s inflation adjustments. Because WHO’s leprosy estimate covered 2 years prior to this review, we recalculated for the period 1998-2000. We subtracted $72 million from the lymphatic filariasis estimate for the cost of treating symptoms for infected cases since treatment was not included in the other estimates. To determine past and current U.S. spending on these diseases and any likely savings that may be gained by the United States as a result of reaching these goals, we obtained public and private expenditure data and projections from CDC and USAID, including information on U.S. contributions to WHO. We discussed the incidence of the diseases and their potential threat to the United States. We also spoke with an official of the American Red Cross to determine projected spending for screening donated blood for Chagas’ disease. To identify other diseases that pose threats to the United States and that could be candidates for eradication, we reviewed the medical and scientific literature and consulted experts in epidemiology and international public health at WHO, CDC, and USAID. Finally, we obtained information from CDC on global and U.S. spending for smallpox; adjusted estimated savings to reflect inflation, birth rates, and present value in 1997 dollars; and estimated the annual real rate of return on the U.S. investment in smallpox eradication. We discussed with public health officials and epidemiologists at WHO, CDC, USAID, and the Johns Hopkins University how that undertaking could be applied for ongoing efforts. 
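The "annual real rate of return" mentioned above is the discount rate at which the present value of an investment's cost stream equals the present value of its savings stream, with all flows expressed in constant dollars. A minimal sketch of that rate-solving step by bisection; the cash-flow figures below are hypothetical illustrations, not GAO's smallpox data:

```python
def npv(rate, cashflows):
    """Net present value of real (inflation-adjusted) cash flows,
    where cashflows[t] is the net flow in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def real_rate_of_return(cashflows, lo=0.0, hi=10.0, tol=1e-9):
    """Bisect for the rate where NPV crosses zero. Assumes NPV is
    positive at `lo` and negative at `hi`, as with an up-front cost
    followed by a stream of savings."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: a 32-unit up-front investment returning 26 units of
# savings per year for 20 years, all in constant dollars.
flows = [-32.0] + [26.0] * 20
print(f"annual real rate of return: {real_rate_of_return(flows):.1%}")
```

GAO's actual computation also adjusted for inflation, birth rates, and present value in 1997 dollars, as described above; this sketch shows only how a rate of return is extracted from a cash-flow series.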
We conducted our review from August 1997 to December 1997 in accordance with generally accepted government auditing standards.

The following are GAO’s comments on USAID’s letter dated April 1, 1998.

1. We do not imply global consensus on the eradication of all seven diseases. As we noted in our draft report, the World Health Assembly, which is composed of health ministers from WHO member countries, voted to initiate formal eradication campaigns against dracunculiasis and polio in 1988 and 1991, respectively. The only other disease being discussed for possible eradication is measles, for which we outline the challenges to eradication.

2. We discuss many of the operational challenges facing measles eradication raised by USAID. We have clarified the text to reflect USAID’s concern about injection safety.

3. The basis for our estimates of cost savings to the United States is the current level of U.S. spending on those diseases. It is not based on WHO’s cost estimates for disease eradication and elimination. Thus, the fact that some of the estimates are speculative does not affect the potential U.S. cost savings, only whether or when they might be forthcoming. 
Pursuant to a congressional request, GAO reviewed the World Health Organization's (WHO) efforts to eradicate seven infectious diseases--dracunculiasis, polio, leprosy, measles, onchocerciasis, Chagas' disease, and lymphatic filariasis--worldwide, focusing on: (1) the cost and timeframe estimates developed by WHO for eradicating or eliminating these diseases; (2) U.S. spending related to the seven diseases in fiscal year 1997 and any potential cost savings to the United States as a result of eradication or elimination; (3) other diseases that international health experts believe pose a risk to Americans and could be candidates for eradication; and (4) historical information on U.S. costs and savings from smallpox eradication and whether experts view smallpox eradication as a model for other diseases. 
GAO noted that: (1) the soundness of WHO's cost and timeframe estimates for eradicating or eliminating the seven diseases varied for each disease; (2) cost and timeframe estimates for dracunculiasis, polio, and leprosy were the most sound because campaigns against them have been under way for several years and the estimates are largely based on firm data about target populations and intervention costs from ongoing initiatives; (3) for the other diseases, WHO's estimates are more speculative because data underlying the cost and timeframe estimates are incomplete or unavailable; (4) WHO officials acknowledge that the costs and timeframes provided to the House Committee on International Relations are not exact and that they must continually be refined as new information becomes available; (5) the United States spent about $391 million in 1997 on programs to combat these diseases; (6) potential savings to the United States if eradication or elimination of these diseases were achieved could be substantial; (7) most of the savings would result from eliminating the need to vaccinate U.S. 
children against polio and measles; (8) the experts GAO interviewed and its review of the literature identified several other diseases that pose health threats to the United States and that meet the scientific criteria for eradication used by health experts; (9) four diseases were frequently mentioned: rubella, mumps, hepatitis B, and Hemophilus influenzae type B; (10) WHO officials stated that while it is technically possible to eradicate these diseases with existing vaccines, it is unlikely that other diseases will be considered for eradication before achieving success with currently targeted diseases; (11) using Centers for Disease Control and Prevention data, GAO estimated that the United States has saved almost $17 billion to date from the eradication of smallpox in 1977; (12) the savings are due to the cessation of vaccinations and related expenditures such as surveillance, treatment, and loss of productivity; (13) experts agree that several lessons can be learned from the smallpox effort, but the primary lesson is that a disease can actually be eradicated; and (14) however, they also suggested that smallpox has limitations as a model for other diseases because it had characteristics that were uniquely amenable to eradication.
Manufacturing generally involves the mechanical, physical, or chemical transformation of materials, substances, or components into new products, including the production of food, automobiles, and clothing, among many other things. The materials that manufacturers transform into new products include raw materials from agricultural, forestry, fishing, mining, or quarrying businesses as well as component items produced by other manufacturers. Manufacturing also includes the assembly of components into manufactured products. Businesses engaged in manufacturing often are referred to as plants, factories, or mills, and most use power-driven machines and materials-handling equipment. However, products that are made by hand, or in a worker’s home, and businesses that both make and sell products at the same location, such as bakeries, also qualify as manufacturers. In its narrowest sense, manufacturing consists of “factory floor” activities that contribute directly to the production of goods, such as cutting, grinding, and assembly. More broadly, manufacturing can include a range of activities that both precede and follow factory floor activities. Some activities, such as product design, process improvements, and quality management, are more specific to the manufacturing enterprise. Other activities are common to many types of businesses, such as the effective use of information technology, strategic planning, and administrative operations. Although no standard definition for small manufacturing businesses exists, two systems that are widely used to classify businesses by type and size can be used to define small businesses engaged in manufacturing (referred to in this report as “small manufacturers”). Specifically, the North American Industry Classification System (NAICS), which categorizes businesses according to the principal activity in which they engage, has three general classifications for businesses engaged in manufacturing. 
In addition, the Small Business Administration (SBA) has size standards that define small businesses on the basis of average annual revenue or number of employees (typically, 500 or fewer). For this report, we define small manufacturers as those businesses that have a NAICS manufacturing classification and meet SBA’s criteria for small businesses. Small manufacturers are an important component of the manufacturing sector. These businesses numbered over 300,000 in 2004 and accounted for almost 45 percent of all U.S. manufacturing jobs. Many small manufacturers also export their goods directly or indirectly as suppliers or contractors for larger companies. In addition, small manufacturers are a significant source of innovation in the U.S. economy. On average, small manufacturers produce patents that are more frequently cited as important contributors to new patents than do large manufacturers. Over the past 14 years, studies have reported that small manufacturers possess many strengths due to their size, such as the ability to respond quickly to market changes. On the other hand, small manufacturers, like small businesses in general, lack the staff, resources, and expertise of their larger competitors and consequently face numerous challenges, including (1) finding sources of operating capital and investment funds (financial assistance); (2) bringing new products to market or finding new uses for existing technology (technology development and deployment assistance); (3) becoming familiar with new technologies, production techniques, and business management practices (technology, business, and management assistance); (4) competing in overseas markets (export assistance); and (5) obtaining skilled employees (worker training assistance). Federal programs offer a wide range of services to help businesses of all sizes and types address these challenges. For example, federal programs may offer financial services, such as grants, loans, loan guarantees, or insurance. 
These financial services may be for general business purposes, such as providing working capital or acquiring new equipment, or targeted to a specific need, such as covering the expenses necessary to export goods. Programs that offer nonfinancial services may include those that help businesses acquire the various types of specialized knowledge and skills they need to begin, operate, and expand their businesses; commercialize the results of their research projects; export their goods; or appropriately train their workforce. Federal programs also may provide financial and nonfinancial services using federal employees or through agreements with state governments, private entities, and nonprofit organizations that act on behalf of the federal government. Some federal programs are targeted to the needs of businesses of a specific size, regardless of type, such as the assistance SBA offers to small businesses. In other cases, federal programs target services to any size business but of a specific type, such as the assistance that the Farm Service Agency offers to food processors regardless of their size. Assistance also may be targeted to businesses adversely affected by trade policies or local disasters. Federal attention to the needs of manufacturers increased following the economic recession that began in 2001 when manufacturing job losses were substantial and recovery in the manufacturing sector lagged behind other sectors. The extent to which agencies track program funding, the number of businesses they assist, and the type of businesses they assist varies. Agencies tend to track the financial services they provide in the form of grants, loans, loan guarantees, letters of credit, or insurance, in terms of both the value and the number of financial services. In addition, agencies may track financial data according to the source of their funds. 
For example, agencies may track the funds by their annual appropriations, the obligations to which they dedicate the appropriated funds, or the amount of dollars they expended in financial assistance. Agencies less often track the funding for and participation in nonfinancial service programs. Such services may be offered in single- or multipurpose “service centers” that offer assistance on a range of issues, and may involve the specialized expertise of staff from multiple agencies. Service centers may track the number of individuals or firms they serve but not the specific type of service provided to each business. Moreover, agencies may not gather NAICS codes or other information on the type of businesses they serve. Because of these differences, agencies may not track funding and participation data in a consistent manner. Federal agencies also may form interagency groups to coordinate the operations of their programs and help ensure that resources are used efficiently. These interagency efforts may focus on a specific program; for example, multiple federal agencies share responsibility for administering the Small Business Innovation Research Program and have created an interagency group to help ensure that the program is being implemented consistently across all of the agencies. Similarly, agencies may form an interagency effort to address specific activities, such as ensuring that small businesses have access to federal procurement opportunities. In addition, multiple agencies may be tasked by the President to focus their efforts on a specific topic of relevance to the business community. These agencies may create interagency groups consisting of representatives from multiple federal agencies to better coordinate their individual programs and crosscutting activities. 
For example, Commerce created an interagency group to implement its 2003 Manufacturing Initiative, which called for a comprehensive review of issues affecting manufacturers’ competitiveness and a strategy to foster competition. Interagency groups that are set up to coordinate task-specific efforts may disband upon completion of the assigned task. Of the 254 federal programs we identified that provide financial or nonfinancial services or both to support the U.S. business sector, 5 programs provide services specifically to small businesses engaged in manufacturing, while an additional 15 programs target manufacturers, regardless of their size. In addition, we identified 127 programs that offer financial or nonfinancial assistance or both to small businesses, regardless of type, and 107 other federal programs designed to support all types of businesses, regardless of their size or type. Appendixes II through XX provide detailed information on all 254 programs, by agency. We identified 5 federal programs that specifically provide services to support small manufacturers. Each of the 5 programs offers various types of nonfinancial business, management, and technical assistance that are specifically related to manufacturing operations, processes, and problems. Only 1 of the 5 programs offered financial assistance in addition to its nonfinancial services. The types of services provided by the 5 programs were generally aligned with the mission of the administering agency and included the following: The Outreach to Small and Very Small Plants program is administered by Agriculture’s Food Safety and Inspection Service (FSIS), which regulates manufacturers of meat, poultry, and egg products of all sizes, and helps small meat and poultry processors comply with food safety regulations. FSIS delivers information through partnerships with colleges, universities, and other Agriculture agencies. 
Its services to small manufacturers include informational materials about regulatory compliance; referrals to other sources of information; funding for university workshops; and training materials such as videos. FSIS also offers education sessions to small and very small plant owners and operators on how to improve their food safety and food defense systems, and provides guidance regarding federal inspection of their products to small and very small plant owners who want to start operations. MilTech, administered in the Office of the Secretary, is a partnership between Defense’s TechLink Program and the Montana Manufacturing Extension Partnership Center. MilTech provides companies with engineering, manufacturing, and business development assistance to help accelerate the transition of new technology to the U.S. warfighter, lower the cost and cycle time of technology acquisition, and help Defense more fully benefit from its small business research and development investment. The Defense Small Business Technology and Readiness Resources Program (DSTARR) is administered by the Navy. DSTARR provides assessments of participating small manufacturers’ operational processes at their places of business, and develops detailed continuous improvement plans to help participants implement industry best practices, gain knowledge, and improve operations. In addition, DSTARR offers online access to information and training, and access to technical experts who provide both on-site and remote technical assistance and training in manufacturing and business processes. The goal of DSTARR is to develop a national network of small manufacturers and specialized information technology companies to meet Defense’s needs. Prior to December 2006, DSTARR was known as the Manufacturing Technical Assistance Partnership Program, The Next Generation. The Manufacturing Technical Assistance Production Program (MTAPP) is administered by the Air Force. 
MTAPP provides technical and managerial assistance to enhance the capabilities of small manufacturers and increase their ability to deliver high-quality products to the Air Force, as well as to Defense and its major contractors. Each participant receives an in-depth assessment of its operations and a continuous improvement plan. In addition, MTAPP provides hands-on assistance with quality assurance, improving the efficiency of manufacturing operations, sales and marketing, information technology, and business planning. The Technology Insertion, Demonstration, and Evaluation (TIDE) Program is also administered by the Air Force and is a federally funded research and development center that operates through Carnegie Mellon University with funding from Defense. TIDE encourages and assists small manufacturers––specifically, those that supply goods and services important to national defense––to adopt commercially available software and information technology. The program demonstrates to these firms the advantages of using advanced software and information technology in their operations and adapts existing commercial software and information technology for small manufacturers’ use. TIDE also offers workshops, conferences, and courses that provide some of the training small manufacturers need to successfully adopt new technology. Specifically, TIDE has addressed product data management, electronic data distribution, data security, flexible scheduling of manufacturing operations, and computer simulation of manufacturing processes. Only the Defense agencies that administer 3 of the 5 programs that target small manufacturers tracked detailed information on annual funding and participation levels for their programs. As shown in table 1, these 3 programs provided $3.8 million and served 95 small manufacturers on average each year from fiscal years 2004 through 2006. For more information on these programs, see appendixes II and IV. 
Agencies within Agriculture, Commerce, Defense, Energy, Health and Human Services, Housing and Urban Development, and Labor administered 15 programs that provided services specifically to manufacturers, regardless of their size. As with federal programs designed to support small manufacturers, the programs that target the needs of manufacturers in general offer services aligned with the mission of the administering agency. We identified 9 programs that offer only nonfinancial assistance and 6 programs that offer both financial and nonfinancial assistance. These programs include the following: The Domestic Food Distribution Procurements and the International Food Aid Procurements, administered by Agriculture’s Farm Service Agency, provide financial services, in the form of direct purchases, to processors of foods used for domestic food assistance, export, and foreign aid programs. Specifically, dairy, vegetable oil, and other processed commodities are purchased for various domestic and international food aid programs from food manufacturers, regardless of their size. The Hollings Manufacturing Extension Partnership (MEP), administered by Commerce’s National Institute of Standards and Technology, supports a nationwide network of not-for-profit centers in nearly 350 locations. The centers, funded by federal, state, local, and private resources, provide manufacturers with access to the expertise of knowledgeable manufacturing and business specialists all over the country. Each center works directly with area manufacturers to provide expertise and services tailored to their most critical needs, ranging from process improvements and worker training to business practices and applications of information technology. Solutions are offered through a combination of direct assistance from center staff and outside consultants. According to an agency official, 92 percent of the manufacturing businesses that the program serves are small manufacturing businesses. 
The Trade Adjustment Assistance (TAA) for Firms Program, administered by Commerce’s Economic Development Administration, offers only nonfinancial services to manufacturers that have experienced declines in sales or employment due to competition from imports in the preceding 2 years. TAA for Firms is a cost-sharing program that provides funds to pay one-half the cost of consultants or industry-specific experts for projects that improve a manufacturer’s competitiveness. The Textiles and Apparel Program, administered by Commerce’s International Trade Administration, offers nonfinancial export assistance to textile manufacturers, such as oversight of strategies and programs to improve the domestic and international competitiveness of the U.S. fiber, textile, and apparel industries as well as industries that manufacture a wide range of consumer products. Among other things, the program performs research and analysis, compiles industry data, and promotes U.S. trade events for a whole spectrum of textiles and apparel. The Manufacturing Technology (ManTech), the Next Generation Manufacturing Technology Initiative, and the Best Manufacturing Practices Programs, administered by the Office of the Secretary of Defense and the Navy, provide nonfinancial technical and business assistance to help large and small manufacturers, including ones that supply parts and equipment to Defense. These 3 programs help firms modernize their operations, apply information technology, or network with other businesses. In addition, ManTech provides financial assistance to manufacturers. The Industrial Technologies Program, administered by Energy, works with manufacturers to improve industrial energy efficiency and environmental performance. The program, which offers both financial and nonfinancial assistance, invests in high-risk, high-value research and development to reduce industrial energy use while stimulating productivity and growth. 
The Manufacturers’ Assistance, Investigational New Drug Application, and Prescription Drug User Fee Act and Reductions for Small Business Programs, administered by Health and Human Services’ Food and Drug Administration, offer nonfinancial services, such as training and information, to industry and trade associations on the policies and procedures relevant to those products that are regulated by the agency, such as vaccines. The Research Program for the Manufacturing Sector, administered by Health and Human Services’ National Institute for Occupational Safety and Health, offers nonfinancial services by partnering with manufacturers to develop practices and products for the workplace that can help prevent occupational diseases and injuries. The Partnership for Advancing Technology in Housing Initiative, administered by Housing and Urban Development, is a public/private partnership that brings together key federal agencies with leaders of the home building, product manufacturing, insurance, and financial industries to develop and deploy innovative building technologies for the next generation of housing. The goal of this initiative is to identify techniques for building more affordable, durable, disaster-resistant, safe, and energy-efficient housing. Dream It. Do It, a campaign launched by the Manufacturing Institute of the National Association of Manufacturers that is partially funded by Labor, provides nonfinancial assistance to develop tools and partnerships to help inform young people, their parents, and educators of career opportunities in advanced manufacturing. The initiative develops tools and partnerships among employers, training providers, and local Workforce Investment Boards in Kansas City and Washington State as well as in parts of Virginia, Ohio, Indiana, and the Dallas-Fort Worth metropolitan area. 
Only 7 of the 15 programs that we identified that target manufacturers, regardless of their size, had funding or participation data or both for fiscal years 2004 through 2006. This information is provided in table 2. Because not all of these programs gather data on the size of the manufacturing businesses they serve, we could not determine the extent to which small manufacturers avail themselves of the services that each of these programs offer. For more information on these programs, see appendixes II, III, IV, VI, VII, IX, and XI. We identified 127 federal programs administered by 18 agencies that target small businesses regardless of type. Five agencies account for over one-half of these small business programs: SBA has 35 programs, Veterans Affairs has 10, Defense has 9, and Health and Human Services and Transportation each have 8. Of the 127 programs, 7 offer only financial services, such as loans or loan guarantees; 73 offer only nonfinancial services, such as technical, business, and management assistance; 46 offer both financial and nonfinancial services; and 1 did not specify the type of services it offered. For example, of the 35 programs administered by SBA, 16 offer both financial and nonfinancial services, and 19 offer only nonfinancial services. In addition to administering these programs, SBA helps coordinate and manage two multiagency programs: the Small and Disadvantaged Business Utilization program and the Small Business Innovation Research program. Fourteen agencies included in our review have an Office of Small and Disadvantaged Business Utilization; these offices conduct outreach and provide consulting or other nonfinancial services to help small socially or economically disadvantaged businesses more effectively compete for federal contracting opportunities. Similarly, 11 agencies included in our review administer Small Business Innovation Research (SBIR) programs. SBIR provides funding for innovative research projects. 
In 2004 and 2005, the most recent data available, almost 20 percent of SBIR awards, valued at about $360 million, funded manufacturing-related research. For more information on all 127 programs, see appendixes II through XIX. Of the 18 administering agencies, only 14 collected data on the types of businesses that their small business programs served or the funding devoted to providing services through these programs. Table 3 shows the number of small business programs administered by each of the 18 agencies and the funding and participation data for the 14 agencies that tracked these data. We identified an additional 107 programs administered by 15 agencies included in our review that offer financial or nonfinancial services or both to businesses, regardless of the size or type of business. Over 60 percent of these programs are administered by agencies within Agriculture, Commerce, Defense, and Health and Human Services. As with the manufacturing-related and small business programs previously described, the services these general business programs offer are aligned with the mission of the administering agency. Specifically, we found that 7 programs in 3 agencies offer only financial services to businesses, 66 programs in 13 agencies offer only nonfinancial services, and 32 programs provide both financial and nonfinancial services. Technical, business, or management assistance was the most commonly offered nonfinancial service, and worker training was the least commonly offered service. Information on the services offered by these 107 programs by each of the 15 administering agencies is shown in table 4. For more details on these 107 programs, see appendixes II through XX. 
Of the 20 federal interagency efforts we identified that address the concerns of the business sector, 4 specifically focus on the challenges faced by small manufacturers, 2 focus on issues faced by manufacturers in general, and the remaining 14 focus on issues of concern to small businesses or businesses in general. Tables 5 and 6 provide detailed information on each of the interagency efforts that we identified on the basis of the primary focus of the effort. Of the remaining 14 interagency efforts that we identified, 5 focus on the concerns of small businesses and 9 focus on the concerns of all businesses in general, both of which may address some issues that are also relevant to small businesses engaged in manufacturing. For example, these efforts focus on such issues as ensuring access to federal contracting opportunities, expanding services available to small businesses through networks of service centers, streamlining electronic access to federal business opportunities, and expanding export opportunities. For more details on these 14 interagency efforts, see appendix XXI. We sent a draft of this report to the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, Transportation, and Veterans Affairs, as well as the Environmental Protection Agency, Export-Import Bank, National Aeronautics and Space Administration, National Science Foundation, Small Business Administration, Appalachian Regional Commission, and National Technology Transfer Center. All of the agencies except for the Appalachian Regional Commission provided technical comments that we have incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days after the date of this letter. 
At that time, copies of this report will be sent to interested congressional committees; the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, Transportation, and Veterans Affairs; the Administrators of the Environmental Protection Agency, National Aeronautics and Space Administration, and Small Business Administration; the Director of the National Science Foundation; the Chairman and President of the Export-Import Bank; the Executive Director of the Appalachian Regional Commission; and the Vice-President of the National Technology Transfer Center. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about matters contained in this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XXIII. We identified (1) those programs that provide services to support manufacturing by U.S. small businesses and, for fiscal years 2004 through 2006, the services and funds these programs provided and their levels of participation and (2) current federal interagency efforts that support manufacturing by U.S. small businesses. In addition, we compiled a list of studies that focus on federal programs that support small businesses engaged in manufacturing. 
To identify agencies and programs that provide services to support manufacturing by small businesses, we obtained documentation and interviewed officials at 17 executive departments, independent agencies, and a government corporation, including the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, Transportation, and Veterans Affairs; the Environmental Protection Agency; Export-Import Bank; National Aeronautics and Space Administration; National Science Foundation; and Small Business Administration (SBA). Throughout this report we refer collectively to these executive departments, independent agencies, and the government corporation as “agencies”. We selected these 17 agencies because each had participated in efforts by Commerce to foster, serve, and promote the nation’s economic development and technological advancement and in the efforts of SBA to promote small business development and entrepreneurship. We also searched the agencies’ Web sites and the Catalog of Federal Domestic Assistance and interviewed officials representing state governments and trade associations. Through these efforts, we also identified relevant federal efforts at the National Technology Transfer Center and the Appalachian Regional Commission, bringing the total to 19 federal entities that administer programs offering services to support businesses, including small manufacturers. We included assistance provided by federal entities and through contracts or other agreements with state governments as well as private and nonprofit entities that operate on the federal government’s behalf. We focused our work on federal programs that support businesses by addressing challenges in the following five areas: obtaining capital; developing or deploying new technologies; applying improved technology, business, and management practices; exporting goods; and training workers. 
Because agencies may not track funding and participation in a consistent manner, the information they reported to us is an estimate of the minimum funding provided and businesses served. We excluded federal research programs that focus on advancing manufacturing-related knowledge and tools but that do not offer operational services or financing to manufacturers. Because no comprehensive list of federal programs that provide direct services to manufacturers or other businesses exists, we cannot guarantee that we identified all such programs. However, we attempted to verify the accuracy and completeness of the information we gathered with relevant agency officials. Officials reviewed and verified information on over 70 percent of the programs that we identified. The information we included that was not reviewed by agency officials was obtained from agency documents and information contained in agency Web sites. The program descriptions included in this report, including the appendix descriptions, were derived from agency materials and do not reflect independent GAO legal analysis of any relevant program statutes, regulation, or administrative guidance. To identify interagency efforts to support manufacturing by small businesses, we searched the agencies’ Web sites; obtained documentation; and interviewed officials at federal agencies and organizations that represent state governments and trade associations. We included both ongoing interagency efforts that coordinate the activities of programs that operate in multiple agencies and task-specific interagency efforts that may disband upon completion of an assigned task. We attempted to verify information on these interagency efforts with relevant agency officials and reviewed available documentation on the mission, goals, and accomplishments of each effort. We consulted GAO technical experts and determined that the information and data were sufficiently reliable for this report. 
We conducted our work from May 2006 to April 2007 in accordance with generally accepted government auditing standards. To identify studies focused on federal programs that support domestic manufacturing by small businesses, and other relevant studies, we identified the following criteria for including a study in this report: (1) only those studies about federal programs—specifically, those that target small businesses engaged in manufacturing; any small businesses, regardless of type; or any businesses engaged in manufacturing, regardless of size; (2) only programs administered by the following agencies and departments: the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Labor, and Transportation and the Appalachian Regional Commission, Environmental Protection Agency, Export-Import Bank, National Aeronautics and Space Administration, National Science Foundation, and SBA; (3) only those studies that were published after October 1, 2000; and (4) only those studies that included original research. We searched the following databases in addition to GAO’s reports database: ProQuest, Nexis.com, EconLit, Tax & Accounting, WorldCat, DIALOG, Sociological Abstracts, Cited References, Expanded Academic, the Congressional Budget Office, the Congressional Research Service, the Defense Technical Information Center, the Inspectors General’s offices at each of the 16 agencies, the National Academy of Public Administration, the National Academies Press, PolicyFile, and the RAND Corporation. We used such search terms as manufacture(s), manufacturing, manufacturer(s), industry, small business(es), federal, and each of the agency names. We found 66 studies that fit our criteria and completed a fatal flaw review for the 23 studies that were not completed by GAO to ensure that each was conducted with reasonable methodological soundness. 
Based on this review, we found that 4 of these 23 studies were outside of our scope or without a sound methodology. We combined the 19 studies that met our criteria with 43 studies completed by GAO and then organized these 62 studies, in a bibliographic format, by the agency that administers the programs they examine.
Appendix II: Department of Agriculture – Programs that Offer Services to Small Manufacturers and Types of Services
The Small Business Innovation Research (SBIR) program makes competitively awarded grants to qualified small businesses for the purpose of supporting high quality research proposals containing advanced concepts related to important scientific problems and opportunities in agriculture that could lead to significant public benefit if the research is successful. The objectives of the SBIR program are to stimulate technological innovations in the private sector, strengthen the role of small businesses in meeting federal research and development needs, increase private sector commercialization of innovations derived from Agriculture-supported research and development efforts, and foster and encourage participation by women-owned and socially and economically disadvantaged small business firms in technological innovations. Eight Agriculture agencies participate in SBIR, including the Cooperative State Research, Education and Extension Service, Agricultural Research Service, and the Forest Service. The Domestic Food Aid Procurements are purchases of dairy and processed commodities for domestic food assistance programs, administered by Agriculture’s Food and Nutrition Service, such as the National School Lunch Program, the Commodity Supplemental Food Program, the Emergency Food Assistance Program, and the Food Distribution Programs on Indian Reservations. The Domestic Procurement Division also purchases butter, cheese, and nonfat dry milk at announced prices under the Milk Price Support Program administered by Commodity Operations. 
International Food Aid Procurements are purchases of processed vegetable oil and other commodities that are produced and manufactured within the United States. Commodities are shipped for overseas donation through various humanitarian feeding programs administered by the United States Agency for International Development and Agriculture’s Foreign Agricultural Service. The Small and Very Small Plant Outreach program offers a central source for small and very small food processing plants to obtain information, technical assistance, and training to comply with food safety regulations and promote food safety. Since February 2006, an interagency council has coordinated the outreach efforts across Agriculture agencies. Through the program Agriculture’s Food Safety and Inspection Service (FSIS) delivers information through partnerships with colleges, universities, and other Agriculture agencies. Its services to small manufacturers include informational materials about regulatory compliance, referrals to other sources of information, funding for university workshops, and training materials such as videos. FSIS also offers education sessions to small and very small plant owners and operators on how to improve their food safety and food defense systems, and provides guidance regarding federal inspection of their products to small and very small plant owners who want to start operations. Agriculture’s Facility Guarantee Program is designed to expand sales of U.S. agricultural products to emerging markets where inadequate storage, processing, or handling capacity limits trade potential. Export sales of U.S. equipment or expertise to improve ports, loading/unloading capacity, refrigerated storage, warehouse and distribution systems, and other related facilities may qualify for facility guarantees, as long as these improvements are expected to increase opportunities for U.S. agricultural exports. 
The program provides payment guarantees to finance commercial exports of U.S. manufactured goods and services that will be used to improve agriculture-related facilities. Under the program, Agriculture’s Commodity Credit Corporation (CCC) guarantees payments due from approved foreign banks to exporters or financial institutions in the United States. Typically, a guarantee covers 95 percent of principal and a portion of interest. The financing must be obtained through normal commercial sources. Agriculture’s Foreign Agricultural Service administers this program on behalf of the CCC. The Market Access Program (MAP) uses funds from Agriculture’s Commodity Credit Corporation (CCC) to help create, expand, and maintain foreign markets for U.S. agricultural products. MAP targets its partnerships to non-profit U.S. agricultural trade associations, U.S. agricultural cooperatives, non-profit state-regional trade groups, and small U.S. businesses. The MAP partner and CCC share the costs of approved overseas marketing and promotional activities such as consumer promotions, market research, trade shows, and trade servicing. The Foreign Market Development program, also known as the “Cooperator Program,” uses funds from the Commodity Credit Corporation (CCC) to create, expand, and maintain long-term export markets for U.S. agricultural products. Through the Cooperator program, CCC enters into trade promotion partnerships with U.S. agricultural producers and processors, who are represented by nonprofit commodity or trade associations. Under this partnership, Agriculture and the Cooperators pool their technical and financial resources to conduct approved overseas market development activities. The Cooperator Program is administered by Agriculture’s Foreign Agricultural Service. The Emerging Markets Program is a market access program that provides funding for technical assistance activities intended to promote exports of U.S. 
agricultural commodities and products to emerging markets in all geographic regions, consistent with U.S. foreign policy. The program specifically targets U.S. agricultural or agribusiness organizations, universities, state departments of agriculture, Agriculture agencies, and for-profit entities. Many types of technical assistance activities that promote markets for U.S. agricultural products may be eligible for funding, including feasibility studies, market research, sectoral assessments, orientation visits, specialized training, and business workshops. Export credit guarantee programs help provide commercial financing of U.S. agricultural exports. Agriculture’s Commodity Credit Corporation (CCC) administers these programs, which assist U.S. exporters of agricultural products with exports to countries where credit is necessary to maintain or increase U.S. sales, but where financing may not be available without CCC guarantees. The Dairy Export Incentive Program helps exporters of U.S. dairy products meet prevailing world prices for targeted dairy products and destinations. Under the program, the U.S. Department of Agriculture pays cash to exporters as bonuses, allowing them to sell certain U.S. dairy products at prices lower than the exporter’s costs of acquiring them. The major objective of the program is to develop export markets for dairy products where U.S. products are not competitive because of the presence of subsidized products from other countries. The Export Enhancement Program is designed to help U.S. farm products meet competition from subsidizing countries, especially the European Union. Under the program, Agriculture pays cash to exporters as bonuses, allowing them to sell U.S. agricultural products in targeted countries at prices below the exporter’s costs of acquiring them. The major objectives are to expand U.S. agricultural exports and to challenge unfair trade practices. U.S. 
Exporter Assistance offers on-line access to Foreign Agricultural Service (FAS) resources, products, and services that can help companies explore the potential for international sales. Agriculture-FAS's Exporter Assistance benefits primarily small and medium-sized companies. The Supplier Credit Guarantee Program (SCGP) was designed to make it easier for exporters to sell U.S. food products overseas by insuring short-term, open account financing. SCGP was active until late 2005, but has not been active since. Under the security of the SCGP, U.S. exporters could become more competitive by extending longer credit terms or increasing the amount of credit available to foreign buyers without increasing financial risk. SCGP targeted U.S. exporters of agricultural products, with an emphasis on high-value products and market potential. The small business programs administered by Agriculture's Office of Small and Disadvantaged Business Utilization (OSDBU) foster the use of small and small disadvantaged businesses as federal contractors. OSDBU's goal is to provide as much information, guidance, and technical assistance as possible to assist the small business community in increasing its competitiveness through increased participation in Agriculture's procurement and program activities. The Customer Outreach Services administered by Agriculture's Office of Small and Disadvantaged Business Utilization help foster participation by small businesses in Agriculture's procurement and program activities. Specifically, the program identifies and eliminates barriers that prevent or restrict access to Agriculture procurements, educates small businesses, and conducts monthly vendor outreach.
The Renewable Energy Systems and Energy Efficiency Improvements Program funds grants, direct loans, and loan guarantees to agricultural producers and rural small businesses that can demonstrate financial need to purchase renewable energy systems and make energy efficiency improvements. To be eligible for grants, applicants must demonstrate financial need. Projects must be for the purchase of a renewable energy system or to make energy efficiency improvements. Eligible renewable energy projects include systems that generate energy from wind, solar, biomass, or geothermal sources or that produce hydrogen derived from biomass or water using a renewable energy source. Renewable energy projects can include the generation of electricity, heat, fuels, or hydrogen. Energy efficiency projects typically involve installing or upgrading equipment that results in a significant reduction in energy use from current operations. The purpose of the Rural Business Opportunity Grant program is to promote sustainable economic development in rural communities with exceptional needs by making grants to pay the costs of providing economic planning for rural communities, technical assistance for rural businesses, or training for rural entrepreneurs or economic development officials. Eligible applicants include public bodies, nonprofit corporations, Indian tribes, or cooperatives with members who are primarily rural residents. Applicants must be able to show that the funding will result in economic development of a rural area. In addition, applicants must include a basis for determining the success or failure of the project and assessing its impact. The Rural Business Enterprise Grants program awards grants to public bodies, private nonprofit corporations, and federally-recognized Indian tribes to finance and facilitate development of small and emerging private businesses located in rural areas.
Eligible small and emerging businesses must have fewer than 50 new employees and less than $1 million in gross annual revenues. Funds may be used to finance or develop small and emerging businesses. Eligible uses include technical assistance such as marketing and feasibility studies, business plans, and training; purchases or leases of machinery and equipment; the creation of revolving loan funds that small and emerging businesses may use to purchase equipment or real estate; and the provision of working capital or funds to construct business incubators for small and emerging businesses. The Business and Industry Guaranteed Loan program helps create jobs and stimulates rural economies by providing financial backing for rural businesses. This program provides guarantees of up to 80 percent of loans made by commercial lenders. Loan proceeds may be used for working capital, machinery and equipment, buildings and real estate, and certain types of debt refinancing. The primary purpose is to create and maintain employment and improve the economic climate in rural communities. Authorized lenders include federal or state chartered banks, credit unions, insurance companies, savings and loan associations, Farm Credit Banks, the National Rural Utilities Finance Corporation, and other lenders approved by Business and Cooperative Programs. A borrower may be a cooperative organization, corporation, partnership, or other legal entity organized and operated on a profit or nonprofit basis; an Indian tribe on a federal or state reservation or other federally recognized tribal group; a public body; or an individual. Individual borrowers must be U.S. citizens or legal residents. Corporations or other nonpublic borrowers must be at least 51 percent owned by persons who are either U.S. citizens or legal residents. Business and Industry loans are normally available in rural areas.
Value Added Producer Grants may be used for planning activities and for working capital for marketing value-added agricultural products and for farm-based renewable energy. Eligible applicants are independent producers, farmer and rancher cooperatives, agricultural producer groups, and majority-controlled producer-based business ventures. The Intermediary Relending Program is designed to alleviate poverty and increase economic activity and employment in rural communities, especially disadvantaged and remote communities, through financing targeted primarily toward smaller and emerging businesses, in partnership with other public and private resources, and in accordance with state and regional strategies based on identified community needs. This purpose is achieved through loans made to intermediaries that establish programs for the purpose of providing loans to ultimate recipients for business facilities and community development in a rural area. The program targets small businesses in rural areas. Funds from the Rural Economic Development Loans and Grants program must be used exclusively to promote rural economic development and/or job creation projects, including but not limited to project feasibility studies, start-up costs, business incubator projects, and other reasonable expenses for the purpose of fostering rural economic development. The Office of Exporter Services is responsible for counseling exporters of all sizes, conducting export control seminars, and drafting and publishing changes to the Export Administration Regulations. It is also responsible for compliance actions related to Special Comprehensive Licenses and the development of export management systems. In addition, the office processes license applications and commodity classifications.
The Bureau of Industry and Security's professional counseling staff leads a series of increasingly detailed seminars that provide an in-depth examination of the Export Administration Regulations and inform exporters how to comply with U.S. export control requirements on commercial goods. A web site helps make small and medium-sized businesses aware of the wide range of federal resources available to bolster their competitiveness in world markets. The Bureau of Industry and Security and partner agencies offer a variety of innovative programs to assist such firms. The web site provides a brief description of and links to various programs, many of which are defense-related. The Defense Advocacy Program helps companies succeed in today's highly competitive global defense market. The program assists U.S. companies of all sizes. Trade and industry analysts (1) support U.S. defense companies' products and services in international procurement competitions, (2) identify and disseminate information on export market opportunities, (3) provide market intelligence and business counseling, and (4) generate high-level, government-to-government advocacy on behalf of U.S. firms competing for international defense projects. The purpose of the Defense Priorities and Allocations System Program is to (1) assure the timely availability of industrial resources to meet current national defense and emergency preparedness program requirements and (2) provide an operating system to support rapid industrial response in a national emergency. The Bureau of Industry and Security conducts industry analyses to assess the capability of the U.S. industrial base to support national defense. The Office of Technology Evaluation, an office within the Bureau, uses industry-specific surveys to provide essential employment, financial, production, research and development, and other data that are unavailable from any other source.
The final reports provide findings and recommendations for government policy-makers and industry leaders. These studies are conducted in cooperation with experts from the private sector and other U.S. government agencies. The goal is to enable the private sector and government agencies to monitor trends, benchmark industry performance, and raise awareness of diminishing manufacturing capabilities. Customers for these reports include the Armed Services, Congress, and industry associations. The Office of National Security and Technology Transfer Controls (NSTTC) and the Office of Nonproliferation and Treaty Compliance (NPTC) are responsible for issues related to export and reexport controls. They implement the Export Administration Regulations to control the spread of commodities, technologies, and software that have both civilian and defense uses. The offices are responsible for policy actions, export licenses, commodity classifications, license determinations, advisory opinions for affected commodities, and interagency commodity jurisdiction assessments. Their missions include interacting with businesses of all sizes to ensure compliance with U.S. export regulations and supporting BIS outreach activities. The Bureau of Industry and Security (BIS) does not provide financial assistance to any business. Specifically, NSTTC implements multilateral dual-use export controls for national security reasons to comply with the Wassenaar Arrangement. NSTTC is also responsible for U.S. export control policy for high performance computers and encryption, and administers the export licensing responsibilities for foreign nationals under the "deemed export" technology rule. It also administers the "short supply" provisions of the Export Administration Regulations.
NPTC implements multilateral dual-use export controls for non-proliferation reasons to comply with the Australia Group, the Chemical Weapons Convention, the Missile Technology Control Regime, and the Nuclear Suppliers Group. In addition, NPTC is responsible for the Inter-American Firearms Convention, crime control, and United Nations embargo restrictions. Both offices implement export controls for anti-terrorism and regional stability reasons. In addition, NSTTC conducts outreach on export controls to various industry associations in the areas of night vision, encryption, and deemed exports. The Trade Adjustment Assistance (TAA) for Firms program is a matching funds program designed for manufacturers battling import competition. A firm may be eligible if it experienced sales and employment declines at least partially due to imports over the last 2 years. One of the 11 regional non-profit groups that manage the program (known as Trade Adjustment Assistance Centers, or TAACs) makes an initial assessment of eligibility. TAA for Firms provides financial assistance to offset the cost of projects that strengthen operations and sharpen competitiveness for manufacturers in many industries. This customized business assistance is used for a variety of projects, including consultant services in the areas of marketing, information technology, manufacturing, engineering, and quality. The Petition Counseling and Analysis Unit (PCAU) helps U.S. businesses understand U.S. unfair trade laws dealing with dumping and unfair foreign government subsidies, and the process of filing a petition requesting the initiation of an investigation. It provides guidance to potential petitioners to assist them in determining what types of information will be required in order to pursue action against an industry suspected of unfair trade practices.
The PCAU also assists potential petitioners in ensuring their petitions comply with statutory initiation standards and provides small businesses with publicly available tariff and trade data from the Departments of Commerce and Treasury and the U.S. International Trade Commission. The primary mission of the Subsidies Enforcement Office (SEO) is to assist the private sector by monitoring foreign subsidies and identifying subsidies that can be remedied under the Subsidies Agreement of the World Trade Organization, of which the United States is a member. As part of its monitoring efforts, the SEO has created a Subsidies Library, which is available to the public via the internet. The goal is to create an easily accessible one-stop shop that provides user-friendly information on foreign government subsidy practices. The Trade Remedy Compliance Staff provides assistance to U.S. businesses that believe their trade problems may stem from unfair trade practices or the improper application of foreign unfair trade laws. The Deputy Assistant Secretary (DAS) for Textiles and Apparel oversees programs and strategies to improve the domestic and international competitiveness of the U.S. fiber, textile, and apparel industries, as well as industries that manufacture a wide range of consumer products. The DAS also serves as Chairman of the Committee for the Implementation of Textile Agreements, which determines when market-disrupting factors exist in the domestic fiber, textile, and apparel marketplace. The DAS also administers U.S. textile quota agreements, formulates trade policy, performs research and analysis, compiles industry data, and promotes U.S. trade events for a whole spectrum of textiles and apparel. Through the Export Trade Certificate of Review Program, Commerce helps promote the development of joint ventures and the use of export trade intermediaries.
With this certificate, businesses limit their domestic legal liability when exporting jointly or making joint sales with a trading partner in foreign markets. Currently, the more than 3,000 firms participating in the program account for an average of $12.3 billion in annual export sales. In order to provide a streamlined means for U.S. organizations to comply with the European Union's (E.U.) data protection directive, Commerce and the European Union negotiated the U.S.-E.U. Safe Harbor Framework. The Safe Harbor Framework allows U.S. companies to avoid data flow interruptions from the European Union to the United States. To be assured of "safe harbor," a business must annually self-certify to Commerce that it adheres to certain safe harbor requirements of the E.U. directive. Commerce maintains a list of all organizations that file self-certification letters and makes both the list and the self-certification letters publicly available. Market Development Cooperator Program (MDCP) awards entail financial and technical assistance from the International Trade Administration (ITA) to support projects that enhance the global competitiveness of U.S. manufacturing and services industries. An MDCP award establishes a partnership between ITA and non-profit industry groups such as trade associations or chambers of commerce. Such groups are particularly effective in reaching small- and medium-sized enterprises. The non-profit groups compete for a limited number of MDCP award partnerships by proposing innovative projects that enhance the global competitive position of their industry, with a special emphasis on small- and medium-sized enterprises. These industry groups pledge to pay a minimum of two-thirds of the project cost and to sustain the project after the MDCP award period ends.
The Market Access and Compliance's Office of Intellectual Property Rights has undertaken numerous activities to assist small and medium-sized businesses in particular in protecting intellectual property rights, both in the United States and abroad. These activities include hotlines to file complaints; limited legal counseling; country-specific information on protecting intellectual property rights; guidance in securing supply chains against fakes; protection of intellectual property rights at trade fairs; training; and information to raise consumer awareness. The U.S. Commercial Service promotes and protects U.S. commercial interests abroad and delivers customized solutions to ensure that U.S. businesses, especially small and medium-sized enterprises, compete and win in the international marketplace through a global network of trade professionals. Since its creation in 1993, the Advocacy Center has helped hundreds of U.S. companies—small, medium, and large enterprises—in various industry sectors win government contracts across the globe. Advocacy assistance is wide and varied but often involves helping companies communicate with foreign governments or government-owned corporations. For example, on a case-by-case basis, following its due diligence process, the Advocacy Center and, if necessary, the Advocacy Network will make a national interest determination to identify whether a project qualifies for federal support. Typically, companies must demonstrate how supporting their bid will benefit the U.S. economy, primarily in the form of exports of goods and services. Other factors may also be taken into consideration. Manufacturing in a Foreign Trade Zone (FTZ) may offer cost advantages to small and medium-sized manufacturers. FTZ staff will provide information and assistance to companies considering whether to relocate to FTZs, which are specific physical areas within the United States that, for customs purposes, are treated as if they are outside U.S.
borders. When a company manufactures in an FTZ, the company is treated (for purposes of customs duties) as if it is located outside the United States. As a result, for export shipments of the finished product, U.S. import duties do not have to be paid on imported components. If the finished product is ultimately shipped to the U.S. market, companies may have the option to pay the finished product duty rate rather than the component duty rate. The Minority Business Development Agency's new Minority Business Internet Portal (website) is an e-commerce solution designed for the Minority Business Enterprise (MBE) community. This Internet platform provides MBEs with access to customized tools and business information to help them grow and thrive in an ever-changing digital economy. According to Commerce, the Minority Business Development Agency (MBDA) is the only federal agency created specifically to foster the establishment and growth of minority-owned businesses in America. The Agency's mission is to actively promote the growth and competitiveness of large, medium, and small minority business enterprises. MBDA funds a network of Minority Business Development Centers, Native American Business Development Centers, and Business Resource Centers located throughout the Nation. The Centers are staffed by business specialists who have the knowledge and practical experience needed to run successful and profitable businesses. Business referral services are provided free of charge. However, the network generally charges nominal fees for specific management and technical assistance services. Although funding for new projects was discontinued in fiscal year 2005, the Advanced Technology Program (ATP) did receive funding to continue existing projects.
ATP provides cost-shared multi-year funding to single companies and industry-led joint ventures to accelerate the development and broad dissemination of challenging, high-risk technologies with the potential for significant commercial payoffs and widespread benefits for the nation. This unique government-industry partnership aids companies in accelerating the development of emerging or enabling technologies that lead to revolutionary new products and industrial processes and services that can compete in rapidly changing world markets. ATP challenges the research and development community to take on higher technical risk projects with commensurately higher potential payoffs for the nation than they would otherwise pursue. ATP does not fund product development, manufacturing, marketing, or commercialization activities. The calibration services of the National Institute of Standards and Technology (NIST) are designed to help the makers and users of precision instruments achieve the highest possible levels of measurement quality and productivity. NIST recovers the cost of providing calibration services by charging a fee for each calibration performed. Calibration services are offered to public and private organizations and individuals alike. The Manufacturing Engineering Laboratory works to satisfy the measurements and standards needs of U.S. manufacturers in mechanical and dimensional metrology and in advanced manufacturing technology by conducting research and development, providing services, and participating in standards activities. The Hollings Manufacturing Extension Partnership (MEP) is a nationwide network of not-for-profit centers in nearly 350 locations whose purpose is to provide manufacturers with the services they need to succeed. The centers, serving all 50 States and Puerto Rico, are linked together through Commerce's National Institute of Standards and Technology.
Centers are funded by federal, state, local, and private resources to serve manufacturers, making it possible for even the smallest firms to tap into the expertise of knowledgeable manufacturing and business specialists all over the United States. These specialists are people who have had experience on manufacturing floors and in plant operations. Each center works directly with area manufacturers to provide expertise and services tailored to their most critical needs, which range from process improvements and worker training to business practices and applications of information technology. Solutions are offered through a combination of direct assistance from center staff and outside consultants. Centers often help small firms overcome barriers in locating and obtaining private-sector resources. According to the National Institute of Standards and Technology (NIST), it has the mandate to help improve the security of commercial information technology products and strengthen the security of users' systems and infrastructures. To this end, NIST, in co-sponsorship with the Small Business Administration and the Federal Bureau of Investigation, conducts workshops on information security threats and solutions. The workshops resulting from this partnership deliver information security training and are especially designed for small businesses and not-for-profit organizations. Attendees have the opportunity to explore practical tools and techniques that can help them to assess, enhance, and maintain the security of their systems and information. The Fisheries Finance Program provides long-term financing for the cost of construction or reconstruction of fishing vessels, shoreline facilities, and aquacultural facilities. However, the program does not finance construction of new vessels; rather, it refinances the previously paid cost of such construction.
Additionally, the program provides long-term financing of individual fishing quotas in the Northwest Halibut and Sablefish fisheries. Vessel financing or refinancing that could contribute to overcapitalization by increasing harvesting capacity is prohibited by regulation. The Institute for Telecommunication Sciences (ITS) participates in technology transfer and commercialization efforts by fostering cooperative telecommunications research with industry where benefits can directly facilitate U.S. competitiveness and market opportunities. ITS has participated for a number of years in Cooperative Research and Development Agreements (CRADAs) with private sector organizations to design, develop, test, and evaluate advanced telecommunication concepts. Cooperative research with private industry has helped ITS accomplish its mission to support industry's productivity and competitiveness by providing insight into industry needs. This has led to adjustments in the focus and direction of other Institute programs to improve their effectiveness and value. While most CRADAs are with small businesses that gain access to the Institute's facilities through the agreement, these businesses may not meet the Small Business Administration's definition of small. These entities gain access to the Table Mountain Field Site and Radio Quiet Zone facilities to conduct radio research experiments that do not involve the transfer of technology from the government to small businesses. The Small Business Innovation Research Program (SBIR) is designed to ensure that small, high-technology firms have access to federal research and development funds to pursue advanced technologies and their commercial applications. SBIR is a competitive three-phase program that reserves a specific percentage of research and development funding at certain federal agencies for awards to small businesses. Currently, eleven other federal agencies provide the grant funds and oversee the projects.
The Small Business Administration monitors the SBIR program and provides guidance. Two Commerce agencies (the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology) administer SBIR programs. SBIR funds the critical startup and development stages and encourages the commercialization of the resulting technology, product, or service. In accordance with Executive Order No. 13,329, SBIR programs will give priority, where feasible, to proposals directed toward innovations that will aid the nation's manufacturing sector. Commerce's Information Technology Solutions Next Generation (COMMITS NexGen) levels the playing field as a small business Government-Wide Acquisition Contract (GWAC) that is convenient for ordering information technology (IT) services from high-quality small businesses. In today's streamlined acquisition environment, many IT requirements that once were publicly announced are now met through task and delivery order contracts. COMMITS NexGen gives small businesses the opportunity to compete and grow. The Office of Business Liaison serves as the primary point of contact between Commerce and the business community. Specifically, among other things, the Office helps guide individuals and businesses to the Commerce offices and policy experts best suited to respond to their needs; helps to develop a proactive, responsive, and effective outreach program and relationship with the business community; informs the Secretary, the Department, and Administration officials of the critical issues facing the business community; and informs the business community of Commerce and Administration resources, policies, and programs. Commerce's Prime Contractor Directory is prepared to assist all small businesses with their marketing efforts in obtaining suitable subcontracting opportunities and presenting their capabilities to Commerce prime contractors.
The Prime Contractor Directory includes product, service, and construction-related contractors that have contracts with Commerce valued at $500,000 or more. These companies have approved subcontracting plans, and their progress toward achieving their subcontracting goals is monitored by the Office of Small and Disadvantaged Business Utilization. The subcontracting program creates many opportunities for small, small disadvantaged, HUBZone, veteran-owned, service-disabled veteran-owned, and women-owned small businesses. Commerce requires contractors to establish aggressive goals for subcontracting with small businesses. The Office of Small and Disadvantaged Business Utilization monitors the progress of prime contractors in meeting the goals in their subcontracting plans. The Office of Small and Disadvantaged Business Utilization is an advocacy and advisory office responsible for promoting the use of small, small disadvantaged, Section 8(a), women-owned, veteran-owned, service-disabled veteran-owned, and HUBZone small businesses within Commerce's acquisition process. The Technology Insertion, Demonstration, and Evaluation (TIDE) program was founded to encourage and assist small manufacturers in the adoption of commercially available software and information technology. The TIDE program is specifically focused on small manufacturers that supply goods and services important to the national defense; however, much of the work of the TIDE program is broadly applicable to all small businesses. The TIDE program consists of three primary elements: (1) technology demonstration projects; (2) workforce development courses; and (3) technology development projects. This program is run through Carnegie Mellon University's Software Engineering Institute.
The purpose of the Air Force Manufacturing Technical Assistance Production Program is to increase and enhance the competitiveness of small manufacturing firms that support the Air Force, the Department of Defense, and their major prime contractors by providing technical and managerial assistance. The program focuses on: small business solutions to industrial policy issues; reducing critical shortages of spare parts; sustaining legacy weapons systems; maintaining surge production capability; reducing diminishing manufacturing sources and material shortages; increasing competition in commodity areas; and providing a source of "Best in Class" suppliers for the government to increase competition, reduce manufacturing costs, reduce cycle times, and increase flexibility in the supply chain. The Industrial Base Information Center (IBIC) provides timely information about the Defense Technology and Industrial Base to directly support the planning and execution activities of the Directorate and related government users. IBIC services are available to all federal government employees and contractors requiring information on valid federal government contracts. IBIC has access to an extensive range of commercial and government information sources. On-line services include DIALOG, Haystack, AFKS, and Jane's. Databases available to IBIC include DD350 government contract data, Standard & Poor's Research Insight, FEDLOG, Forecast International, and others. IBIC has used these and other sources to provide analyses suited to a variety of customers' needs. The Air Force Technology Transfer Program assures that Air Force science and engineering activities promote the transfer and/or exchange of technology in a timely manner to the private and public sectors. Partnering with the Air Force can be readily accomplished through a variety of Technology Transfer agreements, such as collaborative research or licensing Air Force technologies.
The Armament Retooling and Manufacturing Support (ARMS) program—a cooperative arrangement between the Army and Agriculture—offers commercial/industrial businesses the opportunity to establish business centers at eligible Army production facilities. The ARMS "Asset Management" model offers mature infrastructure and services to businesses seeking manufacturing, office, warehouse, and other industrial park resources. The facility contractor (property manager) at the participating Army site negotiates terms and conditions with these clients at fair market value for needed assets. The ARMS program offers leasehold improvements to prospective clients to upgrade the property, meet code requirements, or adapt existing infrastructure to business client needs. As with state and local economic development agencies, the aim of the ARMS Loan Guarantee Program is to assist commercial clients/tenants in capitalizing their business opportunities. This loan program provides tenants with working capital, equipment acquisition, building modification, and other business resources to locate at eligible Army industrial facilities. The Best Manufacturing Practices (BMP) program operates out of the Best Manufacturing Practices Center of Excellence (BMPCOE), a partnership of the Office of Naval Research's BMP, Commerce, and the University of Maryland. The program helps businesses identify, research, and promote exceptional manufacturing practices, methods, and procedures, allowing them to operate at a higher level of efficiency and effectiveness.
BMP has three core competencies: (1) Best Practices Surveys - to identify, validate, and document best practices, and encourage government, industry, and academia to share information and implement the practices; (2) Systems Engineering - facilitated by the Program Manager’s WorkStation, a suite of electronic tools that provide risk management, engineering support, and failure analysis through integrated problem solving; and (3) Web Technologies - offered through the Collaborative Work Environment to provide users with an integrated digital environment to access and process a common set of documents in a geographically dispersed environment. The mission of the BMPCOE is to provide a national resource to foster the identification and sharing of best practices used in industry, government, and academia, and to coordinate efforts to strengthen the U.S. industrial base for global competition. The BMPCOE staff assist projects with systems engineering best practices throughout a product’s life cycle using process-based solutions to reduce risk and eliminate surprises. The Defense Small Business Technology and Readiness Resources (DSTARR) program supports Defense needs by developing a national network of technically competent small businesses. DSTARR provides technical assistance and expertise to small businesses in support of their efforts to achieve process improvements, be competitive in the global marketplace, advance information technology capabilities, develop leadership skills, and achieve manufacturing excellence. The program supports small manufacturing and specialized information technology companies so they can become viable suppliers, have the appropriate infrastructure and processes, and integrate into supply chains that support Defense. Prior to December 2006, this program was known as the Manufacturing Technical Assistance Production Program. 
The purpose of the Next Generation Manufacturing Technology Initiative (NGMTI) is to accelerate the development and implementation of breakthrough manufacturing technologies in support of the transformation of the defense industrial base and the global economic competitiveness of U.S.-based manufacturing. With strong Congressional, federal/Defense, and industry support, NGMTI’s goal is not only to create strategic investment plans for innovative manufacturing technologies, but also to drive the implementation of those technologies through focused experiments and partnerships. TechMatch is a web-based portal designed to provide industry and academia with a Defense-sponsored solution for finding research and development opportunities, licensable patents, and information on nearly 120 Defense labs located across the United States. Registered users receive a daily e-mail linking them to matching research and development opportunities from FedBizOpps, Grants.gov, and SBIR/STTR solicitations; calendar events; and licensable patents relevant to their business. Congress established the Technology Transition Initiative (TTI) in the Bob Stump National Defense Authorization Act for fiscal year 2003, Pub. L. No. 107-314, Section 242, 116 Stat. 2458, 2494-2495 (Dec. 2, 2002) to: (1) accelerate the introduction of new technologies into operational capabilities for the armed forces, and (2) successfully demonstrate new technologies in relevant environments. The Science and Technology and Acquisition executives of each military department and each Defense Agency and the commanders of the unified and specified combatant commands nominate projects to be funded. The TTI Program Manager identifies the projects that meet Defense technology goals and requirements in consultation with the Technology Transition Council. The transition costs can be shared by the TTI Program Manager and the appropriate acquisition executive. 
Service/Agency contribution can be up to 50 percent of the total project cost. The Defense Manufacturing Technology (ManTech) Program focuses on the needs of weapon system programs for affordable, low-risk development and production capabilities. It provides a link between technology invention and development, and industrial applications. It matures and validates emerging manufacturing technologies to support low-risk implementation in industry and Defense facilities (e.g., depots and shipyards). The program addresses production issues from system development through transition to production and sustainment. The primary customers of the Program are acquisition and logistics program managers who are responsible for transitioning acquisition programs from development into production and for the repair, maintenance, and overhaul of systems currently in use. It operates in the Army, Navy, Air Force, Defense Logistics Agency, and Defense Advanced Research Projects Agency. The purpose of the Defense Industry Adjustment program is to help communities respond effectively to adverse Defense impacts, such as termination of a major defense contract. This usually means helping communities diversify defense-dependent economies by developing community strategies and initiatives to assist firms and their employees. Usually, the adjustment process revolves around identifying an organization to assume responsibility for carrying out the program, planning the adjustment, and implementing the strategy. Community responses may include any or all of the following: assistance for small and medium-sized businesses; business financing programs; procurement assistance centers; industry clusters; manufacturing extension partnerships; export assistance; workforce assistance programs; business incubators; and/or a comprehensive strategy with multiple initiatives. The TechLink Center was established in 1996 at Montana State University in Bozeman, Montana. 
TechLink is funded by Defense to link companies with federal laboratories for technology licensing, research, technology transfer, and technology transition. TechLink’s expertise extends to many industry areas, including advanced materials and nanotechnology, aerospace, agricultural technologies, biomedicine and biotechnology, electronics, environmental technologies, information technologies and software, and photonics and sensors. By understanding the technology needs and strengths of both industry and federal labs, TechLink develops productive partnerships for the licensing, transfer, development, and commercialization of technology. MilTech leverages TechLink’s technology transfer activities and helps companies primarily in the northwestern U.S. to transition innovative technology to Defense operational use. This program is a partnership between TechLink and the Montana Manufacturing Extension Partnership Center. MilTech provides engineering, manufacturing, and business development assistance to these companies to help accelerate the transition of new technology to the U.S. warfighter, lower the cost and cycle time of technology acquisition, and help Defense more fully benefit from its small business research and development funding. Although MilTech is primarily a regional program, it operates outside of the northwestern United States in two circumstances: (1) to help TechLink licensees of Defense technologies transition these technologies to the U.S. warfighter, and (2) when requested by Defense program managers to help other companies deliver critically needed technology to Defense. The Defense Technical Information Center (DTIC) is the online source for the acquisition, storage, retrieval, and dissemination of Defense scientific and technical information. 
Its technical information services are available to anyone at no cost and can help applicants for research funds, such as Small Business Innovation Research program participants, to prepare proposals, develop products, market, and network. DTIC provides access to citations of unclassified documents, as well as the electronic full text of many documents. The objective of the Defense Value Engineering program is to identify improvements in defense systems that can reduce costs and increase performance. Defense seeks to promote contractor participation in the program by (1) providing informational/educational material and assistance to contractors and (2) providing program advocates who can advise and assist Defense prime contractors and their subcontractors in developing Value Engineering change proposals, as well as expediting the processing of these proposals. Contractors receive a number of benefits for their participation in Value Engineering, including a share of the savings that results from Value Engineering contract changes. Also, contractors may benefit from reduced costs, increased efficiencies, and reduced overhead, among other things. The Procurement Gateway is an integrated online collection of automated systems providing oversight for the management of procurement data. The Procurement Gateway allows prospective government contractors to perform comprehensive and detailed searches against Request for Quotation and Award documents. The goal of the Business Counseling Center (BCC) is to assist vendors in their search for business opportunities and to supply military customers with on-time quality goods. BCC has six state-of-the-art workstations that provide easy access to view and quote on open solicitations via the Defense Supply Center Columbus Internet Bid Board System. 
BCC also offers contractors a free resource to access comprehensive research and logistics systems that include data on millions of parts purchased by the Department of Defense. BCC provides training sessions on the many facets of the acquisition process, in addition to a conference area for contractors and Defense Supply Center Columbus personnel to discuss acquisition issues. The Aging Systems Sustainment and Enabling Technologies (ASSET) program is a National Reinvention Laboratory initiated in 1994 by Oklahoma State University to address Defense procurement problems. ASSET is a government-academic-business partnership. Technology development, insertion activities, and virtual manufacturing capabilities developed by ASSET partners have resulted in grouped parts databases, parts-demand forecasting models, parts-on-demand manufacturing, new materials technologies for ceramic bearings, new processes to reduce corrosion of aging systems, and new training materials. The technologies and processes developed in the ASSET program increase the Defense supply base, reduce the time and cost associated with parts procurement, and enhance military readiness. The Procurement Technical Assistance Program provides Procurement Technical Assistance Centers (PTACs) with Defense support so that they may provide specialized and professional assistance to individuals and businesses seeking to learn about contracting and subcontracting opportunities, actively seeking contracting and subcontracting opportunities, and/or performing under contracts and subcontracts with Defense, other federal agencies, or state and local governments. This specialized and professional assistance may consist of, but is not limited to, outreach and counseling services to promote understanding of federal, state, and local government requirements applicable to contracting for services, manufacturing, or other markets, and assistance in pursuing and securing subcontracting opportunities. 
PTACs are to make a concerted effort to seek out and assist Small Businesses, Small Disadvantaged Businesses, Women-Owned Small Businesses, Historically Underutilized Business Zone Small Business Concerns, Service-Disabled Veteran-Owned Small Businesses, and Historically Black Colleges and Minority Institutions. The Defense Small Business Innovation Research program is made up of 12 participating components: Army, Navy, Air Force, Missile Defense Agency, Defense Advanced Research Projects Agency, Chemical Biological Defense, Special Operations Command, Defense Threat Reduction Agency, National Geospatial-Intelligence Agency, and the Office of the Secretary of Defense. Beginning in fiscal year 2007, the Defense Logistics Agency and the Defense Micro-Electronics Activity became participating components. The program funds early-stage research and development at small technology companies and is designed to stimulate technological innovation, increase private sector commercialization of federal research and development, increase small business participation in federally funded research and development, and foster participation by minority and disadvantaged firms in technological innovation. The Defense Small Business Technology Transfer (STTR) program is made up of 6 participating components: Army, Navy, Air Force, Missile Defense Agency, Defense Advanced Research Projects Agency, and Defense Research and Engineering. STTR competitively funds cooperative research and development projects involving a small business and a research institution, such as a university, federally-funded research and development center, or nonprofit research institution. The purpose of STTR is to create an effective vehicle for moving ideas from the nation’s research institutions to the market, where they can benefit both private sector and military customers. 
The Mentor-Protégé program assists certain small businesses (Protégés) to successfully compete for prime contract and subcontract awards by partnering with large companies (Mentors) under individual, project-based agreements. Mentors and Protégés are solely responsible for finding their counterparts. Many mentors have made the program an integral part of their sourcing plans, while protégés have used their involvement to develop much-needed business and technical capabilities and to diversify their customer base. This program is operated through the Air Force, Army, Navy, Defense Information Systems Agency, Defense Contract Management Agency, Defense Intelligence Agency, Defense Logistics Agency, National Geospatial-Intelligence Agency, Special Operations Command, National Security Agency, Joint Robotics Program, and Missile Defense Agency. The Defense Women-Owned Small Business (WOSB) program highlights the agency’s efforts to achieve the 5 percent goal for prime and subcontract awards to small business concerns owned and controlled by women. The program objectives are to facilitate, preserve, and strengthen full participation by WOSB concerns in Defense acquisition programs for goods and services. Through its programs and activities, it supports the growth of WOSB concerns with outreach, training, and technical assistance. All Defense subcontracting plans are required to have a separate goal for awards to WOSBs. The Defense Regional Councils for Small Business Education and Advocacy are a nationwide network of small business specialists organized to promote the national small business programs of the United States. There are eight Regional Councils sponsored by the Defense Office of Small Business Programs, each governed by individual by-laws: Northeast, Mid-Atlantic, District of Columbia, Southeastern, North Central, South Central, Pacific Northwest, and Western. 
The Councils’ primary objective is to promote the national small business programs, including small, historically underutilized business zone (HUBZone) small, small disadvantaged, women-owned small, and veteran-owned small business concerns; historically black colleges and universities; minority institutions; and tribal colleges. Additional objectives include promoting the exchange of ideas, experiences, and general information among small business specialists and the contracting community, and developing closer relationships and better communication among government entities and the small business community. Some Councils invite Small Business Liaison Officers representing prime contractors in an effort to promote small business subcontracting. Defense conducts outreach to identify small business concerns that are owned and controlled by service-disabled veterans. The purpose of this outreach is to improve prime and subcontracting opportunities for service-disabled veteran-owned small businesses throughout Defense, including the military services and other Defense agencies. The Indian Incentive Program is a congressionally authorized program that provides a rebate to the prime contractor of 5 percent of the total amount subcontracted to an Indian-Owned Economic Enterprise or Indian Organization. The program motivates prime contractors to utilize Indian organizations and Indian-owned economic enterprises. Defense prime contractors, regardless of contract size, are eligible for incentive payments. The rebate is a monetary incentive for primes to contract with Indian organizations, benefiting those organizations through increased opportunities. The purpose of the Historically Underutilized Business Zone Empowerment Contracting Program is to stimulate economic development and create jobs in urban and rural communities by providing federal contracting preferences to small businesses. 
The program provides federal contracting opportunities for qualified small businesses located in distressed areas. The mission of the Office of Small Business Programs is to (1) advise the Secretary of Defense on all matters related to small business; (2) represent the Secretary of Defense on major matters addressed at the Office of the Secretary of Defense level; (3) develop Defense-wide small business policy and provide oversight to ensure compliance by all military departments and defense agencies; and (4) provide military departments, Defense agencies, and Procurement Technical Assistance Centers with training and tools to foster an environment that encourages small business participation in defense acquisition. The Office of Small Business Programs has the full range of authority over Defense small business programs.

Appendix V: Department of Education – Programs that Offer Services to Small Manufacturers and Types of Services

The Small Business Innovation Research (SBIR) program helps stimulate technological innovation, utilize small business to meet federal research and development needs, and increase private sector commercialization. SBIR is a highly competitive program that encourages small business to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s research and development arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs. Offices within Education that have SBIR programs are as follows: the Institute of Education Sciences and the Office of Special Education and Rehabilitative Services/National Institute on Disability and Rehabilitation Research. 
The Office of Small and Disadvantaged Business Utilization promotes and fosters opportunities for small and socioeconomically disadvantaged business concerns seeking to obtain prime contracts, subcontracts, and grants that support the programmatic and operational functions of the Department of Education. The Small Business Technology Transfer (STTR) program expands funding opportunities in the federal innovation research and development (R&D) arena through public/private sector partnerships to include joint venture opportunities for small business and nonprofit research institutions. STTR is a competitive three-phase program that reserves a specific percentage of federal R&D funding for award to small business and nonprofit research institution partners. Five federal departments and agencies (the Departments of Defense, Energy, and Health and Human Services, as well as the National Aeronautics and Space Administration and National Science Foundation) are required by STTR to reserve a portion of their R&D funds for awards to small business/nonprofit research institution partnerships. The Small Business Innovation Research (SBIR) program is designed to stimulate technological innovation, utilize small business to meet federal research and development needs, and increase private sector commercialization. SBIR is a highly competitive program that encourages small business to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s research and development arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs. Twelve Energy components participate in the agency’s SBIR program. Inventions and Innovation (I&I) provides grants to independent inventors and small companies with sound ideas for energy efficiency technologies. 
I&I provides grantees not only with funding, but also with additional resources such as training, market assessments, technical assistance, access to promotional events and materials, and special contacts to aid in commercialization endeavors. In addition to the financial assistance grant, I&I provides awardees with business planning assistance and networking resources. For grantees who demonstrate a commitment to commercializing their technology, I&I also funds a market assessment and offers business strategy assistance. I&I recently launched a Web site that offers information tools and valuable network resources for the entrepreneur. Finally, awardees have the option of working with a private organization of past successful grantees that will mentor or otherwise aid new entrepreneurs graduating from I&I. Since I&I’s inception, over 34,000 proposals have been submitted, resulting in over 900 projects selected for financial and commercialization assistance. Awardees are monitored annually until their technologies are retired from the market or they abandon their efforts. Licensees are monitored as long as the technology remains on the market. The Industrial Technologies Program (ITP) works with U.S. industry to improve industrial energy efficiency and environmental performance. The program invests in high-risk, high-value research and development to reduce industrial energy use while stimulating productivity and growth. Results of this investment are seen in the many ITP-funded technologies in the marketplace today. Energy TechNet is a core collection of information and resources for anyone engaged in developing and commercializing advanced energy technologies. From idea development to market assessment, intellectual property protection to financing, the Web site addresses each stage of technology development and commercialization. 
FreedomCAR and Vehicle Technologies Program professionals work with industry leaders to develop and deploy advanced transportation technologies that could achieve significant improvements in vehicle fuel efficiency and displace oil in a competitive manner. Program activities include research, development, demonstration, testing, technology validation, technology transfer, and education. The Office of Small and Disadvantaged Business Utilization is responsible for increasing the contracting opportunities awarded to small and disadvantaged businesses. Energy purchases billions of dollars’ worth of goods and services annually, including remediation, research and development, management and scientific consulting, plate work manufacturing, engineering, and waste treatment and disposal. The Small Business Act Section 8(a) (Section 8(a)) Pilot Program was established in fiscal year 1991 to target Section 8(a) businesses for Energy procurement opportunities at the subcontract level. The Section 8(a) Pilot Program offers financial assistance in the form of subcontracts. The Mentor-Protégé program is designed to encourage Energy prime contractors to assist small disadvantaged firms certified by the Small Business Administration (SBA) under Section 8(a) of the Small Business Act, other small disadvantaged businesses, women-owned and service-disabled veteran-owned small businesses, Historically Black Colleges and Universities, and other minority institutions of higher learning, in business and technical areas. The program seeks to foster long-term business relationships between these small business entities and Energy prime contractors, and to increase the overall number of small businesses that receive Energy contract and subcontract awards. 
The National Institute for Occupational Safety and Health supports 16 university-based Education and Research Centers that offer short-term continuing education for occupational safety and health professionals and others with worker safety and health responsibilities. The mission of the National Institute for Occupational Safety and Health (NIOSH) Research Program for the Manufacturing sector is to eliminate occupational diseases, injuries, and fatalities among workers in manufacturing industries through a focused program of research and prevention. NIOSH believes that its research only realizes its true value when put into practice. Every research project within the NIOSH program for the Manufacturing sector formulates a strategy to promote the transfer and translation of research findings into prevention practices and products that will be adopted in the workplace. NIOSH partners with labor, industry, government, and other stakeholders to accomplish the program goals. There is also a research-to-practice component to the program. NIOSH makes available brief documents, from 1 to 4 pages, that describe occupational hazards or NIOSH research activities. Alerts briefly present new information about occupational illnesses, injuries, and deaths. Alerts urgently request assistance in preventing, solving, and controlling newly identified occupational hazards. Workers, employers, and safety and health professionals are asked to take immediate action to reduce risks and implement controls. NIOSH has published more than 40 Alerts on a variety of topics. This Guide is a source of general industrial hygiene information for workers, employers, and occupational health professionals. It presents key information and data in abbreviated tabular form for 677 chemicals or substance groups that are found in many work environments. 
Chemical Safety Cards summarize essential safety and health information about chemicals for their use at the “shop floor” level by workers and employers. They are simpler than material safety data sheets and designed specifically for workers’ reference. Health Hazard Evaluations (HHEs) are investigations conducted by the National Institute for Occupational Safety and Health in response to concerns expressed by employees, employee representatives, or employers, to find out whether there is a health hazard to employees caused by hazardous exposures and conditions in the workplace. HHEs are provided at no cost and may be confidential. The Center for Biologics Evaluation and Research (CBER) has established a manufacturers’ assistance program to provide assistance and training to industry, including large and small manufacturers and trade associations, and to respond to requests for information regarding CBER policies and procedures. Manufacturers’ assistance is available in numerous areas, including: clinical investigator information, adverse event reporting procedures, electronic submissions guidance and requirements, and information on how to submit an investigational new drug application to administer an investigational product to humans. This assistance extends to facilitating effective development of all products regulated by CBER, including products to diagnose, treat, or prevent outbreaks from exposure to the pathogens that have been identified as bioterrorist agents. The Manufacturers Assistance and Technical Training Branch (MATTB) informs industry and trade associations of the status of CBER policies and initiatives through regular information dissemination and training. MATTB also serves as the CBER focal point for industry and trade associations to provide meeting support, and coordinates external meetings with other Food and Drug Administration Centers. 
Current federal law requires that a drug be the subject of an approved marketing application before it is transported or distributed across state lines. Because a sponsor (usually the manufacturer or potential marketer) will probably want to ship the investigational drug to clinical investigators in many states, it must seek an exemption from that legal requirement. The Investigational New Drug Application is the means through which the sponsor technically obtains this exemption from the Food and Drug Administration. 21 U.S.C. § 379h authorizes the Food and Drug Administration to collect and use fees from companies that produce certain human drug and biological products. There are three types of user fees: application fees, establishment fees, and product fees. Since the passage of the Prescription Drug User Fee Act (PDUFA), user fees have played an important role in expediting the drug approval process. The agency will waive the application fee for the first human drug application that a small business or its affiliate submits for review. The Center for Devices and Radiological Health has a small manufacturers, international and consumer advice division which offers many forms of services to small manufacturers, including technical and regulatory assistance. The division participates in many workshops which may be of educational value to the general medical device community. Current federal law requires that a drug be the subject of an approved marketing application before it is transported or distributed across state lines. Because a sponsor (usually the manufacturer or potential marketer) will probably want to ship the investigational drug to clinical investigators in many states, it must seek an exemption from that legal requirement. The Investigational New Drug Application is the means through which the sponsor technically obtains this exemption from the Food and Drug Administration. 21 U.S.C. 
§ 379h authorizes the Food and Drug Administration to collect and use fees from companies that produce certain human drug and biological products. There are three types of user fees - application fees, establishment fees, and product fees. Since the passage of PDUFA, user fees have played an important role in expediting the drug approval process. The agency will waive the application fee for the first human drug application that a small business or its affiliate submits for review. In the Center for Drug Evaluation and Research (CDER), the Office of Training and Communication (OTCOM) provides ongoing assistance to pharmaceutical businesses with fewer than 500 employees. The assistance includes a comprehensive website, a ListServ of 2,500 subscribers, a point of contact office for specific questions, and a free annual workshop on basic Food and Drug Administration/CDER organization and processes. The OTCOM ListServ conveys important emerging information to small regulated industry, including Federal Register notices, guidance, etc., on a bi-weekly basis. Orphan Product Grants encourage clinical development of products for use in rare diseases or conditions, usually defined as affecting fewer than 200,000 people in the United States. The products studied can be drugs, biologics, medical devices, or medical foods. At this time, only clinical studies qualify for consideration. Each application should propose one discrete clinical study to facilitate Food and Drug Administration approval of the product for a rare disease or condition. The study may address an unapproved new product or an unapproved new use for a product already on the market. Small businesses are encouraged to apply. The Innovative Molecular Analysis Technologies (IMAT) Program is aimed at the inception, development, integration, and application of novel and emerging technologies in the support of cancer research, treatment, diagnosis, and prevention. 
The IMAT Program is part of a broader technology development initiative within the National Cancer Institute (NCI) to harness specific technologies in the fight against cancer. This initiative underscores the desire of NCI to develop and integrate novel and emerging technologies in support of cancer research, diagnosis, and treatment. In the research continuum of discovery, development, and delivery, the IMAT Program accelerates development and delivery. This specific program will therefore serve as the discovery tool of a larger NCI technology initiative by soliciting and funding highly innovative, high-risk, and cancer-relevant technology development projects associated with the molecular analysis of cancer. To spur development of daring technologic improvements in cancer treatment and detection in the 21st century, the National Cancer Institute (NCI) created the Unconventional Innovations Program. This program seeks to stimulate development of radically new technologies in cancer care that can transform what is now impossible into the realm of the possible for detecting, diagnosing, and intervening in cancer at its earliest stages of development. The program began in 1999 and is targeted to invest $50 million over a ten-year period. The Technology Transfer Branch (TTB) provides a complete array of services to support the National Cancer Institute’s technology development activities. TTB negotiates the following collaborative agreements for laboratories: Cooperative Research and Development Agreements, Material Transfer Agreements, Confidential Disclosure Agreements, and Clinical Trials Agreements. In addition, TTB markets technologies to outside organizations in order to foster research collaboration, gives advice on intellectual property issues, and keeps laboratories posted on the latest developments in technology development and transfer.
The Office of Technology Transfer (OTT) retains title to inventions developed in National Institutes of Health’s (NIH) intramural laboratories and licenses these inventions to private entities to ensure use, commercialization, and public availability. In a similar way, extramural recipients of NIH funds, such as universities, are allowed to seek patent protection for inventions arising from their NIH-funded basic research and license the rights to private entities to promote commercialization. Over the last 15 years, NIH has executed thousands of license agreements. These licenses transfer NIH and FDA inventions to the private sector for further research and development and potential commercialization that can lead to significant public health benefits. The Office of Acquisition Management and Policy (OAMP) is committed to acquisition excellence by providing leadership, advice, and oversight for all National Institutes of Health (NIH) acquisition and financial advising services. Through strategic partnership with industry, the NIH strives to acquire the best value in products and services to support the agency’s mission activities. Strategies and efforts to promote business interests and opportunities at NIH include: strategic activities for contracting and financial program policies, procedures and practices; organizational guidance in advising on acquisition and financial program activities; oversight activities to review compliance with federal, HHS and NIH acquisition regulations; outreach activities for NIH personnel and the business community; and maintaining vendor resource information. e-PIC is an e-business system designed to capture the global marketplace and profile information about organizations providing products and services. The system is designed to function on a Web platform and links users of the system conducting market research or seeking sources of supplies and services to this virtual marketplace.
It is a consolidated database for storing and maintaining vendor contact information and the contract services each vendor can offer. Vendors can easily add and update their contact information, which provides a variety of search criteria for identifying sources for an organization’s acquisitions and makes the system user friendly and available to organization administrators. The Small Business Innovation Research (SBIR) program was established to stimulate technological innovation, utilize small business to meet federal research and development needs, and increase private sector commercialization. SBIR is a highly competitive program that encourages small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s research and development arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs. The Small Business Technology Transfer (STTR) program was established to stimulate technological innovation, utilize small business to meet federal research and development needs, and increase private sector commercialization. STTR is a highly competitive program that encourages small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s research and development arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs. STTR requires research partners at universities and other non-profit institutions to have a formal collaborative relationship with the small business concern.
The Office of Small & Disadvantaged Business Utilization has organized its responsibilities, programs, and activities under three lines of business: advocacy, outreach, and unification of the business process. The results achieved under all three lines of business support the accomplishment of Health and Human Services’ (HHS) strategic goal - to encourage and assist the participation of all small businesses in HHS’ contracts and grants. All of the activities carried out by the HHS Office of Small & Disadvantaged Business Utilization are done in support of its mission to give small businesses equal consideration in contracting opportunities and to increase the number of awards that are made to small businesses. Health and Human Services’ Small Business Program Manual (SBPM) supplements the Federal Acquisition Regulation (FAR) and the Health and Human Services Acquisition Regulation (HHSAR). It is non-regulatory in nature and provides uniform procedures to support and encourage small business participation in the Department’s efforts to acquire goods and services. The SBPM is not a stand-alone document and must be read with the FAR and HHSAR. The Office of Small and Disadvantaged Business Utilization (OSDBU) ensures Homeland Security complies with federal laws, regulations, and policies to provide opportunities in its acquisitions to small businesses, including socially and economically disadvantaged small businesses. OSDBU is also responsible for Homeland Security’s subcontracting program. OSDBU has small business specialists at the Center for Domestic Preparedness, U.S. Citizenship and Immigration Services, Customs and Border Protection, Federal Law Enforcement Training Center, Immigration and Customs Enforcement, Transportation Security Administration, Federal Emergency Management Agency, U.S. Secret Service, and U.S. Coast Guard.
The Mentor-Protégé program is designed to motivate and encourage large prime contractors to provide developmental assistance to small businesses, including socially and economically disadvantaged small businesses. The program is also designed to (1) improve the performance of contracts and subcontracts, (2) foster the establishment of long-term business relationships between large prime contractors and small business subcontractors, and (3) strengthen subcontracting opportunities and accomplishments through incentives. For certain acquisitions, mentors may receive credit in the source selection/evaluation criteria process and a post-award incentive for the costs incurred by a mentor firm in providing assistance to a protégé firm. In addition to the benefits available to mentors, protégés may receive technical, managerial, financial, or any other mutually agreed upon benefit from mentors. Small Business Vendor Outreach Sessions are a series of pre-arranged 15-minute appointments with Small Business Specialists from various components of the Homeland Security procurement offices. These sessions provide small businesses the opportunity to discuss their capabilities and learn of potential procurement opportunities. The Office of the Coordinator for Gulf Coast Rebuilding was created to help devise a long-term plan for rebuilding the region devastated by Hurricanes Katrina and Rita. One of its initiatives is to provide support for small businesses throughout the region through disaster loans and other relief. The overarching mission is to identify the priority of needs for long-term rebuilding; communicate those realities to decision makers in Washington; and advise the President and senior leadership on effective, integrated, and fiscally responsible federal strategies to support a full recovery.
The Small Business Innovation Research (SBIR) program’s principal objectives are to (1) stimulate technological innovation by small business; (2) increase small business participation in meeting federal research and development needs; (3) foster and encourage participation by socially and economically disadvantaged small business; (4) increase the commercialization of technology development through federal research and development; and (5) enhance outreach efforts to ensure that all qualified small businesses are aware of the SBIR program and the many benefits it provides. The Homeland Security Small Business Technology Transfer (STTR) program began in early 2006 to help build partnerships among small businesses, universities, and research institutions for research and development efforts. The program encourages the transfer of intellectual concepts and ideas from research institutions through the entrepreneurship of small business concerns, as part of a larger goal to develop innovative solutions to challenging Homeland Security scientific and engineering problems. Due to a decline in its extramural research budget for fiscal year 2007, Homeland Security does not meet the statutory requirement to have an STTR program. The Loan Guarantee program was established to stimulate and increase Indian entrepreneurship and employment through establishment, acquisition, or expansion of Indian-owned economic enterprises. The Job Corps Civilian Conservation Centers provide individuals, in addition to other training and assistance, with programs to gain work experience designed to conserve, develop, or manage public natural resources or public recreational areas, or to develop community projects in the public interest. The Centers are located primarily in rural areas.
The Office of Small and Disadvantaged Business Utilization program strives to improve and increase Interior’s performance in utilizing small, small disadvantaged, HUBZone, women-owned, and veteran-owned businesses as contractors and subcontractors. Bureaus in Interior collectively spend over $2 billion annually in contracts with the private sector. The Small Disadvantaged Business (SDB) Program is designed to treat small companies equitably and help them to pursue business in both the private and public sector contract arena. Once a business is certified as an SDB, it is eligible for specific procurement benefits. The Small Business Act Section 8(a) Business Development Program allows the government to contract, on a noncompetitive basis, with socially and economically disadvantaged small businesses. The HUBZone Empowerment Contracting program provides federal contracting opportunities for qualified small businesses located in distressed areas. The program encourages economic development in historically underutilized business zones (HUBZones) through the establishment of contract preferences for businesses in those zones. The Woman-Owned Small Business program is designed to assist women-owned small businesses in pursuing business in both the public and private contract arena. Dream It. Do It is a campaign launched by the Manufacturing Institute of the National Association of Manufacturers to help inform young people, their parents, and educators of career opportunities in advanced manufacturing. A grant from Labor supports the development of tools and partnerships between employers, training providers, and local Workforce Investment Boards in Kansas City, southwestern Virginia, northeastern Ohio, the Dallas-Fort Worth metro area, southeastern Indiana, and Washington State.
The High Growth Job Training initiative is a strategic effort to prepare workers to take advantage of new and increasing job opportunities in 14 high growth, high demand, and economically vital sectors of the American economy. Grants are available to develop and implement numerous industry-specific solutions. The federal Workforce Investment Act of 1998, Pub. L. No. 105-220, 112 Stat. 936 (Aug. 7, 1998), offers a comprehensive range of workforce development activities through statewide and local organizations. These activities can benefit job seekers, laid off workers, youth, incumbent workers, new entrants to the workforce, veterans, persons with disabilities, and employers. The purpose of these activities is to improve the employment, job retention, earnings, and occupational skills of participants. This, in turn, improves the quality of the workforce, reduces welfare dependency, and improves the productivity and competitiveness of the nation. Businesses play an active role in ensuring that the system prepares people for current and future jobs. Project GATE promotes individual entrepreneurship and seeks to energize local small business creation and to help diverse urban and rural populations create, support, and expand small businesses. Labor teams with the Small Business Administration through a microloan program that is offered to small start-up companies. Labor provides microenterprise training and assistance in One-Stop Centers. The Apprenticeship Program is a voluntary, industry-driven initiative sponsored by employer and labor groups. The federal government encourages and promotes the establishment of apprenticeship programs and provides technical assistance to program sponsors. Small and new businesses may find the Compliance Assistance Quick Start Web site useful as an introduction to compliance assistance available on the Occupational Safety & Health Administration’s (OSHA) Web site.
It offers a step-by-step guide to identifying many of the major OSHA requirements and related guidance. The Occupational Safety & Health Administration’s On-site Consultation Program provides services to help employers, particularly small businesses, identify and correct hazards at their worksites, and establish, maintain, or enhance their safety and health management systems. The Occupational Safety & Health Administration’s Small Business Handbook helps small business employers meet the legal requirements imposed by the Occupational Safety and Health Act of 1970, Pub. L. No. 91-596, 84 Stat. 1590 (Dec. 29, 1970), and create and maintain effective safety and health management systems. The Occupational Safety & Health Administration’s Training Institute and Training Education Centers provide basic and advanced courses in safety and health at locations throughout the country. The Small Business Resource Center is a Web site designed to help small business owners understand the rules and regulations that Labor administers. The Office of Small and Disadvantaged Business Utilization seeks to increase opportunities for small businesses to participate in the agency’s contract and grant activities; conduct outreach to increase awareness and availability of qualified providers; develop and issue information on Labor’s procurement needs and procedures; train agency staff on program requirements and capabilities; and monitor, evaluate, and report results of the agency’s efforts. Small Business Vendor Outreach Sessions offer small businesses the opportunity to market their capabilities directly to Office of Small Business Programs and agency program officials and learn about potential Labor procurement opportunities. Conversely, Labor procurement officials can learn more about the diverse small business resources available to meet their procurement needs.
The Small Business Procurement Power Web site is designed to assist small businesses interested in procurement opportunities with Labor. Job Corps is a no-cost education and vocational training program that helps young people ages 16 through 24 get a better job, make more money, and take control of their lives. Students enroll to learn a trade, earn a high school diploma or General Education Development certificate, and get help finding a good job. Students are paid a monthly allowance that increases the longer they stay with the program. Job Corps provides career counseling and transition support to its students for up to 12 months after they graduate from the program. The Small Business Development Office develops and implements programs that help small businesses, including small businesses owned and controlled by socially and economically disadvantaged individuals, obtain procurement opportunities with the Federal Aviation Administration. The Office of International Programs, in cooperation with the Affiliate Programs Team, coordinates and arranges for international training and professional development activities. These activities inform the U.S. transportation community of technological and innovative programs abroad, promote U.S. transportation expertise internationally, and increase technology sharing between the U.S. and the international community. The Maritime Administration (MARAD) established the National Maritime Resource and Education Center (NMREC) in April 1994 to help improve the international competitiveness of U.S. shipbuilders, ship repairers, ship owner/operators, and marine suppliers. NMREC’s services include: 1) conferences and workshops; 2) energy technologies information; 3) MARAD guideline specifications for merchant ship construction; 4) marine industry standards library; 5) standards organizations and information; and 6) Title XI information.
Transportation’s Office of Small and Disadvantaged Business Utilization (OSDBU) works closely with the Small Business Administration (SBA) and its Procurement Center Representative (PCR) to coordinate policy direction and develop new initiatives on subcontracting issues. A substantial number of Transportation subcontracting opportunities are awarded to small businesses. To maintain a strong subcontracting program, OSDBU, in conjunction with the SBA/PCR, evaluates, reviews, and makes recommendations on subcontracting plans. OSDBU also helps large prime contractors identify potential small businesses (including veteran-owned, service-disabled veteran-owned, HUBZone, disadvantaged, and women-owned businesses) to help attain subcontracting goals. Prime contractors report their achievements annually and semi-annually using an electronic subcontracting reporting system at Esrs.gov. Small Business Transportation Resource Centers: 1) disseminate information to small and disadvantaged businesses on business opportunities in Transportation-direct and Transportation-funded activities; 2) carry out market research and business analyses to identify the training and technical assistance needs of small businesses to help them become better prepared to compete for and receive transportation-related contracts; 3) design and carry out training and technical assistance programs to encourage, promote, and help minority entrepreneurs and businesses obtain contracts, subcontracts, and projects related to business opportunities in Transportation-direct and Transportation-funded activities; 4) develop support mechanisms to help minority entrepreneurs and businesses take advantage of those business opportunities; 5) assist minority entrepreneurs and businesses by identifying opportunities for obtaining investment capital and debt financing, including Transportation’s Short Term Lending Program; 6) participate in and cooperate with federal and other programs designed to provide
financial management and other forms of support and assistance to minority entrepreneurs and businesses; and 7) conduct outreach and disseminate information to small and disadvantaged businesses across the nation at local, regional, and national transportation and business related conferences, seminars, and workshops. The Disadvantaged Business Enterprise Program is designed to encourage, promote, and assist minority and women entrepreneurs and businesses to obtain training and technical assistance services. State Departments of Highways and Transportation receive supportive services funds from Transportation to provide in-house supportive services or hire consultants to provide supportive services for disadvantaged business enterprises. These supportive services help disadvantaged business enterprises compete in winning contracts. The Short Term Lending Program provides loan guarantees to enhance the lending opportunities for disadvantaged business enterprises and other small and disadvantaged businesses, to increase the number of such businesses that engage in transportation-related contracts, and to strengthen the competitive and productive capabilities of such businesses that currently do business with Transportation and its grantees, recipients, contractors, and subcontractors. The Women’s Procurement Assistance Committee (WPAC), managed by Transportation’s Office of Small and Disadvantaged Business Utilization (OSDBU), consists of at least one representative from each Transportation Operating Administration. The purpose of the OSDBU WPAC is to promote, coordinate, and monitor the plans and programs towards achievement of the five percent procurement goals in its direct contracting activities. The Committee works to provide forums, workshops, and best practices in order to contribute to the growth and economic development of women.
In addition, the Committee seeks to enhance awareness of women-owned businesses and ensure full participation in the Transportation procurement process. The Office of Small and Disadvantaged Business Utilization (OSDBU) ensures that small and disadvantaged business policies and goals are developed and implemented in a fair, efficient, and effective manner to serve small and disadvantaged businesses. To achieve this goal, OSDBU provides services through the Procurement Assistance Division, the Short Term Lending Program, and Regional Small Business Transportation Resource Centers. In addition, OSDBU organizes, co-sponsors, and participates in local, regional, and national outreach events. The National Information Clearinghouse (NIC) serves as a central point of contact for the dissemination of program and procurement information, procurement forecasts, forms, data, public laws, orders, and other similar information of interest to the small business community. NIC customer service representatives respond to inquiries and questions received through a dedicated toll-free number, by written correspondence, or by e-mail in the feedback section of the Office of Small and Disadvantaged Business Utilization Web site. The Small Business Innovation Research Program (SBIR) is designed to stimulate technological innovation, utilize small business to meet federal research and development needs, and increase private sector commercialization. SBIR is a highly competitive program that encourages small businesses to explore their technological potential and provides the incentive to profit from its commercialization. By including qualified small businesses in the nation’s research and development arena, high-tech innovation is stimulated and the United States gains entrepreneurial spirit as it meets its specific research and development needs.
The Office of Small and Disadvantaged Business Utilization at Veterans Affairs advocates to maximize participation of small, small disadvantaged, veteran-owned, women-owned, and empowerment zone businesses in contracts awarded by Veterans Affairs and in subcontracts which are awarded by Veterans Affairs’ prime contractors. The Historically Underutilized Business Zones Program provides federal contracting assistance for qualified small businesses in historically underutilized business zones to increase employment opportunities, stimulate capital investment in those areas, and help communities leverage and reinvest their wages and taxes within the community. The Women-Owned Small Business Program directs acquisition officials to facilitate, preserve, and strengthen women’s business enterprises and to ensure full participation by women in the free enterprise system by awarding prime contracts and subcontracts to women-owned businesses and by providing counseling to such businesses. The Office of Small and Disadvantaged Business Utilization is responsible for negotiating annual goals with Veterans Affairs acquisition officials to increase federal prime contracts with women-owned small businesses. The Small Business Program implements the requirements to aid, counsel, assist, and protect the interests of small businesses to ensure that they account for a fair proportion of Veterans Affairs’ total purchases, contracts, and subcontracts for property and services. The Small Business Act Section 8(a) Business Development Program was created to help small disadvantaged businesses compete in the American economy through business development and access to the federal procurement market. The Small Disadvantaged Business Program is responsible for the award of contracts to small business concerns owned and controlled by socially and economically disadvantaged individuals and encouraging greater economic opportunity for minority entrepreneurs.
The Veteran-Owned and Operated Small Business program identifies small businesses for inclusion in Veterans Affairs’ existing acquisition programs, although it is not authorized to set aside contracts for veterans. Veterans Affairs is the only agency which sets a goal and tracks participation of veteran-owned small businesses. Beginning in 2007, it will place a greater emphasis on such businesses. Under the authority granted in section 308 of the Veterans Benefits Act of 2003, Pub. L. No. 108-183, 117 Stat. 2651, 2662 (Dec. 16, 2003), Veterans Affairs is authorized to set aside contracts and/or award sole source contracts for service-disabled veterans. Veterans Affairs’ goal for participation in procurement by service-disabled veterans is 3 percent. Beginning in 2007, it will place a greater emphasis on such businesses. Veterans Affairs’ subcontracting program promotes the involvement of small businesses at the subcontract level. Recognizing that small firms often do not have the capability to perform as a prime contractor on certain large contracts, Veterans Affairs requires that any contractor receiving a contract for more than $10,000 agree that small business concerns have the maximum practicable opportunity to participate in contracts that Veterans Affairs has awarded. Furthermore, all prime contracts for construction that exceed $1,000,000 and all other types of contracts that exceed $500,000 that are not awarded to small businesses and that offer subcontracting opportunities must contain a subcontracting plan that includes percentage goals for participation by small businesses, small disadvantaged businesses, and women-owned small businesses. The Center for Veterans Enterprise is a Web site that assists veterans in starting and building businesses. The Web site serves as the federal government’s portal for veteran-owned businesses, known as VETBIZ.gov.
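The dollar thresholds in the Veterans Affairs subcontracting program described above amount to a simple decision rule. The sketch below is illustrative only: the function and parameter names are hypothetical (not part of any VA system), and the "offers subcontracting opportunities" condition is modeled as a simple flag.

```python
# Illustrative sketch of the subcontracting-plan thresholds described above.
# Names are hypothetical; this is not an implementation of any VA system.

def requires_subcontracting_plan(contract_type: str, value: float,
                                 awarded_to_small_business: bool,
                                 offers_subcontracting: bool = True) -> bool:
    """Return True if the prime contract must include a subcontracting plan."""
    # Plans apply only to contracts not awarded to small businesses
    # that offer subcontracting opportunities.
    if awarded_to_small_business or not offers_subcontracting:
        return False
    # Construction contracts use the $1,000,000 threshold; all others, $500,000.
    threshold = 1_000_000 if contract_type == "construction" else 500_000
    return value > threshold
```

Under this rule, for example, a $750,000 services contract awarded to a large business would require a plan, while the same contract awarded to a small business would not.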
The Mentor-Protégé Program is meant to stimulate and increase the number of small disadvantaged businesses and women-owned businesses engaged in Environmental Protection Agency contracts. The Environmental Protection Agency, working with industry, academic institutions, environmental groups, and other agencies, sponsors Compliance Assistance Centers that address the requirements of specific sectors. Each Web-based Center provides businesses, local governments, and federal facilities with information and guidance on environmental requirements and ways to save money through pollution prevention techniques. The National Environmental Performance Track is a voluntary partnership program that recognizes and rewards facilities that consistently exceed regulatory requirements, work closely with their communities, and excel in protecting the environment and public health. The Environmental Protection Agency provides exclusive regulatory and administrative benefits to Performance Track members, places them at low priority for routine inspections, and offers public recognition, networking opportunities, and other benefits. The Sector Strategies Program seeks to improve performance and reduce burdens in 13 important business sectors by addressing their unique issues and challenges in a collaborative setting. Through informal dialogue, stakeholder teams design tailored strategies to improve environmental performance and reduce regulatory burden. Sector strategies may include targeted regulatory changes, sector-based environmental management system programs, and easier links to assistance services. Currently, the program services the following manufacturing sectors: Agribusiness; Cement Manufacturing; Iron and Steel; Metal Casting; Metal Finishing; Oil and Gas Exploration and Refining; Paint and Coatings; Ports; Shipbuilding and Ship Repair; and Specialty-Batch Chemicals.
The purpose of the Design for Environment (DfE) program is to work in partnership with a broad range of stakeholders to reduce risk to people and the environment by preventing pollution. DfE focuses on industries that combine the potential for chemical risk reduction with a strong motivation to make lasting, positive changes. DfE convenes partners, including industry representatives and environmental groups, to develop goals and guide the work of the partnership. Partnerships evaluate the human health and environmental considerations, performance, and cost of traditional and alternative technologies, materials, and processes. As incentives for participation and change, DfE offers unique technical tools, methodologies, and expertise. Through the Federal Technology Transfer Act program, federal agencies conduct joint research with non-federal partners and protect intellectual property that may be developed. Program partners benefit from cooperative research and development agreements by tapping into EPA’s resources and knowledge base to conduct joint research and technology commercialization. The program is conducted in accordance with the Federal Technology Transfer Act of 1986 and preceding legislation. The goal of the Environmental Technology Verification (ETV) program, a public-private partnership, is to provide credible performance data for commercial-ready environmental technologies to speed their implementation for the benefit of vendors, purchasers, permitters, and the public. The ETV program develops testing protocols and verifies the performance of innovative technologies with the potential to improve protection of human health and the environment. The Superfund Innovative Technology Evaluation (SITE) program was established to address the need for an alternative or innovative hazardous waste treatment technology research and demonstration program.
The SITE Demonstration Program encourages the development and implementation of innovative treatment technologies for hazardous waste site remediation and monitoring and measurement. The Environmental Protection Agency (EPA) is one of 11 federal agencies that participate in the Small Business Innovation Research (SBIR) program established by the Small Business Innovation Development Act of 1982, Pub. L. No. 97-219, 96 Stat. 217 (July 22, 1982). The purpose of this Act was to strengthen the role of small businesses in federally funded research and development and help develop a stronger national base for technical innovation. EPA issues annual solicitations for Phase I and Phase II research proposals from science and technology-based firms. Through a phased approach to SBIR funding, EPA can determine whether the research idea, often on high-risk advanced concepts, is technically feasible, whether the firm can do high-quality research, and whether sufficient progress has been made to justify a larger Phase II effort. Phase II contracts are limited to small businesses that have successfully completed their Phase I contracts. The objective of Phase II is to commercialize the Phase I technology. The purpose of the Office of Small and Disadvantaged Business Utilization (OSDBU) is to stimulate and improve the involvement of small businesses and socially and economically disadvantaged small businesses in the overall EPA procurement process. OSDBU monitors and evaluates EPA’s performance in achieving the Agency’s contracting and subcontracting goals, and recommends the assignment of Small Business Representatives from the Small Business Administration (SBA), who carry out SBA’s procurement oversight duties pursuant to applicable laws and mandates. The purpose of the Small Business Ombudsman is to serve as a conduit for small businesses to access the Environmental Protection Agency and to facilitate communications between the small business community and the agency. 
The office reviews and resolves disputes between small businesses and the Environmental Protection Agency and works with Environmental Protection Agency personnel to increase their understanding of small businesses in the development and enforcement of environmental regulations. The Clean Air Act Amendments of 1990, Pub. L. No. 101-549, 104 Stat. 2399 (Nov. 15, 1990), required that all states develop a program to assist small businesses in meeting the requirements of the Act. Such assistance includes, but is not necessarily limited to, adequate mechanisms to assist small businesses with compliance, pollution prevention, accidental release detection and prevention, and permit assistance and obligations. Section 507 of the Clean Air Act discusses specifics of the Small Business Assistance Program (SBAP). The SBAP is non-regulatory in nature and all services are confidential and free of charge. The SBAP is divided into three major components: the Compliance Advisory Panel, the Ombudsman, and the Assistance Program. Due to geography, demographics, and the unique environmental issues in each state, the structure of each program may vary. WasteWise is a free, voluntary partnership program through which organizations eliminate costly municipal solid waste and select industrial wastes, benefiting their bottom line and the environment. WasteWise is a flexible program that allows partners to design their own waste reduction programs tailored to their needs. Large and small businesses from any industry sector may participate. Institutions, such as hospitals and universities, non-profits, and other organizations, as well as state, local and tribal governments, may also participate. The Goddard Space Flight Center and the Jet Propulsion Laboratory, both within the National Aeronautics and Space Administration, are the training centers for technical training courses in the fabrication, assembly and inspection of flight and ground support equipment. 
Regional Technology Transfer Centers expedite technology transfer and spur economic development. The program divides the nation into six regions and relies on a network within each region to provide direct and timely services to companies and other institutions nationwide. The cooperative agreements covering these Centers expired at the end of 2006. Services will be offered through an outside contractor beginning in March 2007. The Innovative Partnerships Program fosters technology partnerships, commercialization and innovation in support of the agency’s overall mission and national priorities. The Innovative Partnerships Program includes the Office of Technology Transfer, which has a mission to (1) facilitate the transfer of technology developed by the National Aeronautics and Space Administration, and to which the agency has title, to the private sector for commercial application and other benefits to the nation; (2) facilitate partnerships with the private sector and other external entities to jointly develop technology with both defense and civilian uses and infuse such technology into the agency’s missions; and (3) protect the government’s rights in its inventions. Tech Briefs reports monthly on technologies resulting from research funded by the National Aeronautics and Space Administration (NASA) that are releasable for dissemination to the public. It is not restricted to commercially significant technologies. Tech Briefs are typically cutting-edge reports on research and emerging technologies. Until fiscal year 2007 NASA funded the publication under a cooperative agreement. The Enterprise Engine program created a venture capital fund to provide the National Aeronautics and Space Administration with earlier and broader exposure to emerging technologies and to leverage external venture capital to develop products likely to support the agency’s mission. Effective fiscal year 2007 the program was terminated. 
The Small Business Innovation Research (SBIR) program funds research by small businesses to meet many of the agency’s research and development requirements. The SBIR program was established in 1982 to provide small businesses with increased federal research and development opportunities. Modeled after the Small Business Innovation Research program, the Small Business Technology Transfer program is aimed specifically at technology transfer. The goal is to translate basic research into economic advantage by advancing productivity growth and international economic competitiveness. The NASA Acquisition Internet Service (NAIS) is a Web site from which industry has immediate access to current acquisition information over the Internet. Users may subscribe to receive email notifications on acquisitions of interest. NAIS is a feeder system for Federal E-Gov systems such as Federal Business Opportunities. NAIS provides industry with links to reference information such as regulations, provisions, handbooks, and guidance. NAIS also provides industry with a central location to find each NASA field Center’s procurement home page. The Mentor Protégé Program is designed to provide incentive to the agency’s major prime contractors to assist small disadvantaged business concerns, Historically Black Colleges and Universities, minority institutions, and women-owned small businesses in expanding their technical capabilities into high technology areas where such firms are currently under-represented. 
The Office of Small Business Programs is responsible for integrating all categories of small businesses (small businesses, small disadvantaged businesses, women-owned small businesses, veteran- and service-disabled veteran-owned small businesses, Historically Underutilized Business Zone (HUB Zone) small businesses, and minority-serving institutions) into the competitive base of contractors from which the National Aeronautics and Space Administration and its various centers regularly purchase goods and services. TechFinder is a resource that enables commercial and private users to perform simple or advanced searches or request more detailed information on technology opportunities, licensing opportunities, past success stories, and featured technology leads. 

Appendix XVIII: Small Business Administration – Programs that Offer Services to Small Manufacturers and Types of Services 

The Pollution Control Loans program is designed to provide financing to eligible small businesses for the planning, design, or installation of a pollution control facility. This facility must prevent, reduce, abate or control any form of pollution, including recycling. The loans are Small Business Act Section 7(a) loans with a special purpose of pollution control. The Qualified Employee Trusts Loan Program is designed to provide financial assistance to employee stock ownership plans. The employee trust must be part of a plan sponsored by the employer company and qualified under regulations set by either the Internal Revenue Service Code (as an Employee Stock Ownership Plan or ESOP) or the Department of Labor (the Employee Retirement Income Security Act or ERISA). Section 7(a) Loan Guarantees help creditworthy small businesses, including manufacturers, meet financing needs when dealing with commercial bankers. 
These firms are often denied conventional financing because the loans they seek are too small for private banks to pursue or because they need loans for a longer period of time than a lender is willing to accept. This is the Small Business Administration’s most flexible business loan, and can be used for a variety of general business purposes including working capital, machinery and equipment, furniture and fixtures, land and building, leasehold improvements, and debt refinancing (under special conditions). The U.S. Community Adjustment and Investment Program was established to assist U.S. companies that are doing business in areas of the country that have been negatively affected by the North American Free Trade Agreement (NAFTA) based on job losses and the unemployment rate. Funds administered by Treasury allow for the payment of fees on eligible loans. These fees include the Section 7(a) program guarantee fee (and subsidy) and the Section 504 Certified Development Company (CDC) Program guarantee, CDC and lender fees. This reduces borrower costs and increases the availability of these business assistance programs. Eligibility is limited to businesses that reside in one of the more than 230 counties in 29 states that are currently designated as negatively affected by NAFTA. The Export Legal Assistance Network offers free initial consultations with international trade attorneys from the Federal Bar Association to small businesses interested in exporting. Attorneys help businesses navigate international legal issues, such as patents, copyrights, and trademarks; help clients understand basic contractual, tax and regulatory requirements; provide an indication of priorities among them; and give businesses basic information on programs at other institutions that may be able to help, such as international departments of nearby banks, freight forwarders, insurance companies with international experience, and other government programs. 
The Export Working Capital Program is a line of credit for financing foreign accounts receivable and export inventory. It is a transaction-based program and can be revolving or non-revolving. The Small Business Administration provides a 90 percent guarantee to the lender. Recipients are usually businesses that have been operating for at least 12 months prior to the application. Proceeds can be used to finance materials and labor needed to manufacture or to purchase goods and services for sale in foreign markets. Funds cannot be used to purchase long-term fixed assets. Loans are generally for 12 or fewer months, but can be reissued for additional 12-month periods. International Trade Loans help small businesses engaged in exporting, preparing to engage in exporting, or adversely affected by competition from imports. The Small Business Administration guarantees as much as $1.25 million in combined working-capital and facilities and equipment loans. Proceeds can be used for fixed assets or working capital. Export Express provides loans to assist small businesses in developing or expanding export markets. Eligible uses of proceeds include: 1) financing export-development activities such as participation in a foreign trade show or translation of product literature, 2) transaction-specific financing for overseas orders, 3) revolving lines of credit for export purposes, and 4) acquiring, constructing, renovating, improving or expanding facilities or equipment used in the U.S. to produce goods or services for export. U.S. Export Assistance Centers are multi-federal agency offices that provide marketing, product assistance and financial assistance to small- and medium-size U.S. businesses that would like to export. There are twenty centers nationwide. 
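The Export Working Capital Program's 90 percent guarantee described above can be illustrated with simple arithmetic. The following minimal sketch is not official SBA accounting; the function name and the assumption that the guarantee applies to the loan's outstanding balance are the author's own.

```python
# Illustrative sketch only: assumes the 90 percent Export Working Capital
# Program guarantee applies to the loan's outstanding balance at default.
# Names here are hypothetical, not SBA terminology.
GUARANTEE_RATE = 0.90  # SBA guarantee rate on EWCP loans

def guarantee_split(outstanding_balance):
    """Return (SBA's guaranteed share, lender's unguaranteed share)."""
    sba_share = outstanding_balance * GUARANTEE_RATE
    lender_share = outstanding_balance - sba_share
    return sba_share, lender_share

# For a $500,000 balance, the SBA's share is $450,000 and the lender
# retains $50,000 of exposure.
sba, lender = guarantee_split(500_000)
```

The point of the split is the lender's retained 10 percent, which keeps the lender exposed to some loss and therefore motivated to underwrite carefully.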
Through the Surety Guarantee program, the Small Business Administration (SBA) can guarantee bonds for contracts up to $2 million, covering bid, performance and payment bonds for small and emerging contractors who cannot obtain surety bonds through regular commercial channels. SBA’s guarantee gives sureties an incentive to provide bonding for eligible contractors, and thereby strengthens a contractor’s ability to obtain bonding and greater access to contracting opportunities. A surety guarantee, an agreement between a surety and the SBA, provides that SBA will assume a predetermined percentage of loss in the event the contractor should breach the terms of the contract. Small Business Investment Companies (SBIC) are privately owned and managed investment firms that provide venture capital and startup financing to new and already established small businesses to ensure they have access to the long-term financing and venture capital they need to maintain and expand their operations. The Small Business Administration licenses and regulates the SBICs, and supports them with government-backed funds that are invested in small enterprises. SBICs are profit-motivated; they use their own capital along with funds borrowed at favorable rates through the federal government to invest in small businesses in exchange for a share in the success of the small business if it grows and prospers. CAPlines is a loan umbrella program that meets short-term and cyclical working-capital needs. There are five short-term working capital loan programs for small businesses under this program: Seasonal Line; Contract Line; Builders Line; Standard Asset-Based Line; and Small Asset-Based Line. This is a Section 7(a) program. Certified Development Company Guaranteed Loans (Section 504) are long-term financing tools for economic development within a community. 
The Section 504 program provides growing businesses with long-term, fixed-rate financing for major fixed assets, such as land and buildings. A Certified Development Company (CDC) is a nonprofit corporation set up to contribute to the economic development of its community. CDCs work with the Small Business Administration (SBA) and private-sector lenders to provide financing to small businesses. The maximum SBA debenture is $1,500,000 when meeting the job creation criteria or a community development goal. Generally, a business must create or retain one job for every $50,000 provided by the SBA except for “Small Manufacturers” which have a $100,000 job creation or retention goal (see below). The maximum SBA debenture is $2.0 million when meeting a public policy goal. Proceeds from 504 loans must be used for fixed asset projects such as: purchasing land and improvements, including existing buildings, grading, street improvements, utilities, parking lots and landscaping; construction of new facilities, or modernizing, renovating or converting existing facilities; or purchasing long-term machinery and equipment. The Section 504 Program cannot be used for working capital or inventory, consolidating or repaying debt, or refinancing. The Defense Loan and Technical Assistance (DELTA) program is designed to help eligible small business contractors shift from defense to civilian markets. Small businesses are eligible for financial and technical assistance if they are prime contractors, subcontractors, or suppliers detrimentally impacted by the closure, or substantial reduction, of a Defense installation or program, or if the community they are in has been detrimentally impacted by such actions. Financial assistance is provided through the Small Business Administration’s existing Section 7(a) and Section 504 programs. Technical assistance is provided through small business development centers, SCORE, and other federal agencies and other providers. 
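The Section 504 job-creation criteria above lend themselves to a quick calculation. The sketch below only illustrates the stated ratios (one job per $50,000 of SBA debenture, or per $100,000 for a "Small Manufacturer"); rounding up to a whole job and the function name are the author's assumptions, not SBA policy.

```python
# Hypothetical illustration of the Section 504 job-creation ratios.
# Rounding up to a whole job is an assumption, not stated SBA policy.
import math

def required_jobs(debenture, small_manufacturer=False):
    """Jobs to create or retain for a given SBA debenture amount."""
    per_job = 100_000 if small_manufacturer else 50_000
    return math.ceil(debenture / per_job)

# At the $1,500,000 maximum debenture, the ratios imply 30 jobs for a
# typical borrower but only 15 for a small manufacturer.
required_jobs(1_500_000)                           # 30
required_jobs(1_500_000, small_manufacturer=True)  # 15
```

As the example shows, the higher $100,000 ratio for small manufacturers halves the job-creation burden on a given debenture amount.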
It is a joint program with Defense. To be eligible for this program, a small business must have derived at least 25 percent of its revenues from Defense or Defense-related Energy contracts or subcontracts in support of defense prime contracts in any one of five prior operating years. The Prequalification Loan is a pilot program that helps low-income borrowers, disabled business owners, new and emerging businesses, veterans, exporters, rural and specialized industries develop viable loan application packages and secure loans up to $250,000. The program is administered by the Small Business Administration’s (SBA) Office of Field Operations and SBA district offices. Intermediary organizations work with applicants to make sure their business plans are complete and that their applications are eligible and have credit merit. If the intermediary organization is satisfied that the application has a chance for approval, it will send it to SBA for processing. Small Business Development Centers serving as intermediaries do not charge fees for loan packaging, while for-profit organizations charge fees. The Microloan Program provides very small loans to start-up, newly established, or growing small businesses. Under this program, the Small Business Administration makes funds available to nonprofit community-based lenders (intermediaries) which, in turn, make loans to eligible borrowers in amounts up to a maximum of $35,000. The average loan size is about $13,000. Applications are submitted to the local lender and all credit decisions are made at the local level. Individuals and small businesses applying for microloans may be required to fulfill training and/or planning requirements before a loan application is considered, and lenders are required to provide business training and technical assistance. 
The Disaster Loan Program offers low-interest, long-term loans to homeowners, renters, and businesses of all sizes that are trying to rebuild their homes and businesses in the aftermath of a disaster. Two types of loans, Physical Disaster and Economic Injury, are available. Physical Disaster Loans are available to businesses of all sizes and nonprofit organizations for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged privately-owned real and/or personal property. This is the only SBA assistance that is not limited to small businesses. Economic Injury Disaster Loans are available only to small businesses to provide necessary working capital until normal operations resume after a disaster. SCORE uses the management experience and business acumen of retired and active corporate professionals and small business owners who volunteer their time and expertise to assist small businesses and prospective businesses. Counseling services are free, and business workshops and seminars are offered at low cost. SCORE pioneered online counseling with the launch of its Cyber Chapter in 1996. Small Business Development Centers (SBDC) serve as central sources for disseminating information and guidance to small businesses, company owners, and entrepreneurs, many of whom cannot afford private consulting services. There is at least one SBDC in each state, each with a network of service locations, to ensure that they are easily accessible. At least 50 percent of clients are small businesses. The program is a cooperative effort of the private sector, the educational community and federal, state and local governments. Most SBDCs can help with marketing, financing, feasibility studies, and technical problems. Special SBDC programs and economic development activities include international trade assistance, technology and manufacturing technical assistance, procurement assistance, venture capital formation and rural development. 
SBDCs focus on providing extended-term counseling to small businesses rather than short-term assistance. There are currently 99 Women's Business Centers (WBC) in 44 states and 3 territories. The mission of the WBC program is to target the economically and socially disadvantaged population. WBCs promote the growth of women-owned businesses through training, counseling, mentoring and technical assistance programs. Each WBC provides assistance or training in finance, management, marketing and procurement. In addition, each WBC tailors its program to the needs of its constituency and many offer programs and counseling in two or more languages. The Small Business Training Network provides online training to meet the informational needs of prospective and existing small businesses. The Prime Contracts Program helps increase small businesses’ share of government contracts. Small Business Administration (SBA) procurement center representatives (PCR), located at SBA procurement area offices and federal buying centers across the country, help small businesses obtain federal contracts. There are two types of PCRs: traditional and breakout. Traditional PCRs work to increase the number of procurements set aside for small businesses. Breakout PCRs work to remove components or spare parts from sole source procurements to procurements through open competition, which generates savings for the federal government. The Subcontracting Assistance Program promotes the prime contractors’ use of small businesses. Small Business Administration’s commercial marketing representatives review the subcontracting plans of prime contractors that have one or more contracts that exceed $500,000 to identify opportunities for small businesses to serve as subcontractors. The Business or Procurement Matchmaking Initiative helps increase small businesses’ access to federal contracting opportunities. 
Federal, county and state agencies, as well as private sector contractors, are matched with small business sellers either in person or through facilitated phone conferences. The Natural Resources Assistance Program is intended to ensure that small businesses obtain a fair share of government property sales and leases through small business set-asides. The Small Business Administration also provides counseling and other assistance to small businesses on government sales and leasing. The program covers five categories of federal resources: 1) timber and related forest products, 2) strategic materials from the national stockpile, 3) royalty oil, 4) leases involving rights to minerals, coal, oil, and gas, and 5) surplus real and personal property. The Historically Underutilized Business Zone (HUBZone) Empowerment Contracting Program stimulates economic development and creates jobs in urban and rural communities by providing federal contracting preferences to small businesses. These preferences go to small businesses that obtain HUBZone certification by, among other things, employing staff and maintaining a principal office in a designated HUBZone. The Small Business Act Section 8(a) Program focuses on business development and is designed to foster the growth and competitive viability of Section 8(a) firms through technical assistance delivered over a 9-year period. One of the benefits of the program is that Section 8(a) firms, through their own self-marketing efforts, can obtain sole source contracts of up to $5.5 million for manufacturing and $3.5 million for all other purposes that federal agencies make available for the Section 8(a) program. Qualified firms can also participate in restricted competitions for federal contracts. The Mentor Protégé program enhances the capability of Section 8(a) participants to compete more successfully for federal contracts. 
The program encourages private sector relationships and expands the Small Business Administration’s efforts to identify and respond to the developmental needs of Section 8(a) clients. Mentors provide technical and management assistance, financial assistance in the form of equity investments and/or loans, subcontract support, and assistance in performing prime contracts through joint venture arrangements with Section 8(a) firms. The Small Disadvantaged Businesses (SDB) Certification Program makes qualified small businesses eligible for special bidding benefits in federal procurement. Under new federal procurement regulations, the Small Business Administration certifies SDBs for participation in federal procurements to help overcome the effects of discrimination. Evaluation credits available to prime contractors increase subcontracting opportunities for SDBs. While the Section 8(a) Program offers a broad scope of assistance to socially and economically disadvantaged firms, SDB certification strictly pertains to benefits in federal procurement. Section 8(a) firms automatically qualify for SDB certification. The Certificate of Competency Program allows a small business to appeal a contracting officer’s determination that it is unable to fulfill the requirements of a specific government contract on which it is the apparent low bidder. When the small business applies for a Certificate of Competency, the Small Business Administration industrial and financial specialists conduct a detailed review of the firm’s capabilities to perform on the contract. The Small Business Innovation Research (SBIR) program is designed to ensure that small, high-technology firms have access to federal research and development (R&D) funds to pursue advanced technologies and their commercial applications. SBIR is a competitive three-phase program that reserves a specific percentage of R&D funding at certain federal agencies for awards to small businesses. 
Currently, 11 other federal agencies provide the grant funds and oversee the projects. The Small Business Administration monitors the SBIR program and provides guidance. SBIR funds the critical startup and development stages and encourages the commercialization of the resulting technology, product, or service. The Small Business Technology Transfer Program (STTR) expands funding opportunities in the federal innovation research and development (R&D) arena through public/private sector partnerships to include joint venture opportunities for small businesses and nonprofit research institutions. STTR is a competitive three-phase program that reserves a specific percentage of federal R&D funding for award to small business and nonprofit research institution partners. Five federal departments and agencies (the Departments of Defense, Energy, and Health and Human Services as well as the National Aeronautics and Space Administration and the National Science Foundation) are required by STTR to reserve a portion of their R&D funds for awards to small business/nonprofit research institution partnerships. Sub-Net is a Web site where prime contractors post subcontracting opportunities. These may or may not be reserved for small businesses, and they may include either solicitations or other notices. Small businesses can review this Web site to identify opportunities in their areas of expertise. While the Web site is designed primarily as a place for large businesses to post solicitations and notices, it is also used by federal agencies, state and local governments, non-profit organizations, colleges and universities, and even foreign governments for the same purpose. Tech-Net is an electronic gateway of technology information and resources for and about small high-tech businesses. 
It includes a search engine for researchers, scientists, and state, federal and local government officials; a marketing tool for small firms; and links to potential investment opportunities for investors and other sources of capital. The system is also linked to technology sources of information, assistance, and training. Section 7(j) of the Small Business Act authorized the Small Business Administration to enter into grants, cooperative agreements or contracts with public or private organizations to deliver management or technical assistance to individuals and enterprises eligible for assistance under the Act. This assistance is delivered through the Section 7(j) Management and Technical Assistance Program to Section 8(a) certified firms, small disadvantaged businesses, businesses operating in areas of high unemployment or low income, or firms owned by low-income individuals. Section 7(j) program grants, cooperative agreements, or contracts are awarded to qualified service providers that have the capability to provide business development assistance to the eligible clients. Section 7(j) program funding is not available to finance a business, purchase a business, or use as expansion capital for an existing business. Financial assistance under the program may be given for projects that respond to needs outlined in a Section 7(j) program solicitation announcement, or for an unsolicited proposal that could provide valuable business development assistance for Section 8(a) and other socially and economically disadvantaged small businesses. Technical and management assistance includes an executive education program for owners and senior officers. 
The Service-Disabled Veteran Owned Small Business Concern (SDVOSBC) Program establishes the criteria to be used in federal contracting to determine service-disabled veteran status; business ownership and control requirements; guidelines for establishing sole source and set-aside procurement opportunities; and procurement protest and appeal procedures for service-disabled veteran owned small businesses. The Appalachian Regional Commission’s Information Age Appalachia telecommunications and information technology program was created to promote the development of telecommunications in Appalachia, with a special focus on helping the Region’s distressed counties. The focus of Information Age Appalachia is not only on access to infrastructure, but also, and more importantly, on applications that use that access. Instead of simply promoting technology by itself, the program seeks to stimulate economic growth and improve the standard of living in the Region through technology-related avenues. Two key areas of the program are e-commerce and technology-sector job creation. The focus of the program is to ensure rural areas of Appalachia have access to broadband services. Training and education are included in the program’s activities. The program has provided broadband awareness training and general e-commerce training throughout the Appalachian Region. It has also worked with broadband service providers in helping rural communities obtain broadband access in unserved and underserved areas. Small, homegrown businesses play an important role in creating self-sustaining local economies and improving the quality of life in Appalachia. The Entrepreneurship Initiative is a multi-year, $31 million effort that seeks to provide communities with tools to assist entrepreneurs in starting and expanding local businesses. Two key activities of the Initiative include giving entrepreneurs greater access to capital and educating and training entrepreneurs. 
The Business Development Revolving Loan Fund is a pool of money used by an eligible grantee for the purpose of making loans to create and/or retain jobs. As loans are repaid by the borrowers, the money is returned to the revolving loan fund to make other loans, creating an ongoing or “revolving” financial tool to retain and create private-sector jobs. Expanding trade opportunities for Appalachian businesses is an important strategy for increasing job opportunities and per capita income in the Region. The Export Trade Advisory Council (ETAC) advises the Commission on trade policy issues, promotes advocacy in national and regional venues, and recommends specific programs for promoting rural export trade in Appalachia. The ETAC has initiated a number of projects designed to help small and medium-sized Appalachian businesses increase their export sales. Its activities include education and training, market entry for small and medium-sized firms, advocacy, and research. The Appalachian Regional Commission’s Area Development Program seeks to augment the Highway Program and bring more of Appalachia’s people into America’s economic mainstream. The Asset-Based Development Initiative seeks to help communities identify and leverage local assets to create jobs and build prosperity while preserving the character of their community. Development strategies include, among other things, capitalizing on traditional arts, culture, and heritage; adding value to farming through specialized agricultural development, including processing specialty food items, fish farming, and organic farming; and converting overlooked and underused facilities into industrial parks, business incubators, or educational facilities. The Robert C. Byrd National Technology Transfer Center (NTTC), a 501(c)(3) organization, was established in 1989 to link U.S. 
industry with federal laboratories and universities that have the technologies, facilities, and researchers that industry needs to maximize product development opportunities. The NTTC provides technology assessment services and serves its clients with an experienced professional staff that includes intellectual-property management experts, scientists and engineers, computer information specialists and programmers, market analysts, Web designers, security experts, outreach specialists, and technology transfer negotiators. In addition, the NTTC houses a demonstration and training laboratory in which software and other technologies are tested and demonstrated. Twelve agencies participate, including 5 agencies in the Department of Commerce (the International Trade Administration's Export Assistance Centers, Economic Development Administration, National Institute of Standards and Technology's Manufacturing Extension Partnership, Minority Business Development Agency, and Office of Intellectual Property Rights), the Small Business Administration's Small Business Development Centers, the Export-Import Bank, the Environmental Protection Agency, and the Departments of Agriculture, Defense, Energy, and Labor. The Interagency Network of Enterprise Assistance Providers brings together federal agencies to explore the concept, feasibility, and framework needed to develop a coordinated network of assistance programs that meets the needs of small businesses and manufacturers. Group members meet monthly to learn about each other's programs and discuss mutually beneficial opportunities for pilot collaborations. To date, the group has discussed such topics as successful export strategies for small manufacturers, the development of a Web site for the coordinated network, and available small business innovation information.
Eleven agencies participate, including the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Interior, Transportation, and Veterans Affairs, as well as the Environmental Protection Agency and National Aeronautics and Space Administration. The Interagency Working Group on Technology Transfer is a longstanding interagency effort that includes senior policy officials from most of the federal science and technology agencies. The group's activities are coordinated through the Office of Technology Policy in the Department of Commerce. The group meets monthly to discuss policy issues and related topics of significant interest to the federal laboratory technology transfer community. One of the continuing discussion interests over the last several years has been the extent to which existing federal technology transfer mechanisms and programs work effectively to facilitate interaction with the private sector in such areas as the transfer of intellectual property rights, cooperative research and development relationships, and new technology development activities. Thirteen organizations participate, including the Army, Navy, Air Force, and Defense Logistics Agency in the Department of Defense, National Aeronautics and Space Administration, Department of Energy, Department of Labor, National Institute of Standards and Technology in the Department of Commerce, Federal Aviation Administration, General Services Administration, National Security Agency, and U.S. Postal Service, as well as the Canadian Department of Defense. The Government-Industry Data Exchange Program (GIDEP) is a cooperative activity in which government and industry participants share technical information during the research, design, development, production, and operational phases of the life cycle of systems, facilities, and equipment, thereby reducing or eliminating expenditures of resources, lowering total ownership cost, and increasing reliability, readiness, and safety.
Seven agencies participate, including the National Institute of Standards and Technology in the Department of Commerce, Department of Energy, Defense Advanced Research Projects Agency in the Department of Defense, two agencies in the Department of Health and Human Services (Food and Drug Administration and National Institutes of Health), the National Aeronautics and Space Administration, and the National Science Foundation. The Multi-Agency Tissue Engineering Science (MATES) Interagency Working Group helps keep federal agencies involved in tissue engineering informed of each other's activities and helps the agencies better coordinate their efforts in this rapidly growing field. The MATES Interagency Working Group was organized under the auspices of the Subcommittee on Biotechnology of the National Science and Technology Council. Twenty-eight federal agencies participate in the NNI, including the Office of Science and Technology Policy, Office of Management and Budget, Consumer Product Safety Commission, Environmental Protection Agency, Intelligence Technology Innovation Center, International Trade Commission, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, two agencies within the Department of Agriculture (Cooperative State Research, Education, and Extension Service and Forest Service), the Department of Defense, four agencies within the Department of Commerce (Bureau of Industry and Security, National Institute of Standards and Technology, Technology Administration, and U.S. Patent and Trademark Office), the Departments of Education and Energy, three agencies within the Department of Health and Human Services (Food and Drug Administration, National Institutes of Health, and National Institute for Occupational Safety and Health), one agency within the Department of the Interior (U.S. Geological Survey), and the Departments of Homeland Security, Justice, Labor, State, Transportation, and Treasury.
The National Nanotechnology Initiative (NNI) is a federal research and development (R&D) program established to coordinate multi-agency efforts in nanoscale science, engineering, and technology. Thirteen participating agencies have an R&D budget for nanotechnology. Other federal organizations contribute through studies, applications of the results from agencies that perform R&D, and other collaborations. The NNI is managed within the framework of the National Science and Technology Council, the Cabinet-level council that coordinates science, space, and technology policies across the federal government. In addition to funding research, federal support through the NNI provides crucial funds for the creation of university and government nanoscale R&D laboratories and helps educate the workforce necessary for the future of nanotechnology. The NNI also plays a key role in fostering cross-disciplinary networks and partnerships and in disseminating information. Finally, it enables small businesses to pursue opportunities offered by nanotechnology and encourages all levels of business to exploit those opportunities. Six agencies within the Department of Commerce participate, including the Technology Administration, the National Institute of Standards and Technology, the National Telecommunications and Information Administration, the International Trade Administration, the Economics and Statistics Administration, and the U.S. Patent and Trademark Office. Commerce has a leading role within the federal government to ensure that radio frequency identification (RFID) is understood, that both industry and consumer concerns and views are heard, and that accurate information about the features and abilities of RFID is disseminated.
Twenty agencies participate, including the Departments of Agriculture, Commerce, Defense, Energy, Homeland Security, Interior, Labor, Transportation, and Treasury, and the Council of Economic Advisers, Environmental Protection Agency, Export-Import Bank of the United States, National Economic Council, National Security Council, Office of Management and Budget, Overseas Private Investment Corporation, Small Business Administration, U.S. Agency for International Development, U.S. Trade and Development Agency, and United States Trade Representative. The Trade Promotion Coordinating Committee (TPCC) is composed of all the federal government's agencies involved in export promotion. The present TPCC was formed in 1993 by Executive Order No. 12870, 58 Fed. Reg. 51753 (Sept. 30, 1993), pursuant to the Export Enhancement Act of 1992, Pub. L. No. 102-429 § 201, 106 Stat. 2186 (Oct. 21, 1992); 15 U.S.C. § 4727. The Secretary of Commerce is the designated chairperson. The TPCC is mandated to streamline export programs, leverage resources across agencies, develop a national export strategy, and report annually to Congress. Recent initiatives include joint marketing, such as Export.gov; joint training, such as the TPCC Interagency Trade Officer Training Program; program integration, such as through the Small Business Administration and Export-Import Bank Co-Guarantee Program; strategic partnerships to broaden business outreach, such as with states, associations, and corporate partners; and coordination in priority markets, such as in key emerging markets. Davila, Natalie A. "Evaluating Manufacturing Extension: A Multidimensional Approach." Economic Development Quarterly. Vol. 18, no. 3 (2004): 286-302. Ehlen, Mark A. "The Economic Impact of Manufacturing Extension Centers." Economic Development Quarterly. Vol. 15, no. 1 (2001): 36-44.
Feldman, Maryann P., and Maryellen R. Kelley. “Leveraging Research and Development: Assessing the Impact of the U.S. Advanced Technology Program.” Small Business Economics. Vol. 20, no. 2 (2003): 153-165. Commerce Information Technology Solutions Next Generation Governmentwide Acquisition Contract. GAO-06-791R. Washington, D.C.: June 14, 2006. Trade Adjustment Assistance: Experiences of Six Trade-Impacted Communities. GAO-01-838. Washington, D.C.: August 24, 2001. Trade Adjustment Assistance: Impact of Federal Assistance to Firms Is Unclear. GAO-01-12. Washington, D.C.: December 15, 2000. Reeder et al. The National Institute of Standards and Technology’s Manufacturing Extension Partnership Program – Report 1: Re-examining the Core Premise of the MEP Program. Washington, D.C.: National Academy of Public Administration, 2003. Shapira, Philip. “US Manufacturing Extension Partnerships: Technology Policy Reinvented?” Research Policy. Vol. 30 (2001): 977-992. Voytek, Kenneth P., Karen L. Lellock, and Mark A. Schmit. “Developing Performance Metrics for Science and Technology Programs: The Case of the Manufacturing Extension Partnership Program.” Economic Development Quarterly. Vol. 18, no. 2 (2004): 174-185. Ahmad, Mohamad, Radostin Krastev, and Arkadiusz Puciato. Military Business Success. MBA Professional Report. Monterey, CA: Naval Postgraduate School, 2004. Contract Management: Benefits of the DoD Mentor-Protégé Program Are Not Conclusive. GAO-01-767. Washington, D.C.: July 19, 2001. Defense Commissaries: Additional Small Business Opportunities Should Be Explored. GAO-03-160. Washington, D.C.: December 12, 2002. Defense Manufacturing Technology Program: More Joint Projects and Tracking of Results Could Benefit Program. GAO-01-943. Washington, D.C.: September 28, 2001. Green, Gregory Sean. Army Small Business Innovation Research: A Survey of Phase II Awardees. Thesis. Monterey, CA: Naval Postgraduate School, 2001. Peete, Danny A., and Paul J. Componation. 
"Predicting USMC SBIR Phase I to II Transition Success by Evaluating Use of Systems Engineering Capabilities." Engineering Management Journal. Vol. 15, no. 3 (2003): 21-27. Department of Energy: Achieving Small Business Prime Contracting Goals Involves Both Potential Benefits and Risks. GAO-04-738T. Washington, D.C.: May 18, 2004. DOE Contracting: Improved Program Management Could Help Achieve Small Business Goal. GAO-06-501. Washington, D.C.: April 7, 2006. Small Business Participation in the Alaska Natural Gas Pipeline Project. GAO-05-860R. Washington, D.C.: August 4, 2005. Technology Transfer: Several Factors Have Led to a Decline in Partnerships at DOE's Laboratories. GAO-02-465. Washington, D.C.: April 19, 2002. Toole, Andrew A., and Dirk Czarnitzki. Biomedical Academic Entrepreneurship Through the SBIR Program. Cambridge, MA: National Bureau of Economic Research, 2005. Disadvantaged Business Enterprises: Critical Information Is Needed to Understand Program Impact. GAO-01-586. Washington, D.C.: June 1, 2001. Greenwood Consulting Group, Inc. A Survey of Business Incubators in Appalachia. A report prepared at the request of the Appalachian Regional Commission. July 2005. Plishker, Laurie, Gary Silverstein, and Joy Frechtling. Evaluation of the Appalachian Regional Commission's Vocational Education and Workforce Training Projects. A report prepared by Westat at the request of the Appalachian Regional Commission. January 2002. Export-Import Bank: Changes Would Improve the Reliability of Reporting on Small Business Financing. GAO-06-351. Washington, D.C.: March 3, 2006. Archibald, Robert B., and David H. Finifter. "Evaluating the NASA small business innovation research program: preliminary evidence of a trade-off between commercialization and basic research." Research Policy. Vol. 32, no. 4 (2003): 605-619. U.S. National Aeronautics and Space Administration. Commercial Technology Division. Office of Aerospace Technology. NASA SBIR Program: Commercial Metrics.
Washington, D.C.: 2002. Major Management Challenges and Program Risks: Small Business Administration. GAO-03-116. Washington, D.C.: January 1, 2003. SBA Disaster Loan Program: Accounting Anomalies Resolved but Additional Steps Would Improve Long-Term Reliability of Cost Estimates. GAO-05-409. Washington, D.C.: April 14, 2005. Small Business: HUBZone Program Suffers From Reporting and Implementation Difficulties. GAO-02-57. Washington, D.C.: October 26, 2001. Small Business: More Transparency Needed in Prime Contract Goal Program. GAO-01-551. Washington, D.C.: August 1, 2001. Small Business: Status of Small Disadvantaged Business Certifications. GAO-01-273. Washington, D.C.: January 19, 2001. Small Business Administration: Actions Needed to Provide More Timely Disaster Assistance. GAO-06-860. Washington, D.C.: July 28, 2006. Small Business Administration: Disaster Loan Program. GAO-02-210R. Washington, D.C.: November 16, 2001. Small Business Administration: Improvements Made, but Loan Programs Face Ongoing Management Challenges. GAO-06-605T. Washington, D.C.: April 6, 2006. Small Business Administration: Management Practices Have Improved for the Women's Business Center Program. GAO-01-791R. Washington, D.C.: June 13, 2001. Small Business Administration: Model for 7(a) Program Subsidy Had Reasonable Equations, but Inadequate Documentation Hampered External Reviews. GAO-04-9. Washington, D.C.: March 31, 2004. Small Business Administration: New Service for Lender Oversight Reflects Some Best Practices, but Strategy for Use Lags Behind. GAO-04-610. Washington, D.C.: June 8, 2004. Small Business Administration: Observations on the Disaster Loan Program. GAO-03-721T. Washington, D.C.: May 1, 2003. Small Business Administration: Response to September 11 Victims and Performance Measures for Disaster Lending. GAO-03-385. Washington, D.C.: January 29, 2003. Small Business Administration: Section 7(a) General Business Loans Credit Subsidy Estimates. GAO-01-1095R.
Washington, D.C.: August 21, 2001. Small Business Administration: SBA Followed Appropriate Policies and Procedures for September 11 Disaster Loan Applications. GAO-04-885. Washington, D.C.: August 31, 2004. Small Business Administration: Small Business Government Contracting Programs; Subcontracting. GAO-05-268R. Washington, D.C.: January 24, 2005. Small Business Administration: The Commercial Marketing Representative Role Needs to Be Strategically Planned and Assessed. GAO-03-54. Washington, D.C.: November 1, 2002. Waivers of the Small Business Administration’s Nonmanufacturer Rule Have Limited Effect. GAO-03-311R. Washington, D.C.: December 19, 2002. Ong, Paul M. “Set-aside contracting in SBA’s 8(a) program.” The Review of Black Political Economy. Vol. 28, no. 3 (2001): 59-71. Anonymous. “Analyzing SBIR.” Regulation Magazine. Vol. 23, no. 4 (2000): 14-15. Audretsch, David B., Juergen Weigand, and Claudia Weigand. “The Impact of the SBIR on Creating Entrepreneurial Behavior.” Economic Development Quarterly. Vol. 16, no. 1 (2002): 32-38. Contract Management: Impact of Strategy to Mitigate Effects of Contract Bundling on Small Business Is Uncertain. GAO-04-454. Washington, D.C.: May 27, 2004. Export Promotion: Government Agencies Should Combine Small Business Export Training Programs. GAO-01-1023. Washington, D.C.: September 21, 2001. Export Promotion: Trade Promotion Coordinating Committee’s Role Remains Limited. GAO-06-660T. Washington, D.C.: April 26, 2006. Federal Procurement: Trends and Challenges in Contracting With Women-Owned Small Businesses. GAO-01-346. Washington, D.C.: February 16, 2001. Federal Research: Observations on the Small Business Innovation Research Program. GAO-05-861T. Washington, D.C.: June 28, 2005. Federal Research and Development: Contributions to and Results of the Small Business Technology Transfer Program. GAO-01-867T. Washington, D.C.: June 21, 2001. 
Federal Research and Development: Contributions to and Results of the Small Business Technology Transfer Program. GAO-01-766R. Washington, D.C.: June 4, 2001. Information on the Number of Small Business Set-Asides Issued and Successfully Challenged. GAO-03-242R. Washington, D.C.: November 1, 2002. International Trade: Experts' Advice for Small Businesses Seeking Foreign Patents. GAO-03-910. Washington, D.C.: June 26, 2003. Small and Disadvantaged Businesses: Most Agency Advocates View Their Roles Similarly. GAO-04-451. Washington, D.C.: March 22, 2004. Small Business Contracting: Concerns About the Administration's Plan to Address Contract Bundling Issues. GAO-03-559T. Washington, D.C.: March 18, 2003. Small Business Innovation Research: Agencies Need to Strengthen Efforts to Improve the Completeness, Consistency, and Accuracy of Awards Data. GAO-07-38. Washington, D.C.: October 19, 2006. Small Business Subcontracting Report Validation Can Be Improved. GAO-02-166R. Washington, D.C.: December 13, 2001. Pretorius, Jacob V.R., and Christopher L. Magee. "Observations on collaborative practices and relative success of small technology-innovating firms supported by the US SBIR initiative." Entrepreneurship and Innovation Management. Vol. 5, nos. 1/2 (2005): 4-19. van der Vlist, Arno, Shelby Gerking, and Henk Folmer. "What Determines the Success of States in Attracting SBIR Awards?" Economic Development Quarterly. Vol. 18, no. 1 (2004): 81-90. In addition to the contact named above, Cheryl Williams (Assistant Director); Stephen Cleary; Bernice Dawson; Holly Gerhart; Cindy Gilbert; Nicole Harris; Matt Michaels; Rosario Montemayor; Alison O'Neill; and Jerome Sandau made key contributions to this report.

Small businesses engaged in manufacturing, typically those with 500 or fewer employees, comprise about 90 percent of all U.S. manufacturers and employ 6 million workers.
Recent studies have shown that small manufacturing businesses face a number of challenges in their efforts to remain competitive, including the inability to obtain operating and investment capital, a lack of familiarity with new business practices, and difficulty in finding independent advice and skilled employees. To help these businesses overcome such challenges, many federal agencies provide financial and nonfinancial technical services through targeted or general programs or create interagency work groups to better coordinate their efforts and more effectively support these businesses. In this context, GAO identified (1) federal programs that provide services to support small businesses engaged in manufacturing and (2) federal interagency efforts that focus on issues of concern to small manufacturing businesses. To identify these programs and efforts, GAO obtained documentation from 19 federal agencies. In commenting on a draft of this report, 18 of the 19 agencies made technical comments that we have incorporated as appropriate. GAO is not making recommendations in this report. GAO identified 254 federal programs that provide services to support the business sector, of which 5 provide services specifically to small businesses engaged in manufacturing and an additional 15 target manufacturers, regardless of their size. Seven of the 20 programs had data on the level of services provided to small manufacturing businesses; from fiscal years 2004 through 2006, these programs provided over $35 million and served from about 8,000 small manufacturing businesses in 2004 to over 9,000 in 2006. The 5 programs that target small businesses engaged in manufacturing provide primarily nonfinancial technical assistance to help firms improve the efficiency of their manufacturing operations and their quality control processes as well as to solve specific manufacturing problems.
These 5 programs also offer small manufacturing businesses general assistance with their strategic and business planning, accounting and financing, and sales and marketing. In addition, 1 of the 5 programs offers financial assistance. Of the 15 programs that provide services to manufacturers, regardless of their size, 9 offer only nonfinancial services similar to the 5 that target small manufacturing firms, and 6 also provide financial services. Small businesses engaged in manufacturing also can obtain services from 127 other federal programs that are available to all small businesses, regardless of their business type. Many of these programs provide general business and management services, and about 35 percent also offer financial services, such as loans or grants. Finally, small manufacturing businesses can obtain general business, export, and financial services from an additional 107 federal programs designed to help the business sector in general, regardless of the size or type of the business involved. Because not all of these programs gather data on the size of the businesses they serve, it is unclear how many small manufacturing firms received services from these general programs. GAO identified 20 federal interagency efforts that focus on supporting the business sector. Of these 20 efforts, 4 were created specifically to focus on the challenges that small businesses engaged in manufacturing face, and 2 were created to focus on issues relevant to manufacturers in general, regardless of their size. The agencies involved in 3 of the 4 interagency efforts that focus on the concerns of small manufacturing businesses collaborate to expand and coordinate their services through national networks of technical assistance centers. The 4th effort focuses on helping small manufacturing businesses improve the efficiency of their operations.
The 2 interagency efforts relevant to manufacturers in general focus on developing strategies to improve the competitiveness of manufacturers and on resolving issues associated with manufacturing-related research and development policies, programs, and budgets. The remaining 14 interagency efforts that GAO identified focus on the concerns of small businesses or of all businesses in general, which may include some issues that also are of concern to small manufacturing businesses.
In July 1996, the USDA IG concluded that the Forest Service's financial statements for fiscal year 1995 were unreliable. The IG's report cited numerous shortcomings in the Forest Service's accounting and financial data and information systems that precluded the agency from presenting accurate and complete financial information. For example, in reporting its fiscal year 1995 financial results, the Forest Service could not determine for what purposes $215 million of its $3.4 billion in operating and program funds were spent. In December 1996, we reported on how the inaccuracy of the financial statement data precluded the agency and the Congress from using this financial data to help make informed decisions about future funding for the Forest Service and raised questions about the reliability of program performance measures and certain budget data. Forest Service officials determined that corrective actions could not be completed in time to improve the Forest Service's fiscal year 1996 financial data. As a result, the agency did not prepare financial statements for fiscal year 1996. Instead, the Forest Service agreed to a three-party effort (the Forest Service, USDA's Office of the Chief Financial Officer (OCFO), and the IG) to correct the problems identified in the fiscal year 1995 IG audit report. On December 23, 1994, the Office of the Chief Financial Officer purchased a new accounting system, the Foundation Financial Information System (FFIS), to be implemented USDA-wide. Because of the reported financial deficiencies at the Forest Service, it was decided that the Forest Service would be one of the first USDA agencies to implement FFIS. While the overall responsibility and oversight for implementing FFIS rests with the USDA OCFO, implementation at the Forest Service is a joint effort between the Forest Service and the USDA OCFO. Forest Service management is responsible for the other corrective measures that are required to achieve financial accountability.
The Forest Service’s goal was to correct some of the deficiencies during fiscal year 1997 and to achieve financial accountability by the end of fiscal year 1999. In August 1997, we reported to your Committee that it is doubtful that the Forest Service can achieve financial accountability by the end of fiscal year 1999 if management and staff commitment waver, planned tasks are not accomplished, and sufficient resources are not provided. Our objectives were to monitor and report on the Forest Service’s (1) implementation of a new financial accounting system, (2) correction of certain accounting deficiencies, (3) resolution of key staffing and financial management organizational issues, and (4) commitment to achieving financial accountability. We reviewed steps taken by the Forest Service, USDA OCFO, and USDA IG to correct deficiencies in the Forest Service’s accounting and financial data and systems since we last reported to you on August 29, 1997. To assess the status of the Forest Service’s (1) effort to improve the reliability of its accounting and financial data and (2) its commitment to improvement, we reviewed the Forest Service’s financial health monitoring reports, the Forest Service’s Financial Management Strategy and Action Plan, project management plans, and other documents outlining improvement initiatives and their status. We also attended planning conferences where progress and critical tasks were identified, and interviewed regional and headquarters Forest Service officials. In addition, we reviewed two internal USDA assessments of FFIS implementation problems. We also interviewed IG officials and USDA’s Acting CFO about the status of the Forest Service’s corrective actions. We performed our review from September 1997 through February 1998 in accordance with generally accepted government auditing standards. 
We requested comments on a draft of this report from the Special Assistant to the Chief, Forest Service; the Acting Deputy Chief of Operations, Forest Service; the Acting Director of Financial Management, Forest Service; the Acting Chief Financial Officer, USDA, and his staff; and staff from the IG's office. These comments are discussed in the "Agency Comments and Our Evaluation" section. The new accounting system, FFIS, being implemented at the Forest Service is designed to be a fully integrated financial accounting and reporting system that the Forest Service is counting on to correct many of the agency's current financial shortcomings. FFIS was piloted at the Forest Service in three units, representing about one-third of all Forest Service transactions, as scheduled on October 1, 1997. However, the pilot units experienced many problems, primarily related to transferring data from other feeder systems to the new FFIS system. For example: FFIS initially rejected 45 percent of the data transferred to it from the procurement system, and the data had to be re-entered. These rejections occurred for various reasons, including the two systems maintaining inconsistent vendor data, such as different purchase order numbers for the same item. The timber sales system could not transfer data to FFIS; therefore, sales data had to be entered into FFIS manually. About 1,200 outstanding travel orders had to be rekeyed because the data in the new travel system could not be automatically transferred to FFIS. The agency is continuing to address these types of problems as they are identified during the implementation process. In addition, the agency's fiscal year 1998 first quarter budget execution reports that are required by the Office of Management and Budget contained estimated rather than actual amounts because FFIS could not generate actual information.
Unforeseen problems also have precluded the pilot units from using FFIS to produce other critical budgetary and financial reports that the Congress and the agency need to track obligations, assets, liabilities, revenues, and costs. These problems occurred, in part, because budgetary information had not yet been brought forward from the old accounting system, which is no longer functional in the pilot units. Also, the FFIS system generates accounting information at the detailed transaction level, but is currently unable to produce summary-level data needed to carry prior year balances forward as well as to determine current balances. The Forest Service subsequently discovered that its reports contained errors in the logic used to compute summary balances. These errors are being corrected. The overall problems with the system implementation reflect the lack of complete integrated testing of the system, including its reporting capability, prior to implementation in the pilot units. Our prior work at other agencies has shown that the lack of adequate testing of systems before piloting and implementation is one of the primary causes of new systems implementation failures. As a result of the reporting problems, the OCFO and the Forest Service are revising the scheduled completion of FFIS implementation in the pilot units from February 23, 1998, to March 30, 1998. The Acting Director of Financial Management has a team, including region and forest-level staff, at the National Finance Center working on correcting the identified reporting deficiencies. According to the USDA Acting CFO, the team plans to initially focus on monthly and quarterly reporting requirements and will address year-end and other reporting demands later in the fiscal year. If these problems are not resolved, FFIS cannot be successfully implemented in the remaining units as scheduled on October 1, 1998.
Further, the inability to produce budget and financial reports for the three pilot units subjects the assets of these units to a high level of risk and vulnerability to misuse. Another issue that must be addressed is to ensure that FFIS, as well as all other mission critical computer systems, is Year 2000 compliant. The Year 2000 problem is rooted in the way dates are recorded and calculated in many computer systems. For the past several decades, systems have typically used two digits to represent the year in order to conserve on electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from the year 1900. As a result, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. The version of FFIS purchased in 1994 and piloted in October 1997 is not Year 2000 compliant. Forest Service officials and the USDA Acting CFO told us that the FFIS pilot would have been delayed up to 1 year if the agency had waited for the vendor to release a Year 2000 compliant version of FFIS. We did not assess the decision-making process for procuring FFIS or the level of effort required to make the system Year 2000 compliant. The Office of Management and Budget reported that as of November 15, 1997, USDA had demonstrated insufficient evidence of adequate Year 2000 progress. However, the USDA Acting CFO said that USDA is taking steps to ensure that FFIS, as well as all other mission critical financial systems, becomes Year 2000 compliant before January 1, 2000. He further stated that FFIS will be Year 2000 compliant by the summer of 1998. We are initiating another assignment which will examine Year 2000 issues in USDA. 
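The two-digit date arithmetic described above can be illustrated with a short sketch (a hypothetical example for illustration only; this is not code from FFIS or any USDA system). Storing only the last two digits of the year makes 2000 indistinguishable from 1900, so subtraction and sorting across the century boundary go wrong:

```python
# Hypothetical illustration of the two-digit year problem; not actual
# FFIS or USDA code.

def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Compute elapsed years from two-digit year fields, as many
    legacy systems did to conserve storage."""
    return end_yy - start_yy

# A record opened in 1997 ("97") and closed in 2002 ("02"):
print(years_elapsed_two_digit(97, 2))   # -95 instead of the correct 5

# Sorting by two-digit year misorders dates after 1999:
years = [98, 99, 0, 1]                  # 1998, 1999, 2000, 2001
print(sorted(years))                    # [0, 1, 98, 99] -- 2000-01 before 1998-99

# One common remediation is "windowing": interpret two-digit years
# below a chosen pivot as 20xx and the rest as 19xx.
def expand_year(yy: int, pivot: int = 50) -> int:
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(2) - expand_year(97))  # 5, as expected
```

The pivot value of 50 here is an arbitrary choice for the sketch; real remediation efforts picked windows suited to the date ranges in their data, or expanded stored fields to four digits outright.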
The Forest Service has corrected some of the accounting deficiencies identified in the IG’s fiscal year 1995 audit report, but many of the serious shortcomings that we reported on in December 1996 still remain. The agency has implemented procedures and begun cleaning up some of the erroneous data recorded in its old accounting system, such as amounts other agencies owe to the Forest Service for work performed on a reimbursable basis. This process should help ensure that invalid data are not transferred to the new FFIS system. However, the reported $7.8 billion in land, buildings, roads, and equipment is still questionable because reliable values and quantities for many of these assets have not been established. Therefore, as we reported in December 1996, the Forest Service continues to be exposed to mismanagement and misuse of these assets. Each region was scheduled to complete equipment inventories verifying that all items are accounted for by July 31, 1997. Written certifications were due from the units to Financial Management staff by September 30, 1997. However, one of the Forest Service’s 10 regions did not complete its certification until February 12, 1998. The remaining inventories of land, buildings, and roads are to be completed and certified by June 30, 1998. Until these counts are completed and recorded in the accounting records, the correct quantities and costs of these assets will not be determinable. Therefore, the Congress cannot be assured that Forest Service requests for funds related to roads and buildings are fully warranted. In addition, the Forest Service still lacks supporting records (a subsidiary ledger system) to substantiate, at a detailed level, amounts the agency owes to others (accounts payable) or is owed by others (accounts receivable). 
Also, an IG official told us in February 1998 that the Forest Service still lacks adequate controls to ensure that all billings for timber sales and other revenue-generating activities are submitted and accurately recorded and recognized as income in a timely manner. Good internal controls over accounts payable and accounts receivable are critical to effective cash management. For example, if the Forest Service underbills a customer, does not bill a customer, or does not collect from a customer because of weak controls over its accounts receivable, it may have fewer funds to carry out its mission or it may require additional appropriations from the Congress. For fiscal year 1995, the Forest Service reported accounts receivable of $192 million and accounts payable of $298 million. Further, until the Forest Service completes its asset inventories and valuations and implements better controls over receivables and payables, Forest Service managers’ ability to accurately report program performance measures as well as monitor revenue and spending levels will be hampered. The Forest Service has a designated staff person to direct Forest Service aspects of FFIS implementation activities on a full-time basis. This individual is responsible for working closely with the Forest Service, OCFO, and an outside contractor to oversee implementation of the new system. In addition, key vacant financial management positions have been advertised and job offers have been made to some applicants. However, the Director and Deputy Director positions for Financial Management in Washington, D.C., have been vacant since October 3, 1997, and January 1, 1998, respectively, due to retirements. A Regional Fiscal Director is currently serving as Acting Director until this Senior Executive Service position, which is not within the exclusive hiring authority of the Forest Service, is filled. 
These positions require staff possessing a strong financial management background, including experience in accounting, budgeting, and financial systems. These positions are important to the implementation of FFIS as well as continuation of day-to-day Forest Service operations. Forest Service officials said they anticipate that all key financial management vacancies will be filled by March 1998. The Forest Service still has not concluded its evaluation of the agency’s overall financial management structure and workload requirements at all levels. Under the Forest Service’s current financial management organizational structure, the budget office reports to the Deputy Chief for Programs and Legislation, while the financial management office reports to the Deputy Chief for Operations. An accounting firm is currently evaluating the financial management organizational structure, workload, and staffing levels for the Forest Service. According to the Acting Director of Financial Management, this firm is scheduled to issue its report in March 1998. As we reported in August 1997, until this evaluation is completed, the Forest Service cannot determine if its current overall financial management organizational structure and resources are sufficient to accomplish the remaining tasks required to achieve financial accountability within established time frames. Top management (Forest Service Chief, Special Assistant to the Chief, and Acting Deputy Chief of Operations) has taken several steps to make needed improvements. For example, Forest Service officials have dedicated resources to implement corrective measures, participated in numerous planning sessions where critical tasks were discussed and milestones were established, and emphasized to staff the need to establish financial accountability. 
In addition, top management has initiated bimonthly meetings with Fiscal Directors from the 10 Forest Service regions and 7 Research Stations to monitor the overall financial management improvement effort, including FFIS implementation activities, and ensure that (1) initiatives are implemented as planned and (2) obstacles are identified and removed. Further, management has continued to stress the importance of financial management by including it as a performance rating element for both Fiscal Directors and Regional Foresters. Fiscal Directors or other key fiscal staff from 9 of the 10 regions participated in a recent planning meeting where the three pilot units presented information on implementation problems and provided advice on how the remaining regions could better prepare for successful FFIS implementation. Participants also reviewed the agency’s FFIS implementation plan, identified and discussed remaining activities, and discussed ways to address staff shortages. Given the importance of this meeting to the success of implementing FFIS agencywide, the absence of one region—which accounts for 12 percent of the Forest Service’s budget—raises concern about the region’s commitment and top management’s ability to effectively lead this effort toward financial accountability. The Forest Service’s autonomous structure may hinder top management’s ability to get all Regional Fiscal Directors to participate. Regional Fiscal Directors are under the direct authority of their respective Regional Foresters, who report to the Chief of the Forest Service rather than to the Deputy Chief of Operations. The Deputy Chief of Operations, located in the national office, oversees implementation of FFIS for the Forest Service. We were told that the Fiscal Director from the one region—the same region that was about 5 months late certifying equipment inventories—was absent due to other priorities of the Regional Forester. 
Strong leadership in resolving the remaining obstacles and participation by all regions are required throughout the effort for the Forest Service to achieve and sustain financial accountability by the end of fiscal year 1999 and thereafter. While corrective measures are underway, few of the problems that the IG reported in the fiscal year 1995 audit report and that we analyzed in our initial report to you have been fully resolved. In addition, new hurdles such as FFIS' current inability to generate budgetary and financial reports and the need to satisfactorily resolve the Year 2000 issue must be addressed. It is not yet clear whether the Forest Service will be successful in its efforts to resolve these problems by the end of fiscal year 1999. Much work still remains to be done before this goal can be achieved. We received oral comments from the Special Assistant to the Chief, Forest Service; the Acting Deputy Chief of Operations, Forest Service; the Acting Director of Financial Management, Forest Service; the Acting Chief Financial Officer, USDA, and his staff; and staff from the IG's office. The following issues were raised during our discussion. Forest Service and OCFO officials stated that, as with any major system implementation, they anticipated problems with implementing FFIS in the pilot units and that they are moving to deal with the problems identified. Forest Service officials did not agree with our assessment of the agency's autonomous structure and how that might hinder top management's ability to ensure that all Regional Fiscal Directors participate in the financial management improvement effort. The Acting Deputy Chief of Operations said that the current structure does not prevent him from achieving his financial management goals because he works very closely with all the Regional Foresters on these issues.
Also, he stated that discussions have been held with the Regional Forester and Fiscal Director of the region that has not fully participated in the financial improvement efforts. He added that this region will fully participate from now on. Forest Service and OCFO officials believed that more emphasis on the progress they have made in correcting the identified financial management deficiencies should have been included in the report. Regarding the first issue, the number and nature of problems encountered during the FFIS pilot indicate that additional testing was needed. Such testing should have identified many of these problems, which could have been resolved before FFIS was piloted. Second, we believe that for this effort to be successful the Deputy Chief for Operations must take whatever action is necessary to ensure that Regional Fiscal Directors focus their priorities on correcting the identified financial management deficiencies. Finally, we agree that some progress has been made in correcting financial management deficiencies and revised certain sections of the report to better reflect this. These officials also provided clarifying comments that we incorporated into our report as appropriate. We are sending copies of this report to the Ranking Minority Member of your Committee; the Secretary of Agriculture; the Chief of the Forest Service; the Special Assistant to the Chief; USDA’s Acting Chief Financial Officer; the Acting Deputy Chief of Operations; the Acting Director of Financial Management; the Director of the Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. We will continue to monitor the Forest Service’s effort and report to you. If you have any questions about this report, please call me at (202) 512-8341 or McCoy Williams, Assistant Director, at (202) 512-6906. Major contributors to this report are listed in appendix I. 
Anita Lenoir, Auditor-in-Charge
Maria Rodriguez, Auditor

Pursuant to a congressional request, GAO reviewed the Forest Service's efforts to correct the financial problems identified in the Department of Agriculture (USDA) Inspector General's (IG) audit report on its fiscal year (FY) 1995 financial statements, focusing on the Forest Service's: (1) implementation of a new financial accounting system; (2) correction of certain accounting deficiencies; (3) resolution of key staffing and financial management organizational issues; and (4) commitment to achieving financial accountability.
GAO noted that: (1) the Forest Service has taken some positive steps to address the accounting deficiencies cited in the IG's FY 1995 audit report; (2) however, serious problems have been encountered in the initial implementation of the new financial accounting system; (3) while the Office of the Chief Financial Officer (OCFO) and the Forest Service piloted the Foundation Financial Information System (FFIS) in three units as scheduled on October 1, 1997, problems with FFIS processing data and transferring data between FFIS and other feeder systems have hampered the implementation efforts; (4) also, the pilot units have not been able to use FFIS to produce certain critical budgetary and accounting reports that track the Forest Service's obligations, assets, liabilities, revenues, and costs; (5) these problems occurred because: (a) while most individual components of the system were tested, a complete integrated test was not accomplished prior to implementation; (b) the FFIS reporting mechanism, which was not fully tested prior to implementation, was not functioning properly; (c) certain report specifications and calculations were incorrect; and (d) budget balances had not yet been brought forward from the old accounting system, which is no longer functional for the pilot units; (6) failure to correct these problems will jeopardize successful implementation of FFIS in the remaining Forest Service units; (7) the Forest Service's ability to produce reliable financial reports hinges on successful operation of the new system; (8) the version of FFIS purchased by the USDA OCFO in December 1994 is not year 2000 compliant; (9) although the Forest Service has corrected some of the accounting deficiencies cited in the IG's 1995 audit report, it continues to have certain accounting problems, in addition to those related to the FFIS system, that will hamper its ability to produce reliable financial information and could expose the agency to mismanagement and misuse of its assets; (10)
the Forest Service still lacks supporting records to substantiate, at a detailed level, amounts the agency either owes or is owed by others; (11) the Forest Service has not yet completed an evaluation of its financial management structure and workload requirements at all levels; (12) the Forest Service's top management has taken some steps to correct the financial problems reported by the IG in the FY 1995 audit report; and (13) however, the Forest Service's autonomous organizational structure may hinder top management from making needed improvements by FY 1999.
Section 501(c) of the I.R.C. grants an exemption from federal income taxes to organizations that meet certain requirements. Exempt organization data provided by IRS indicated that nearly 1.8 million organizations in various classifications are currently recognized as being tax exempt. Charitable organizations (I.R.C. § 501(c)(3)) constitute the largest classification, accounting for over 60 percent of all exempt organizations as of September 30, 2006. Other classifications of exempt organizations include civic and business leagues, labor organizations, recreational clubs, domestic fraternal societies, and credit unions. Differences between the various classifications include whether donations to the exempt organization are tax deductible and whether the exempt organization has to submit an application to IRS for specific recognition of its tax exempt status. Specifically, donations to certain exempt organizations, such as charitable and religious organizations, certain veteran's organizations, and certain cemetery companies, are deductible on the donor's individual tax return. Donations to other organizations not specifically recognized as such are not deductible. Organizations that are qualified to receive deductible donations, with the exception of churches, are required to apply to IRS and receive a formal determination of their exempt status. Generally, each exempt organization is required to file an annual informational return that provides IRS with information about the organization and its operations, officers and directors, and whether it is required to obtain specific IRS recognition of its exempt status. An exempt organization's annual information return (Form 990) also provides the public with the primary or sole source of information about the organization. The determination of exempt status and monitoring of exempt organizations is the responsibility of the Tax Exempt and Government Entities Division (TE/GE) of IRS.
The division’s responsibilities include accepting applications for and determining whether organizations qualify as exempt under the I.R.C., monitoring exempt organizations for continued compliance with the I.R.C., and when appropriate, revoking the exempt status of an organization that no longer meets requirements for exemption. Like all other employers, exempt organizations with employees are required to pay payroll taxes that they withhold from employees’ wages “in trust” for the federal government, as well as other applicable federal taxes. Payroll taxes withheld from employees consist of income taxes; Old Age, Survivors, and Disability Insurance (OASDI), commonly referred to as Social Security; and Medicare. OASDI is taxed at 6.2 percent on the first $94,200 of an employee’s salary, and Medicare is taxed at 1.45 percent with no income cap. The employer is also taxed, at the same rate, for OASDI and Medicare on employee wages. To the extent that payroll taxes are withheld and not forwarded to IRS, individuals within the business (e.g., exempt organization officials) may be held personally liable for the withheld amounts not forwarded, and they can be assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). Willful failure to remit payroll taxes is a felony under U.S. law punishable by a fine, imprisonment, or both, and the failure to properly segregate payroll taxes can be a criminal misdemeanor offense. Within TE/GE, the Exempt Organization (EO) Examinations Office is charged with promoting compliance with the I.R.C. The EO Examinations Office’s activities include analyzing the operational and financial activities of exempt organizations and developing other processes to identify areas of noncompliance, developing corrective strategies, and assisting other exempt organization functions in implementing these strategies. 
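The withholding rates described above reduce to a simple calculation. The following Python sketch uses the 2006 figures cited in the text; the constant and function names are our own, not from IRS guidance:

```python
# Illustrative sketch of the 2006 payroll tax rates cited above.
OASDI_RATE = 0.062        # Social Security: 6.2 percent, employee share
OASDI_WAGE_BASE = 94_200  # 2006 cap on wages subject to OASDI
MEDICARE_RATE = 0.0145    # Medicare: 1.45 percent, no wage cap

def employee_withholding(annual_wages):
    """Payroll taxes withheld from the employee "in trust" (excludes income tax)."""
    oasdi = OASDI_RATE * min(annual_wages, OASDI_WAGE_BASE)
    medicare = MEDICARE_RATE * annual_wages
    return round(oasdi + medicare, 2)

def employer_match(annual_wages):
    """The employer owes the same OASDI and Medicare amounts on these wages."""
    return employee_withholding(annual_wages)

if __name__ == "__main__":
    print(employee_withholding(50_000))   # 3825.0 ($3,100 OASDI + $725 Medicare)
    print(employee_withholding(120_000))  # OASDI applies only to the first $94,200
```

Because the OASDI cap binds at $94,200, an employee earning $120,000 pays OASDI on only the first $94,200 but Medicare on the full amount; both the withheld employee share and the matching employer share must be forwarded to IRS.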
In the process of performing the analysis, the EO Examinations Office may assess exempt organizations' payroll or other taxes. If the EO Examinations Office assesses taxes and the taxpayer does not make payment, the matter is referred to IRS's Small Business/Self-Employed (SB/SE) Collections Office. The SB/SE Collections Office becomes responsible for collecting the delinquent debt and may use means such as federal tax liens, levies, and seizures, and may assess a TFRP against an organization's officials. A federal grant is an award of financial assistance from a federal agency to an organization to carry out an agreed-upon public purpose. As such, federal grants are not used for the direct acquisition of goods or services for the federal government. Based on our analysis of fiscal year 2004 and 2005 data from FAADS, federal agencies collectively awarded grants of approximately $300 billion annually. Further analysis of the FAADS data indicates that approximately 80 percent of all federal grants are pass-through grants, that is, they are federal grants provided to the state and local governments, which, in turn, disburse the grants to the ultimate recipients. Consequently, only about 20 percent of grants are provided directly from the federal government to the organization that ultimately spends the money. Grant applicants that apply directly to the federal government are required to complete Standard Form (SF) 424. The SF 424 requires grant applicants to certify whether they are delinquent on any federal debt, including federal tax debt. As of September 2006, nearly 55,000 exempt organizations had nearly $1 billion in unpaid payroll and other federal taxes. The amount of taxes owed by exempt organizations ranged from $101 to $16 million, and the number of delinquent tax periods ranged from a single period to more than 80 tax periods.
However, the dollar amount of federal taxes owed by exempt organizations is understated because some organizations underreport their tax liability or fail to file returns altogether. Further, we excluded certain classifications of exempt organizations, tax debts for current periods, and disputed tax debts. As shown in figure 1, about 71 percent of the nearly $1 billion in unpaid federal taxes consisted of payroll taxes and related penalties and interest. About 19 percent, or over $180 million, related to annual reporting penalties. IRS imposes reporting penalties on entities that fail to file annual returns, file them late, or file inaccurate returns. The remaining 10 percent of the nearly $1 billion in delinquent taxes consisted of unrelated business income, excise, and other types of taxes. A significant amount of the unpaid federal taxes owed by exempt organizations has been outstanding for several years. As reflected in figure 2, while the majority of the nearly $1 billion in unpaid federal taxes was from tax periods 2001 through 2005, over a quarter of the unpaid taxes are for tax periods prior to 2001. Our previous work has shown that as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is, in part, because of the continued accrual of interest and penalties on the outstanding tax debt. Similarly, tax problems such as the tax gap are aggravated over time if not addressed early on. Our analysis of IRS data found that nearly 1,500 of the almost 55,000 delinquent exempt organizations owed in total over $600 million of the nearly $1 billion in unpaid federal taxes of exempt organizations we identified. All of these nearly 1,500 exempt organizations owed over $100,000 each, with some owing more than $10 million. Another 8,400 owed from $10,000 to $100,000 each.
Although the largest group—nearly 45,000—owed less than $10,000 in delinquent taxes, the majority of the debt in this group of exempt organizations is related to payroll taxes withheld from employees and not remitted to the federal government and annual reporting penalties. Further, many exempt organizations in this group repeatedly failed to remit taxes in multiple tax periods. Although the nearly $1 billion in unpaid federal taxes we identified that were owed by exempt organizations as of September 30, 2006, is a significant amount, it understates the full extent of unpaid taxes. This amount does not include amounts due IRS from exempt organizations that did not file payroll taxes (nonfilers) or underreported payroll tax liability (underreporters). Also, we did not include exempt organization tax debt from 2006 tax periods, tax debt for entities owing $100 or less, or tax debt for certain entities listed in IRS’s database of exempt organizations. Limiting our ability to more fully estimate the extent of exempt organizations with unpaid federal taxes is the fact that IRS’s tax database reflects only the amount of unpaid taxes reported by the exempt organization on a tax return or assessed by IRS through various enforcement programs. IRS’s tax database does not reflect amounts owed by exempt organizations that have not filed tax returns and for which IRS has not assessed tax amounts due. Additionally, our analysis did not account for exempt organizations that underreported payroll taxes and had not been identified by IRS. As reported previously and as indicated in our case study investigations, some exempt organizations underreported payroll taxes or failed to file returns. IRS estimates that underreporting accounts for more than 80 percent of the gross tax gap. We also took a number of steps in determining the amount of tax debt owed by exempt organizations to avoid overestimation. 
For example, some recently assessed tax debts that appear as unpaid taxes through a matching of IRS unpaid tax and exempt organization records may involve matters that are routinely resolved between the exempt organization and IRS, with the taxes paid, abated, or both within a short period. We eliminated these types of debt by including only unpaid federal taxes for tax periods prior to calendar year 2006. Further, we did not include exempt organizations with tax debt of $100 or less because these small debts likely do not represent abusive behavior. We also eliminated all tax debt IRS identified as not agreed to by the exempt organization. Further, the amount of exempt organization tax debt excludes amounts owed by exempt organizations for which the statutory collection period expired. Generally, there is a 10-year statutory collection period beyond which IRS is prohibited from attempting to collect tax debt. Consequently, if exempt organizations owe federal taxes beyond the 10-year statutory collection period, the older tax debt may have been removed from IRS's records. We were unable to determine the amount of tax debt that had been removed. For all 25 cases involving exempt organizations with delinquent tax debts that we audited and investigated, we found abusive activity, potentially criminal activity, or both related to the federal tax system. These cases reiterate the need for IRS to improve its enforcement of tax laws as previously noted by GAO. The amount of unpaid taxes associated with these cases ranged from over $300,000 to nearly $30 million. All 25 exempt organizations had unpaid payroll taxes, some dating as far back as the late 1980s. In one instance, an exempt organization had not remitted payroll taxes to IRS for 14 years, thereby accumulating unpaid federal taxes of nearly $8 million at the time of our audit.
Rather than fulfill their role as “trustees” of this money and forward it to IRS as required by law, the officials responsible for these exempt organizations diverted the money to fund the organizations’ operations, which sometimes included millions of dollars in management fees to related entities, or for personal benefits, such as their own salaries. At the time of our audit, IRS had completed TFRP assessments on officials of 15 of the 25 exempt organizations. However, as we have previously reported, collections of TFRP assessments are generally minimal. Further, available data show that IRS has taken some collection action and placed liens on the assets of 23 of the 25 entities or their officials. However, IRS initiated actions to seize assets of only 1 of the 25 exempt organizations in our case studies. Our investigations revealed that despite owing substantial amounts of federal taxes to IRS, top officials of some exempt organizations received substantial salaries—often in the six-figure range and in one case in excess of $1 million—and had substantial personal assets, including multimillion-dollar homes and luxury cars. Our investigations found that 3 of these exempt organizations are related to other exempt organizations, for-profit entities, or both that are also tax delinquent. The related entities were primarily discovered because of common top officials. Combined, the 3 exempt organizations and their related entities owed nearly $40 million in delinquent taxes. Further, 4 of the 25 case study organizations we investigated had key officials and other employees who were convicted of criminal activities, including tax evasion and operating an illegal gambling establishment, at the same time the organizations continued to benefit from a tax exempt status. One entity was fined by a state for employing convicted felons in positions of trust. Table 1 highlights 10 of the 25 organizations with unpaid taxes that we investigated.
Appendix II provides a summary of the other 15 cases we examined. We are referring all 25 cases we examined to IRS for further collection activity and criminal investigation, if warranted. The following provide illustrative detailed information on several of these cases: Case 1: This exempt organization is related to several for-profit entities that provide health care and other services, all of which have tax debts. The related entities appear to be set up under complex forms of ownership designed to shield income and assets, such as limited liability companies and offshore entities. Combined, these entities owe nearly $30 million in federal taxes, of which more than $10 million is attributable to the exempt organization. The exempt organization in particular had not paid federal taxes since the late 1990s, despite receiving millions in federal payments. At the same time, the exempt organization paid millions in management fees to a contractor that, according to available public records, is affiliated with the exempt organization. IRS has not placed a TFRP on any individual with respect to this exempt organization’s tax debt. Case 2: This industry association owes more than $6 million in tax debt dating back to the late 1990s. A top official of the association admitted that he intentionally failed to remit payroll taxes in order to fund operations, which in a recent year included providing more than 10 officials with six-figure salaries, with one receiving a salary in excess of $500,000. At the same time, another top officer owned a multimillion-dollar luxury estate and purchased luxury vehicles. IRS has assessed a multimillion-dollar TFRP against an officer of the organization. Case 3: This health care organization owes more than $15 million in tax debt dating back to the early 2000s.
While not paying its payroll taxes, the organization paid several employees large amounts of annual compensation, including a total compensation package for a top official in excess of $1 million annually, and several other employees with combined compensation of over $1 million. The top official also made several hundred thousand dollars in cash transactions at banks and casinos while the organization owed millions in unpaid taxes. Despite holding the organization’s top office and earning seven-figure compensation, this official told IRS that he was not responsible for the exempt organization’s unpaid taxes. Case 5: This children’s services organization owes more than $500,000 primarily related to payroll taxes dating back to the late 1980s. The top official of this exempt organization was convicted of attempting to bribe an IRS employee. Other organization employees have criminal records, including records for violent crimes. Further, organization officials allegedly requested that some payments to it be made in cash. Case 6: This community services organization owes almost $3 million in tax debt dating from the late 1990s. The organization was fined for employing convicted felons in positions responsible for public safety. In addition, an organization employee was engaged in criminal activity at one of the organization’s job sites. To date, IRS has not assessed a TFRP against organization officials. The organization has been replaced by a related entity that is operating out of the same facility. Many of the contracts awarded to the exempt organization have been transferred to this entity. Despite continuing to abuse the federal tax system, all of the 25 case study organizations continued to retain their tax exempt status. Existing federal statutes do not authorize IRS to revoke exempt status based on an organization’s tax delinquency. However, the I.R.C. 
provides IRS with the authority to approve and monitor exempt organizations and also stipulates the circumstances under which IRS can revoke an organization's tax exempt status. Specifically, IRS can revoke exempt status when it determines the organization has ceased to operate in a manner consistent with the purpose for which it was granted tax exempt status. For example, if an organization was granted tax exempt status because it was established to provide employment or other services to underprivileged individuals, and it ceases to do so, IRS can revoke the organization's tax exempt status. In addition, if an organization engages in excess benefit behavior, IRS has the authority to assess a tax against the individual who received the benefit. The I.R.C. also provides IRS authority to revoke an organization's tax exempt status if it repeatedly engages in excess benefit behavior, including excess compensation. However, the I.R.C. does not provide IRS the authority to revoke tax exempt status based on failure to pay taxes. According to IRS officials, organizations whose exempt status is revoked may have delinquent debts, but that was not the criterion for revocation. IRS officials also informed us that revocation is an action of last resort, arrived at after evaluation of many factors and after imposing intermediate sanctions to try to correct the problem. Similarly, in cases of excess compensation, IRS generally tries to impose a tax on the individual who received the excess benefits rather than revoke the exempt status of the organization.

Based on analysis of limited grant payment data, we found that exempt organizations with unpaid federal taxes received over $14 billion in direct federal grant payments from three federal agency disbursement systems in fiscal years 2005 and 2006. Grant applicants are required to self-certify on the grant application whether they are delinquent on any federal debt, including federal taxes.
Our audit of six case study organizations with delinquent taxes that also received federal grants found that five of the six appear to have violated the False Statements Act because they did not declare their delinquent federal taxes on their grant applications. Based on our analysis, we determined that of the nearly 55,000 exempt organizations with federal tax debt, more than 1,200 received over $14 billion in federal grants from HHS, Education, the Department of Energy, the National Aeronautics and Space Administration, and other federal agencies in fiscal years 2005 and 2006. These more than 1,200 exempt organizations owed over $70 million in tax debt yet received substantial amounts in federal grants. However, our estimate of over $14 billion in federal grants received by exempt organizations with federal tax debt is likely understated. First, our analysis was limited to data from three federal grant payment systems and therefore did not include all federal grant disbursements. Second, our analysis included only data on direct recipients of federal grant payments, that is, payments provided directly by the federal government to the end user. Based on our analysis of data from FAADS, we estimated that these grants account for only about 20 percent of the total grants awarded by the federal government. The remaining 80 percent of federal grants are provided to states and local governments, which, in turn, disburse them to end users.

Organizations applying for federal grants complete SF 424s to provide granting agencies with entity information, such as name, employer identification number, address, and a descriptive title of the project for which the grant will be used. The SF 424 also requires that the grant applicant provide information as to whether the applicant has any delinquent federal debts. The instructions that accompany the SF 424 define federal debt to include taxes owed.
The applicant is required to certify that the information provided on the SF 424 is true and correct. We examined information provided on the SF 424 for six of our case study tax exempt organizations that received grants, all of which had substantial tax debts outstanding. We found that five of the six failed to disclose their federal tax debts on the SF 424s filed with the granting agencies. The six entities applied for and received over $13 million in total grant payments in fiscal years 2005 and 2006. In a recent 3-year time span, one of the exempt organizations we audited applied for multiple grants to provide community services. Even though the entity had an outstanding balance of unpaid federal taxes, it did not disclose its tax liability on the SF 424s. The organization subsequently received several million dollars in grant payments during 2 recent fiscal years. Figure 3 provides excerpts of an SF 424 for this organization where the applicant appears to have violated the False Statements Act by not disclosing its delinquent tax debt. Appendix IV contains a copy of the entire SF 424.

We found that while granting agencies can ask prospective grantees for consent to verify federal tax debt information with IRS, granting agencies do so only in the few cases where the grant applicant discloses having federal debts. Agencies do not confirm with IRS the accuracy of applicant information related to federal tax debts because of strict taxpayer privacy laws. Officials at three granting agencies informed us that, procedurally, if tax debt is declared on the SF 424, the agencies would request further information to determine if any action needs to be taken. Without accurate debt information, granting agencies are limited in their ability to fully evaluate whether the grantee is a responsible party, whether the grantee should receive the grant, and whether additional action needs to be taken.
The majority of exempt organizations appear to pay their federal taxes. However, our work has shown that tens of thousands of exempt organizations and their officers have taken advantage of the opportunity to avoid paying their federal taxes, in part because IRS does not have the authority to revoke exempt status for failure to pay taxes. In many cases, officers of these delinquent organizations are responsible for diversion of payroll tax money, a felony offense, to pay their large salaries and accumulate substantial personal wealth. It is likely that many of these exempt organizations have provided significant and positive services to those in need, but it is also important that they comply with federal tax law. We have referred all 25 of the cases we investigated to IRS for collection and criminal investigation.

We provided a draft of this report to the Commissioner of IRS for review and comment on April 6, 2007. Officials in IRS's TE/GE provided oral comments on the draft on April 24, 2007. The oral comments highlighted several planned actions to enhance exempt organizations' tax compliance efforts. The planned actions cited included analyzing discrepancies between payroll data reported to the Social Security Administration and data reported to IRS, and piloting a new modeling program to identify exempt organizations with a high risk of employment tax noncompliance. In its oral comments, IRS also agreed with the draft report's finding that IRS does not have authority to revoke an organization's exempt status for nonpayment of employment taxes, except under extraordinary circumstances which rarely occur. IRS's planned actions, if implemented effectively, should help IRS avoid additional payroll and other tax compliance issues by exempt organizations.
For IRS to ensure that tax exempt organizations comply with tax law, it will be important to use the full range of available enforcement tools and hold tax exempt organizations and associated key officials accountable for noncompliance. As discussed in the body of this report, we identified a number of exempt organizations and their officials that were delinquent in paying significant dollar amounts in federal payroll and other taxes.

As agreed with your office, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of the Financial Management Service, the Commissioner of Internal Revenue, and interested congressional committees and members. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-9505 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Our objectives were to determine whether and, if so, to what extent (1) exempt organizations have unpaid federal taxes, including payroll taxes; (2) selected case study organizations and their executives are involved in abusive or potentially criminal activity; and (3) exempt organizations with unpaid federal taxes received direct grants from certain federal agencies. To determine whether and to what extent exempt organizations have unpaid payroll and other federal taxes, we first identified the population of exempt organizations to be included in our analysis. These organizations include those that either received a formal determination of their exempt status or met basic criteria to be considered exempt.
To perform this step, we obtained the exempt organization business master file from the Internal Revenue Service (IRS) as of September 30, 2006. This database contained information on over 2.5 million entities, each with a code indicating the most recent "exempt" status. In consultation with IRS, we identified nearly 1.8 million entities with status codes indicating that they are currently tax exempt. To identify exempt organizations with unpaid federal taxes, we obtained IRS's September 30, 2006, unpaid assessments file and matched it to the 1.8 million entities we identified as currently tax exempt using taxpayer identification numbers (TIN). To avoid overstating the amount owed by exempt organizations with unpaid federal tax debts and to capture only significant tax debt, we excluded tax debts meeting the following criteria:

- tax debts IRS classified as compliance assessments or memo accounts for financial reporting,
- tax debts from calendar year 2006 tax periods, and
- exempt organizations with total unpaid taxes of $100 or less.

These criteria excluded tax debts that might be under dispute, duplicative, or invalid, as well as tax debts that were recently incurred. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or they could be invalid or duplicative of other taxes already reported. We excluded tax debts from calendar year 2006 tax periods to eliminate tax debt that may involve matters that are routinely resolved between the taxpayers and IRS, with the taxes paid or abated within a short period. We also excluded tax debts of $100 or less because they are insignificant for the purpose of determining the extent of taxes owed by exempt organizations.
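The match-and-exclude methodology described above can be sketched as a simple data-filtering routine. This is a hypothetical illustration only: the field names (`tin`, `category`, `tax_period_year`, `amount`) and category labels are assumptions for the sketch, not IRS's actual file layout.

```python
# Hypothetical sketch of the report's matching methodology: join IRS unpaid
# assessments to currently exempt organizations on TIN, then apply the
# three exclusion criteria described in the text. Field names are assumed.
from collections import defaultdict

def match_tax_debts(exempt_tins, assessments):
    """Return {tin: total_debt} for exempt organizations with
    significant, non-excluded unpaid federal taxes."""
    totals = defaultdict(float)
    for a in assessments:
        # Exclusion 1: skip compliance assessments and memo accounts,
        # which may be disputed, duplicative, or invalid.
        if a["category"] in ("compliance_assessment", "memo_account"):
            continue
        # Exclusion 2: skip calendar year 2006 tax periods, which often
        # involve matters routinely resolved shortly after assessment.
        if a["tax_period_year"] == 2006:
            continue
        # Match step: keep only debts owed by currently exempt entities.
        if a["tin"] in exempt_tins:
            totals[a["tin"]] += a["amount"]
    # Exclusion 3: drop organizations owing $100 or less as insignificant.
    return {tin: amt for tin, amt in totals.items() if amt > 100}

debts = match_tax_debts(
    exempt_tins={"11-001", "11-002", "11-003"},
    assessments=[
        {"tin": "11-001", "category": "tax", "tax_period_year": 2004, "amount": 50_000.0},
        {"tin": "11-001", "category": "memo_account", "tax_period_year": 2004, "amount": 9_999.0},
        {"tin": "11-002", "category": "tax", "tax_period_year": 2006, "amount": 2_000.0},
        {"tin": "11-003", "category": "tax", "tax_period_year": 2005, "amount": 80.0},
        {"tin": "99-999", "category": "tax", "tax_period_year": 2005, "amount": 1_000.0},
    ],
)
print(debts)  # only the first organization's 2004 tax debt survives the filters
```

In this toy run, the memo account, the 2006 tax period, the $80 balance, and the non-exempt TIN are all filtered out, leaving a single organization with a $50,000 debt, mirroring how each exclusion narrows the population before totals are reported.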
To prepare case studies of selected exempt organizations and their directors or senior officers for abuse of the federal tax system, we selected 25 exempt organizations using a nonrepresentative selection approach based on data-mining results, our judgment, and a number of other criteria, including the amount of unpaid taxes, the number of unpaid tax periods, the amount of payments reported by IRS, and indications that key officials might be involved in multiple entities with tax debts. We obtained copies of automated tax transcripts and other tax records (for example, revenue officers' notes) from IRS as of September 30, 2006, and reviewed these records, excluding exempt organizations that had recently paid off their unpaid tax balances and considering other factors, before reducing the selection to 25 case studies. For the selected 25 cases, we performed searches of criminal, financial, and public records. Our investigators contacted several of the exempt organizations and performed interviews.

To determine whether and to what extent exempt organizations with tax debt received federal grants, we obtained and analyzed federal grant payment databases from the Department of Education's (Education) Grant Administration and Payment System (GAPS), the Department of the Treasury Financial Management Service's (FMS) Automated Standard Application Payment system (ASAP), and the Department of Health and Human Services' (HHS) Payment Management System (PMS) for fiscal years 2005 and 2006. These three agencies process grants on behalf of many other federal agencies and, in fiscal years 2005 and 2006, processed the majority of direct and pass-through grants, excluding Medicare and Medicaid. We then matched the grant payment data to the exempt organizations with federal tax debt using the TINs.
Of the 25 case studies of exempt organizations with unpaid federal taxes, 6 submitted grant application forms related to grant payments made during fiscal years 2005 and 2006. We requested and reviewed the grant application forms for all 6 entities. We also interviewed officials from HHS, Education, and the Department of Agriculture on whether tax debts are considered in their decisions on whether to provide grants to particular grant applicants. We conducted our audit work from August 2006 through March 2007 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. For IRS unpaid assessments data, we relied on the work we performed during our annual audits of IRS’s financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some of the fields in IRS’s tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address our report’s objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS’s master file to IRS’s general ledger, identified no material differences. To help ensure reliability of the exempt organization data, we interviewed IRS officials concerning the reliability of the data provided to us. In addition, we performed electronic testing of specific data elements in the database that we used to perform our work. For the GAPS, ASAP, and PMS data, we interviewed officials from Education, FMS, and HHS responsible for the databases. In addition, we performed electronic testing of specific data elements that we used to perform our work. 
Based on our discussions with agency officials, our review of agency documents, and our own testing, we concluded that the data elements used for this report were sufficiently reliable for our purposes. We briefed IRS officials on March 27, 2007, on the details of our audit, including our findings and their implications. On April 6, 2007, we requested comments on a draft of this report from the Commissioner of IRS. We received oral comments from the Tax Exempt and Government Entities Division of IRS on April 24, 2007, and have summarized these comments in the Agency Comments and Our Evaluation section of this report.

Table 1 provides data on 10 detailed case studies. Table 2 provides details of the remaining 15 exempt organizations we selected as case studies. As with the 10 cases discussed in the body of this report, we also found abuse, potential criminal activity, or both related to the federal tax system during our audit and investigations of these 15 case studies. The case studies primarily involved exempt organizations with unpaid payroll taxes, one for as many as 14 years. Section 501(c) of the Internal Revenue Code (I.R.C.) lists several types of organizations that qualify for exemption from federal income taxes. The types of exempt organizations are summarized in table 3.

[Reproduction of Standard Form 424, Application for Federal Assistance, including entity information, application type, areas affected, estimated funding, and state review under Executive Order 12372. Item 17 asks, "Is the applicant delinquent on any federal debt?" and instructs the applicant to attach an explanation if the answer is "Yes." Item 18 requires the authorized representative to certify that, to the best of his or her knowledge and belief, all data in the application are true and correct, that the document has been duly authorized by the governing body of the applicant, and that the applicant will comply with the attached assurances if the assistance is awarded.]

In addition to the contact named above, the following individuals made major contributions to this report: Tuyet-Quan Thai, Assistant Director; Gary Bianchi; Ray Bush; Shafee Carnegie; William Cordrey; Jessica Gray; Ken Hill; Aaron Holling; Leslie Jones; Shirley Jones; Jason Kelly; John Kelly; Rick Kusman; Barbara Lewis; Andrew McIntosh; Aaron Piazza; John Ryan; Barry Shillito; and Michael Zola.

As of September 2006, nearly 1.8 million entities were recognized as tax exempt organizations by the Internal Revenue Service (IRS). As such, they do not have to pay federal income taxes. Exempt organizations are still required to remit amounts withheld from employees' wages for federal income tax, Social Security, and Medicare, as well as other taxes. Previous GAO work identified numerous government contractors, Medicare providers, and charities participating in the Combined Federal Campaign (CFC) with billions in unpaid federal taxes. To follow up on the CFC work, the subcommittee requested that GAO determine whether and to what extent (1) exempt organizations have unpaid federal taxes, including payroll taxes; (2) selected case study organizations and their executives are involved in abusive or potentially criminal activity; and (3) exempt organizations with unpaid federal taxes received direct grants from certain federal agencies. GAO reviewed unpaid taxes and exempt organization data from IRS and selected 25 case studies for audit and investigation.
GAO also reviewed data from 3 major grant disbursement systems. GAO referred all 25 cases to IRS for collection activity and criminal investigation, if warranted. In its oral comments on a draft of this report, IRS noted several actions it is taking to enhance exempt organizations' tax compliance. Nearly 55,000 exempt organizations had almost $1 billion in unpaid federal taxes as of September 30, 2006. About 1,500 of these entities each had over $100,000 in federal tax debts, with some owing tens of millions of dollars. The majority of this debt represented payroll taxes and associated penalties and interest dating as far back as the early 1980s. Willful failure to remit payroll taxes is a felony under U.S. tax law. The $1 billion figure is understated because some exempt organizations have understated tax liabilities or did not file tax returns. GAO selected 25 exempt organizations for investigation based primarily on amount of tax debt and number of periods delinquent. For the 25 cases investigated, GAO found abusive and potentially criminal activity, including repeated failure to remit payroll taxes withheld from employees. Officials diverted the money to fund their operations, including paying themselves large salaries ranging from hundreds of thousands of dollars to over $1 million. Officials in many of the 25 case study organizations accumulated substantial assets, such as million-dollar homes and luxury vehicles. Key officials and employees at 4 exempt organizations were engaged in criminal activities, including attempted bribery of an IRS official and illegal gambling. Despite repeatedly abusing the federal tax system, these entities continued to retain their exempt status. IRS does not have the authority to revoke an organization's exempt status because of unpaid federal taxes. Over 1,200 of these exempt organizations with unpaid federal taxes received over $14 billion in federal grants in fiscal years 2005 and 2006.
Six of the 25 exempt organizations GAO investigated received grants; of those 6 entities, 5 appear to have violated the False Statements Act by not disclosing their tax debt as required. For example, one entity that received millions of dollars in grants did not disclose unpaid taxes on multiple applications. Taxpayer privacy statutes prevent granting agencies from verifying an applicant's tax status with IRS unless the taxpayer authorizes such disclosure.
Since HUD began its reform efforts, we have developed a body of knowledge on the management functions that are key to becoming a high-performing agency and the elements that are necessary for an organization to sustain management reform. Figure 1 shows the major management functions that are key to becoming a high-performing organization, based on the work we have done in identifying and defining performance and accountability challenges. Figure 2 shows the six elements we have identified as crucial to building and sustaining successful management reform initiatives at federal agencies.

A successful reform effort requires leaders who articulate a clearly defined vision for reform and communicate this vision through a department's strategic plans, goals, and desired outcomes. It also requires commitment from career, politically appointed, and congressional leadership, as well as rank-and-file employees. Thoughtful human capital policies are also needed to produce an empowered workforce with the skills and training needed to meet an agency's challenges. Further, sustaining the progress made by the reform effort requires well-planned information technology strategies that give key decision-makers the automated support and tools they need to carry out the agency's mission. And where an agency relies on private contractors and other external partners to carry out its mission, as HUD does in many of its activities, sustaining the reform effort requires that mechanisms be in place to ensure that the agency and its partners have the same understanding of the desired goals and objectives. Together, these two models provide criteria useful for evaluating the sustainability of management reform initiatives and an agency's progress toward becoming a high-performing organization.
[Figures 1 and 2 list, among other items: strategic planning; budget formulation and execution; organizational alignment and control; performance measurement; human capital strategies; financial management; employee involvement; and thoughtful and rigorous planning for human capital and information systems.]

In the past, we have identified management functions such as financial management and information technology as "high-risk" because of their greater vulnerabilities to waste, fraud, abuse, and mismanagement. In January 2001, we identified strategic human capital management as a new governmentwide high-risk area. We reported that federal programs involving billions of dollars rely for their success on the performance of the federal government's people, its human capital. However, after a decade of government downsizing and curtailed investments in human capital, it is becoming increasingly clear that current human capital strategies are not appropriately constituted to meet current and emerging needs of government and its citizens in the most effective, efficient, and economical manner possible.

For many years, HUD has been the subject of sustained criticism for management and oversight weaknesses that have made it vulnerable to fraud, waste, abuse, and mismanagement. In 1994, we designated all of HUD's programs as high-risk because of four long-standing management deficiencies: weak internal controls; inadequate information and financial management systems; an ineffective organizational structure, including a fundamental lack of management accountability and responsibility; and an insufficient mix of staff with the proper skills. At one point in the mid-1990s, some suggested that the Congress should consider dismantling HUD if it were unable to operate with a clear legislative mandate and in an effective, accountable manner. HUD had undertaken other reorganization and downsizing efforts in 1993 and 1994.
However, HUD's 2020 Management Reform Plan was intended, among other things, to finally resolve its managerial and operational deficiencies and to ensure HUD's relevance and effectiveness into the 21st century. The plan was a complex and wide-ranging effort to change the negative perception of the agency by updating its mission and focusing its energy and resources on eliminating fraud, waste, and abuse in its programs. The reform plan presented two interrelated missions for HUD: (1) empower people and communities to improve themselves and succeed in the modern economy and (2) restore public trust by achieving and demonstrating competence. With these two missions, HUD's goals were to become more collaborative with its partners, move from process-oriented activities to an emphasis on performance and product delivery, and develop a culture within HUD of zero tolerance for waste, fraud, and abuse. The plan also indicated that HUD would demand accountability from its employees, grantees, and private- and public-sector customers. To achieve its new missions, HUD developed six specific reform efforts to substantially overhaul the way it did business. These six reforms, which focused on numerous organization and program changes, are stated in the plan as follows:

- Reorganize by function rather than program "cylinders"; where needed, consolidate and/or privatize.
- Modernize and integrate HUD's outdated financial management systems with an efficient, state-of-the-art system.
- Create an enforcement authority with one objective: to restore public trust.
- Refocus and retrain HUD's workforce to carry out its revitalized mission.
- Establish new performance-based systems for HUD programs, operations, and employees.
- Replace HUD's top-down structure with a new customer-friendly structure.

(See app. I for a description of the six reforms and the specific organizational and program changes required by each reform. See app.
II for a summary of HUD's accomplishments and the work that remains to be done for the major activities associated with the six reforms.) In September 2000, we testified on HUD's progress in addressing its major management challenges as it tried to transform itself from a federal agency whose major programs were designated as "high-risk." In January 2001, we recognized that HUD's top management had given high priority to implementing the 2020 Management Reform Plan, that HUD's reorganization was substantially complete, and that the Department's efforts had resulted in some improvements in its operations. Considering HUD's progress toward improving its operations through the management reform plan, and consistent with our criteria for determining high-risk, we reduced the number of HUD programs deemed to be high-risk to two of its major program areas: single-family mortgage insurance and rental housing assistance.

HUD's reorganization and consolidation efforts have achieved some successes, but not all of the inefficiencies that these efforts were intended to address have been eradicated. As we have reported, the Department's reorganization stemming from the 2020 Management Reform Plan is substantially complete. HUD accomplished the restructuring in a little over 2 years, having announced the plan in June 1997 and established, staffed, and begun benefiting from the new entities by the end of fiscal year 1999. Most notably, HUD consolidated and streamlined some of its operations into new specialized centers, such as the Homeownership Centers (HOC), the Real Estate Assessment Center (REAC), and the Troubled Agency Recovery Centers (TARC). However, not all of the new centers are operating as envisioned because of staffing problems, delays in implementing supporting systems, and imbalanced workloads. Further, HUD has not been able to reduce the number of programs it manages. As a result, HUD is not achieving all of the efficiencies it had anticipated.
Of the actions undertaken as part of HUD's 2020 Management Reform Plan, the effort to consolidate and streamline some of its oversight and processing functions has perhaps been the most successful, since the centers are operational and achieving results. For example, specialized centers have assumed responsibility for HUD's single-family mortgage insurance program, physical and financial assessments, and enforcement activity. Specific accomplishments in streamlining HUD's operations include establishing:

- Four single-family HOCs to consolidate single-family housing mortgage insurance activities previously carried out in 81 field offices. The HOCs have reduced the average time for processing single-family mortgage insurance endorsements from 4 to 6 weeks to an average of 2 to 3 days.
- A REAC to consolidate physical assessments of assisted multifamily properties previously done by two program offices that were using different standards. In fiscal year 2000, REAC examined over 27,200 properties and reported that 83 percent of public housing developments and 85 percent of insured multifamily properties met HUD's general physical condition standards.
- A Section 8 Financial Management Center (FMC) to consolidate budgeting, financial, and payment functions for both project-based and tenant-based Section 8 contracts. The FMC currently provides financial management support for about 10,000 Section 8 contracts from the Office of Public and Indian Housing (PIH) and the Office of Housing (Housing).
- Two TARCs to assist failing public housing agencies in correcting major physical, financial, and management deficiencies. The TARCs currently have responsibility for about 50 troubled or poorly performing agencies.
- A Grants Management Center (GMC) to consolidate the processing, reviewing, and awarding of categorical and formula grants for PIH.
Not all of the new centers have assumed the responsibilities HUD envisioned, and as a result, problems remain that affect the operations of HUD's programs. Specifically, our work indicates that HUD has not yet resolved issues pertaining to the effective use of staff and the distribution of workload among centers and field offices, such that some offices may be understaffed and others overstaffed. Staffing imbalances still exist at the HOCs. According to the reform plan, HUD would consolidate its single-family operations and about 70 percent of its field staff into the HOCs. However, as of January 2001, about 44 percent of the single-family staff remained in 71 field offices because HUD subsequently decided not to force staff to relocate from the field offices. As a result, some of the HOCs are understaffed, while single-family staff located in some field offices are not utilized in an effective and productive manner. HUD's internal studies have also noted this problem: for example, a review conducted by HUD at one office reported that eight single-family staff located at that office were not fully utilized. In addition, HOC managers told us that they must ship case files to field offices for review, that they cannot assign large projects to offices with small numbers of staff, and that limiting field office staff to a single activity, such as answering telephone calls, can adversely affect staff morale. We reported that the effort required on the part of the center managers to delegate work in this manner hinders the centers' operations. Also, the two TARCs have only about 10 percent of the workload envisioned under the 2020 Management Reform Plan (about 575 troubled public housing agencies), largely because of delays in implementing a new assessment system for rating the performance of public housing agencies.
The Office of Inspector General (OIG) reported that as of September 1999, the TARCs had been assigned responsibility for only about 50 troubled agencies. The report said that existing TARC staffing levels could not be fully justified and that HUD does not always effectively identify housing authorities that should be designated as troubled and sent to the TARCs for processing. In an August 2001 review of the Memphis TARC, the OIG again reported that a similar situation existed and that the staff were still not fully utilized. The OIG reported that the TARCs were accepting oversight of nontroubled agencies while awaiting full implementation of the new assessment system, partly to keep the staff busy. Additionally, some HUD managers told us they believe that the field offices were more effective than the TARCs at working with troubled agencies because the TARC staff lack adequate training and experience. In January 2000, the Office of Troubled Agency Recovery, which oversees the TARCs, contracted with a consulting firm to standardize processes and operations at the TARCs to enable them to maximize the number of housing authorities each could manage. According to HUD, the contractor analyzed the maximum number of housing authorities that the TARCs could manage with maximum efficiency and developed a process to realize that goal. HUD has also revised the TARCs’ potential estimated workload to about 300 troubled public housing agencies. While staff in some areas have not been effectively utilized, others have seen their workload and responsibilities increase. For example, Multifamily Housing field office staff may not have experienced all of the workload reductions expected because they continue performing some functions pertaining to the project-based Section 8 contracts that were supposed to have moved to the FMC and to HUD’s new performance-based Section 8 contract administrators.
The FMC has experienced difficulties with converting the Multifamily Housing project-based Section 8 contracts to its information system, which was created for PIH’s tenant-based Section 8 program. HUD reported that as a result, as of March 2001, Multifamily Housing decided that the financial management responsibilities for about 16,000 of 20,000 Section 8 project-based contracts would not be done by the FMC as planned. Also, HUD elected to address Section 8 control weaknesses through the transfer of functions to contract administrators. During fiscal year 2000, HUD started contracting with Section 8 contract administrators who would be responsible for conducting management and occupancy reviews as well as performing financial management functions. However, about 6,500 of these contracts have not yet been transferred to contract administrators and are being managed by HUD field staff. Thus, the work remains in the field and has not reduced the field office workload as intended. HUD reports that some Section 8 contracts have not been transferred to the contract administrators because of complex issues associated with contract renewals, negotiations with property owners, or designation as troubled properties. The GMC was originally envisioned to be a fully self-sufficient office staffed with many outside contractors to help award categorical and formula grants for PIH. Because the center was to absorb the grants management workload from the field offices, the staffing level of field offices was decreased accordingly. However, HUD subsequently decided that some of the functions at the GMC were core government business functions that could not be assigned to contractors. The center does not have the capacity to handle all of these functions, so the contract management and oversight functions returned to field offices that no longer had the staff to handle the workload.
The results from REAC’s physical inspections of multifamily properties may not be effectively used at the field office level. Our recent review of HUD’s evaluation of the results of REAC inspections found that field offices frequently did not follow the Department’s procedures for ensuring that property owners correct all physical deficiencies. This contributed to HUD’s physical inspections database overstating the number of properties for which repairs had been completed. On the basis of site visits we performed, we estimated that for about half of the properties covered in our review, at least 25 percent of the deficiencies that REAC classified as “major” or “severe” were not repaired. Field offices did not always comply with inspection follow-up procedures because of insufficient guidance and because HUD headquarters allowed the field staff to use discretion in implementing agency guidance without ensuring the proper exercise of this discretion. HUD also did not have a system in place for verifying owners’ correction of physical deficiencies. In July 2000, we reported that questions remained about the reliability of REAC’s physical inspections and that REAC has gaps or weaknesses in some of its quality assurance procedures that substantially limited their effectiveness. For example, while REAC performed on-site reviews to assess the adequacy of physical inspections, it did not have procedures for ensuring that these reviews were performed systematically and that problems such as damaged flooring and exposed electrical wiring were resolved quickly and appropriately. Since our reports were issued, HUD has been taking steps to address the problems we identified. In addition to the problems associated with the new centers, HUD has been unable to consolidate or reduce the total number of programs (about 300) it manages. 
HUD planned that the reorganization and congressional actions would reduce the number of programs to about 70, thus limiting potential problems associated with staffing reductions. However, not all the legislative changes that HUD proposed to assist in this consolidation gained congressional approval. The Congress and HUD’s OIG have also raised concerns that HUD has continued to add its own programs and initiatives, some of which might not be related to its mission and further strain HUD’s staffing and resources. For example, the Department initiated a Gun Buy-Back Program during fiscal year 2000 to provide funds to housing authorities to purchase guns. It also started Teacher Next Door and Officer Next Door Programs that allow teachers and police officers to purchase foreclosed properties at reduced prices in certain neighborhoods. Consequently, HUD has not experienced all of the workload reductions expected from program consolidations. Increasing accountability in HUD’s programs was a primary goal of the 2020 Management Reform Plan, but the goal has not yet been fully met. HUD’s efforts to “restore public trust” by improving the accountability of its people and programs underlay the six major reforms. HUD’s numerous actions to improve accountability included developing a strategic planning process, enhancing its monitoring ability, improving information and financial management systems, improving contracting procedures, and creating new centralized entities, specifically an enforcement authority. Substantial challenges nevertheless remain to ensure that actions taken to date achieve the desired results. HUD faces challenges in improving its strategic planning process, improving monitoring, developing complete and reliable information and financial management systems, improving its contracting procedures and oversight, and ensuring that the new centers achieve the efficiencies intended.
HUD’s management reforms were a key component in HUD’s strategic plan, which helps to hold HUD, its staff, and partners accountable for results achieved by HUD’s programs. To comply with the Government Performance and Results Act of 1993 (GPRA), HUD issued its first strategic plan in September 1997, which included linkages to the 2020 Management Reform Plan for each strategic objective. Beginning with the fiscal year 2000 annual performance plan, HUD included the reform goal to “restore public trust” as one of its five strategic goals, with supporting performance goals and measures. In subsequent performance plans, reports, and the updated strategic plan, HUD revised this strategic goal to “ensure public trust.” HUD reported that this change reflected the evolution of its goals as HUD experienced results from the implementation of the 2020 management reforms. HUD developed a business and operating plan (BOP) process that established specific performance objectives for each program and field office and collected data measuring each office’s contribution to achieving the established goals. We reported that HUD has continued to improve the presentation of its annual performance plans and reports, including developing more quantifiable measures, improving the discussion of data limitations, and incorporating information on HUD’s human capital initiatives. The 2020 Management Reform Plan stated that there was a contradiction in having the same employees help grantees and customers access HUD’s programs and then monitor the activities of those grantees and clients. HUD therefore established a “community builder” position to perform community outreach functions and designated other program staff as “public trust officers” with responsibility to conduct monitoring and oversight. Both community builder and public trust staff were given specialized training to assist in their refocused responsibilities.
HUD also took actions to strengthen program administration and reduce weaknesses in its monitoring and oversight. For example, HUD created a Risk Management Division in the Office of the Chief Financial Officer (CFO) to manage risk assessments of programs, track progress toward resolving audit findings, and coordinate with HUD management on financial management issues. HUD furthermore implemented a new training program on compliance and monitoring to emphasize consistent monitoring practices and procedures. HUD provided training to over 1,500 employees in 13 4-day sessions. The Department also implemented a Quality Management Review program during fiscal year 2000 to help improve its operations and identify best practices at field offices. HUD sends teams of staff to field offices to review processes and procedures and identify issues that affect the offices’ ability to do their work. These reviews also serve as a means to provide and obtain feedback from managers and staff on specific problems. Through fiscal year 2001, HUD has conducted 21 reviews at various field offices across the United States. HUD also implemented the Credit Watch and Neighborhood Watch programs that enable the Federal Housing Administration to analyze trends in claim and default data by lender and impose sanctions on problem lenders, which assisted in its monitoring of lenders. Information and financial management systems can improve accountability and control of programs if they provide reliable and complete data. HUD’s management reform plan articulated a goal to modernize and integrate the Department’s financial management systems into a single financial management system. HUD was engaged in a long-term effort to integrate its systems, eventually termed the Financial System Integration (FSI) project, which evolved into a plan to integrate as many as 100 separate systems into 9 new integrated systems.
During fiscal year 2000, HUD reassessed the FSI project, determined it was over budget and did not meet its needs or achieve the results desired, and narrowed the scope of the project to completing a core general ledger system that is compliant with federal financial systems requirements. HUD reported that it completed the general ledger in November 2000 and declared the FSI project complete. HUD also developed a Financial Management Vision statement to begin its work on the next generation of financial management systems. In commenting on a draft of our report, HUD stated it analyzed the vision statement developed by HUD’s prior administration and is moving forward with two initiatives to improve its financial management systems. In addition, HUD implemented the prototype of an information system and data warehouse, known as the Enterprise Information System, formerly known as the Empowerment Information System, that is to provide HUD users and business partners with access to reports and analytical information across a large variety of program data sources. This system uses selected financial data and data from the Community 2020 software used for planning, mapping, and communication. HUD also reported it reduced the number of systems that do not conform to federal financial standards from 18 of 73 systems reported in fiscal year 1999 to 11 of its 67 financial management systems as reported in its fiscal year 2000 financial statements. HUD reported that five systems were discontinued or reclassified as nonfinancial systems and the Department corrected deficiencies in three of these nonconforming systems. HUD also established the Office of Chief Information Officer (CIO) and has expanded the role of the office to assume responsibility for the planning and acquiring of nonfinancial systems, such as the executive information system and the departmental grants management system. 
As we reported in January 2001, HUD has also made some progress toward improving its internal control environment and addressing the long-standing material internal control weaknesses identified by the HUD OIG in its audits of HUD’s consolidated financial statements. As of March 2001, the HUD OIG reported that HUD successfully addressed issues associated with a major systems conversion effort that had caused the OIG to disclaim an opinion on HUD’s fiscal year 1999 consolidated financial statements. The OIG stated that its ability to conclude that HUD’s fiscal year 2000 financial statements were reliable is noteworthy. HUD has made some progress since fiscal year 1999 in reducing the number of material internal control weaknesses that the OIG reports. The OIG reported eight material internal control weaknesses in fiscal year 1998 but downgraded, recategorized, or eliminated some—such that four remained as of the end of fiscal year 2000. This change was partially due to HUD establishing the REAC to assist in monitoring its multifamily property inventory, as well as the OIG’s determination that HUD’s human resource issues were better addressed as contributing factors to other material weaknesses than as stand-alone issues. We also reported that HUD has actions under way or planned to address other material internal control weaknesses, including an income verification process to determine the extent of overpayments and underpayments in its assisted housing programs. In response to our recommendations, HUD has also taken actions to improve its information technology investment management process.
These steps include (1) establishing a project scoring and selection process and a Technology Investment Executive Board Committee to make project selection and funding decisions; (2) developing procedures and a control process and establishing a senior review board to perform reviews of ongoing projects; and (3) developing evaluation procedures to determine whether information technology investments are achieving the expected benefits and to identify opportunities for further improvements. HUD has also taken steps to improve its acquisition process and hold contractors accountable for their work. For example, HUD created an Office of the Chief Procurement Officer and a contract management review board, provided training for contract technical representatives and managers, and increased the use of performance-based contracts. HUD provided training to its Government Technical Representatives (GTR) to improve their ability to manage and monitor contractors. In addition, the specialized centers help HUD improve accountability and control in its programs. These centers are separate entities whose functions are to collect data about HUD’s and HUD partners’ operations, evaluate the data, refer problems to field or program offices or take legal action when necessary, consolidate functions, and centralize activities. Most importantly, to help emphasize HUD’s renewed focus on reducing fraud, waste, and abuse, HUD created the Departmental Enforcement Center (DEC) to consolidate all noncivil rights compliance enforcement functions for HUD’s program offices. The DEC currently focuses on the problems of distressed multifamily properties that have failed physical and/or financial inspections that require corrective actions by owners, lenders, and management agents. HUD reported that, during fiscal year 2000, DEC actions resulted in the restoration of 41,344 housing units to decent, safe, and sanitary conditions, versus 968 in fiscal year 1999.
In addition, DEC enforces administrative and regulatory business agreements by debarring or suspending lenders in noncompliance with HUD requirements and by imposing monetary penalties. HUD reported that DEC actions resulted in savings of $29.7 million to the federal government in fiscal year 2000, through recoveries obtained, savings in program funds, and avoidance of insurance claims. Despite all of these efforts, we have identified some areas in which HUD could improve accountability in its management and programs. Specifically, HUD faces challenges in improving its strategic planning process, monitoring activities, information and financial management systems, contracting procedures and oversight, and center operations. While HUD has continued to improve its strategic planning process, it is unclear whether HUD achieved its goal to use GPRA to increase accountability for results. We have reported that HUD’s annual performance report does not clearly articulate the contribution of HUD’s programs to the desired outcomes. Specifically related to the 2020 management reform, HUD’s progress toward reducing waste, fraud, and abuse in its programs is not clear, based on the results shown in the performance report. Also, we continue to raise concerns about the completeness and reliability of the performance data used to report HUD’s activity. In a review of HUD’s compliance with GPRA, the HUD OIG also reported that although HUD has improved its plans and reports, it is not fully complying with the requirements of GPRA. Therefore, the OIG concluded that the President, the Congress, and taxpayers are unable to fully use the plans and reports to measure the results and scope of HUD’s operations. Monitoring of HUD’s programs has been hampered by staffing issues associated with HUD’s reorganization: staff reductions, a lack of experienced staff, and insufficient resources—such as travel funds—hinder effective monitoring.
For example, the 2020 Management Reform Plan noted that staff reductions of 23 percent in one program had prevented adequate monitoring. In this program, one field office at which HUD performed a quality management review in fiscal year 2000 reported that staff shortages have reduced monitoring visits so that only 3 out of 40 homeless grants could be monitored during a 1-year cycle. As discussed earlier, workload problems at the field offices and new centers hinder monitoring efforts. In some cases, the creation of new programs and regulations has further burdened the staff and may have adversely affected their ability to monitor their programs. In our survey of HUD’s PIH managers, respondents told us they are not currently adequately prepared to assist housing agencies in improving their performance because of, among other factors, the field office workload and the workload relative to staff qualifications and training. The HUD headquarters official in charge of field office operations acknowledged that field offices need additional training in part because of the numerous new and revised program requirements resulting from recent public housing reforms. In December 1999, HUD announced the Teacher Next Door Program, an expansion of its Officer Next Door Program introduced in 1997, which allows teachers to purchase HUD-owned homes at 50 percent off the list price in HUD-designated revitalization neighborhoods. In a February 2001 interim report, HUD’s OIG concluded that the management control procedures that HUD had in place for the Officer/Teacher Next Door programs were not adequate, which significantly increased the risk of program fraud and abuse. In response, in April 2001, HUD announced that it would suspend these programs for 120 days while it strengthened its oversight measures. In its June 2001 final report, the OIG reported, among other things, that in 23 of the 108 cases reviewed, homebuyers abused the program by not fulfilling the occupancy requirements.
Also, homes were sold outside of the designated areas and were therefore improperly discounted by about $1.2 million. The programs resumed on August 1, 2001, with new procedures for oversight. The separation of community outreach functions from compliance functions encountered difficulties that limited improvement of the monitoring process. The community builder function, intended to help communities access HUD’s services more efficiently and allow program staff to focus on “public trust functions,” drew criticism from the Congress, HUD’s OIG, and employees. According to HUD’s OIG, to establish the community builder function, HUD had to allocate resources (salary, training, travel dollars, and personnel) from program monitoring and enforcement actions at a time when HUD was significantly decreasing its general workforce. HUD created 850 community builder positions: 390 were permanent career positions, and 460 were 2- or 4-year temporary appointments, or fellowships. The majority of the HUD managers we interviewed told us that they did not believe that HUD was successful in separating community outreach from public trust responsibilities. Some managers and staff told us that because the community builders lacked expertise about HUD programs, program staff had to take time away from their responsibilities to educate the community builders. Confusion about the role of the community builders and their authority also surfaced. A HUD consultant reported on the resistance and resentment surrounding the introduction of the community builders into the Department and recommended that HUD work with employees to address these issues and clarify the results the community builders were to produce. The Congress subsequently terminated the fellowship aspect of the program. HUD has not yet achieved the 2020 management reform goal of having state-of-the-art and fully integrated financial management systems.
In its audits of HUD’s consolidated financial statements, the OIG reported that the most critical need faced by HUD in improving its control environment is to complete development of adequate systems. While it was a reasonable decision to refocus and terminate HUD’s FSI project given the expense and problems experienced, HUD’s information and financial management systems, after a decade of efforts to improve them, are still not sufficient to meet its needs and may not be for some time. Our work and that of the OIG show that despite HUD’s various efforts to improve its financial systems, its systems do not provide sufficient support to its programs and business processes. For example, we found that the HOCs use a combination of older systems, called legacy systems, and newer information systems that are not integrated with each other. To compensate for insufficient systems, costly manual analysis and other inefficient practices are required to perform routine day-to-day work. Four years after the single-family consolidation began, some systems still record data by field office or contract area rather than by HOC, requiring that the staff generate multiple reports and make manual calculations to perform analyses and develop reports. HUD also reported in its resource study that information systems were not integrated and that data often had to be retrieved from multiple systems. As a result, systems do not efficiently provide the information that managers need to carry out program activities. These inefficient systems also reduce the staff’s ability and amount of time available to focus on other activities, such as monitoring. The OIG furthermore reported in its audit of HUD’s fiscal year 2000 consolidated financial statements that the Department’s financial management systems, including its core financial system, do not fully comply with federal financial system requirements.
In addition, the OIG reported that

- weaknesses remained in the supporting financial systems and delays in integrating the financial systems continued;
- although some improvements were made, management plans for additional systems improvements were not clear and had not been supported by adequate analysis;
- general control weaknesses remained in HUD’s systems pertaining to its controls over the computing environment, administration of personnel security operations, and the reliability and security of critical financial systems;
- HUD’s systems remained vulnerable to unauthorized access, and HUD’s Central Accounting and Program System was vulnerable to errors and systems failures because of weak maintenance practices; and
- two material internal control weaknesses remained related to HUD’s systems: HUD needs to (1) complete improvements to its financial systems and (2) enhance the Federal Housing Administration’s (FHA) information technology systems to support its business processes.

HUD’s OIG also reported that the Department has two other material internal control weaknesses pertaining to oversight and monitoring of housing subsidy determinations and ensuring that subsidies are based on correct tenant income. HUD spent about $19 billion in fiscal year 2000 to provide rent and operating subsidies. Errors made in rent calculations and the misreporting of income by tenants result in HUD making higher subsidy payments than necessary. A recently completed study of rent determinations estimated that errors made by project owners and housing authorities resulted in about $1.7 billion in subsidies overpaid on behalf of households paying too little rent and about $0.6 billion in subsidies underpaid on behalf of households paying too much rent.
Additionally, HUD performed computer matching of income reported by tenants with data from sources such as the Internal Revenue Service and Social Security Administration and estimated that housing subsidy overpayments from incorrectly reported tenant income totaled about $617 million, plus or minus $10 million, during calendar year 1999. Although the 2020 management reform plan noted the importance of accurate subsidy payments as a means to reduce fraud, waste, and error in its programs, according to HUD, it did not fully address the nature and scope of the high incidence of program error and improper payments. However, HUD has since developed a more comprehensive corrective action strategy. In 2001, HUD instituted a Rental Housing Integrity Improvement Project to address HUD’s high-risk status and material weaknesses in the rental housing assistance programs area. High-quality software is essential for HUD’s information systems to provide reliable management, financial, and administrative information and support for the Department’s many programs. We recently found weaknesses with HUD’s software acquisition practices in four key process areas: requirements development and management, project management, contract tracking and oversight, and software evaluation. We found that HUD’s software acquisition processes are undeveloped and are not repeatable on a project-by-project basis because of the many weaknesses in the specific management and oversight activities related to each process. Strong performance of these activities is essential for achieving effective, repeatable, and lasting implementation and institutionalization of the key process areas we reviewed. Currently, HUD’s success or failure in acquiring software depends largely on specific individuals, rather than on well-defined and disciplined software acquisition management practices.
As a result, HUD is exposed to a higher risk that software-intensive acquisition projects will not consistently meet mission requirements, perform as intended, or be delivered on schedule and within budget. HUD’s seeming inability to reform its information and financial management systems is troubling, not only because it raises concerns about data reliability, but also because it challenges the ability of the various entities and staff within HUD to effectively perform their jobs and communicate within the organization. Inadequate systems could affect HUD’s ability to collect accurate and adequate information and effectively report on its program results. HUD has experienced problems with contractor performance and its oversight of contractors. HUD and its staff face a number of challenges managing the workload associated with contractors. For example, the HOCs’ reliance on contractors has grown, but the ability of HUD staff to monitor contractors has not kept pace. Some HOC managers told us that it was a challenge for their staff to shift from performing insurance endorsement and property disposition activities to monitoring the performance of contractors. Although HUD’s resource study identified contract management as becoming a significant workload issue for field offices during the first phase of the study, our recent work shows that data needed to manage contracting costs and monitor contractor performance are not readily available and cannot be readily extracted from HUD’s systems without extensive manual analysis by the HOC staff. The resource estimation study also reports that conflicting work priorities limit time for contractor oversight and monitoring tasks in PIH. According to HUD’s Chief Procurement Officer, HUD has not comprehensively assessed the functions that need to be contracted out. HUD uses contractors for a number of reasons, but most importantly it uses them to compensate for staff shortages.
For example, the GMC uses contractors to assist in its categorical grant review process. However, using contractors to perform these functions creates additional oversight issues that the staff might not be able to manage. As discussed above, the specialized centers were established to centralize functions and increase program accountability. Yet implementation difficulties at some centers adversely affect HUD’s ability to ensure accountability in its programs. Specifically, the DEC, which was created to reduce fraud, waste, and abuse, has not taken on the workload expected. To date, it has received few referrals from program offices other than Housing. PIH reported that it has not sent referrals to the DEC from the TARCs because of delays in implementing the new Public Housing Assessment System (PHAS). Officials from Community Planning and Development (CPD) told us that they prefer to handle problems with grantees first and send them to the DEC only as a last resort. This lack of referrals affects the operations of the DEC, as well as the program and field offices that continue to manage work that was supposed to have been removed from their responsibility. However, we have also reported that DEC officials believe the center lacks sufficient and experienced staff, an issue that may need to be resolved before this additional workload could be transferred to the DEC. HUD’s efforts to refocus and retrain its staff have achieved some success. The Department’s human capital has been an area of concern since we first identified staffing issues as a management deficiency that contributed to HUD’s designation as a high-risk area in 1994. HUD’s efforts to refocus and retrain its workforce included reducing the number of employees, moving staff to other positions, increasing training for staff, developing an appraisal system, and implementing a process to improve its estimation of resource needs and allocation of staff for its programs.
Still, HUD is left with residual morale issues and skill gaps. Furthermore, HUD’s human capital problems could be exacerbated by projected high levels of future retirements. As a part of the 2020 Plan, HUD was to refocus and retrain its staff to ensure it had the skills and resources where needed, using buyouts and staff movements. With the implementation of the 2020 Management Reform Plan, HUD planned to reduce staffing from 10,500 at the end of fiscal year 1996 to 7,500 by fiscal year 2002 through buyouts, attrition, and outplacement services in lieu of reductions in force. HUD reduced staffing to about 9,000 full-time positions by March 1998, when the downsizing effort was terminated. During this time, HUD initiated various personnel actions to implement the reforms and refocus its staff, including notifying about 3,000 staff that their jobs were unaffected by the reforms; notifying over 3,100 staff that they would be voluntarily reassigned to what HUD termed substantially similar positions in the same geographic area (positions with similar duties, critical elements, and qualifications that could be performed by employees with little loss in productivity); offering buyouts that resulted in about 1,000 employees leaving the Department; and placing over 1,000 staff in new positions under a merit staffing plan. HUD’s expenditures for technical and management training increased from $0.32 million in fiscal year 1997 to $5.9 million in fiscal year 2000. According to HUD officials, the Department has trained reassigned staff by expanding its training curriculum and introducing computer-based and satellite training. The majority (74 percent) of the managers who participated in our August 2000 survey of 155 managers were satisfied with the quality of training. HUD also established a new employee appraisal system for its senior executives and managers in December 1999.
The Department reports that its Executive Performance and Accountability Communication System for Senior Executives is designed as a results-oriented and performance-based system that links directly to the Department’s strategic plan and organizational goals and objectives. The Performance Accountability and Communication System for managers and supervisors is intended to establish accountability for individual performance by linking performance appraisals to the Department’s strategic goals through its Business and Operating Plan process. According to HUD, the system emphasizes two-way communication of performance requirements and results, as well as continuous improvement of individual and organizational performance. In May 2001, we reported that 79 percent of HUD managers said that they were held accountable for results to a great or very great extent. We also reported that HUD managers said that employees receive positive recognition for helping the agency achieve its strategic goals; managers are held accountable for results; they have outcome and output measures; and performance information is used to set program priorities, allocate resources, coordinate program efforts, and set job expectations. By a statistically significant margin, a higher percentage of HUD managers held these opinions than at the other 25 agencies surveyed in the rest of the federal government, excluding the General Services Administration and the Small Business Administration. In August 2000, HUD initiated a process to systematically estimate the number of employees it needs, based on its workload and operations. As of July 2001, HUD had completed the first two phases of this resource estimation process, which covered about 83 percent of its staff and 12 of its major areas, including PIH, Administration, Community Planning and Development, Multifamily Housing, Single-Family Housing, and Fair Housing and Equal Opportunity.
Each study defines the work for an individual office, estimates the volume of work, calculates the resources required to perform the work, and identifies a framework for workload reporting. In addition, in May 2001, HUD decided to integrate its strategic workforce planning activities into this process. The resource estimation studies were scheduled for completion in December 2001. In August 2000, HUD completed a study of its strategy for succession planning, which includes information on the retirement eligibility of its staff. This study collected data to help define potential human capital issues associated with retirements over the next 3 years and to assist with HUD’s succession planning. Although HUD’s actions were intended to improve overall operations, HUD continues to face problems associated with its efforts to refocus and retrain its staff. These challenges include ensuring adequate levels of staffing, maximizing staff effectiveness, addressing morale issues, compensating for the loss of knowledgeable staff due to retirement, and ensuring sufficient training. HUD’s plan to reduce its staff to 7,500 was only partially implemented because the Department did not anticipate how some of these actions would affect its staffing and its operations. Specifically, the plan apparently did not consider the long-term implications for HUD. In a 1998 review of the 2020 plan, we reported that HUD had not based its staffing target of 7,500 on a systematic workload analysis to determine its needs and questioned whether HUD would have the capacity to carry out its responsibilities once the reforms were in place. HUD terminated the downsizing effort in May 1998. The Department stated that the staffing level would be maintained at the current level (about 9,000) unless the Congress enacted legislation to consolidate HUD’s programs and further reductions could be made in the number of troubled multifamily assisted properties and troubled public housing authorities.
HUD has continued to experience some problems from its downsizing efforts. Although HUD implemented hiring limitations and offered early retirement and buyouts to reduce staff, the Department has not been able to reduce its number of programs and has not yet fully realigned its workload. Consequently, HUD field offices and centers continue to report that the Department lacks adequate staff to fulfill all its responsibilities. Our recent survey of HUD managers indicated that they believe they do not have enough staff and that their workload has increased, rather than decreased as envisioned under the 2020 plan. Also, according to the resource estimation study, staffing levels in the field often are not based on workload or skill requirements, but rather on the number of employees located there when the organization was established. HUD’s current ability to respond to some of these concerns is restricted because the Congress has limited HUD’s staffing levels until it completes the ongoing development of a process to systematically estimate its resource needs based on its current workload. HUD has also experienced some morale problems as a result of the downsizing activity. In October 1997, HUD sent letters to each of its employees regarding their job status under the reforms. As reported above, HUD notified about 6,100 staff that they had a position within the organization, but about 3,000 staff were notified that they had not been placed in a position in HUD’s new organization. These employees were generally referred to as “unplaced” staff. The letter stated that these employees would maintain their current jobs if they did not obtain another position within HUD or a new job outside of HUD. The letter also stated that HUD would not implement a reduction in force until 2002, if one were necessary.
When HUD made the decision to cease the downsizing, about 1,300 of these employees remained at HUD who had not yet been able to obtain other positions within the new organizational structure. HUD decided that jobs would be found for those staff without permanent positions, which resulted in the majority of them remaining in their same locations. By September 1998, HUD reported that most of the remaining unassigned staff had been placed into permanent positions. This “unplaced” designation created problems that continue to raise concerns. First, managers told us the designation created a morale issue because those staff were generally stigmatized as poor performers, and the “unplaced” designation implied that they were not needed. This was of particular concern to those staff who were unable to obtain positions through voluntary reassignment or merit selection. Second, some of these staff were allowed to stay in positions where work no longer existed. Many of the single-family staff located in the field offices were included in this unplaced category. Third, although some of these staff were subsequently moved to positions in other program areas or centers, some managers reported that these staff had difficulties learning their new responsibilities. For example, several of the managers with whom we spoke who received unplaced staff indicated that, despite the training the unplaced staff received, they are still not suited for the positions to which they were assigned. Similarly, in our April 2000 report on HUD’s oversight of FHA lenders, center officials maintained that inexperience on the part of staff was one reason why the highest-risk lenders were not always reviewed. According to the officials, many of the staff assigned to review lenders came from this pool of unassigned staff and had no background in lender monitoring and credit issues.
As a result of the staffing reductions and hiring limitations undertaken in recent years, HUD’s staffing problems could be further affected by the high percentage of staff eligible for retirement in the near future. HUD has not yet developed reliable projections of how many of its eligible employees may actually retire, but according to the Department’s study of the retirement eligibility of its staff, more than 50 percent of seasoned staff from its core business groups are eligible to retire over the next 3 years. Recent estimates by the Office of Personnel Management show that about 39 percent of HUD staff will be eligible to retire within the next 5 years, placing HUD among the federal agencies with the highest percentage of retirement-eligible employees. Managers in the field told us they are concerned about the potential retirements and the impact on their offices’ ability to do their jobs. Some program offices could lose over 40 percent of their current staff to retirement if eligible employees choose to leave now. See figure 3 for retirement eligibility in six program areas, covering about 6,000 employees. Additionally, HUD staff in field offices and centers have specific training needs that are not being met. Our August 2000 telephone survey indicated that although managers reported that training and staff skills have generally improved, they believe that training should be increased. Managers at the field offices and centers agreed that HUD should increase training in specific areas. Specifically, managers stated that training should be increased in the areas of information systems (75 percent), program regulations and changes (72 percent), technical job skills (71 percent), and interpersonal skills (59 percent). Center managers were more critical of HUD’s training than field office managers.
We believe that this difference in managers’ views on HUD’s training is consistent with the fact that the centers are fairly new, and staff might require more training to learn the specialized skills needed to do their jobs. For example, although the Enforcement Center assigned mentors and conducted extensive training, a significant amount of training is still needed, particularly related to servicing and enforcement options for troubled multifamily properties. Over the years, HUD has increased its training budget, and HUD officials described the instruction offered by the Training Academy as good; however, most of these officials expressed concerns about the availability of specialized training to meet the specific needs of their centers, offices, or programs. For example, most HUD staff with whom we spoke listed training needs that are specific to the skills needed to perform their jobs, such as financial analysis, marketing, or accounting. However, many of these staff stated that they are unable to attend outside courses to meet these training needs because of a lack of local training and travel funding. In addition, we recently reported similar training concerns for the Enforcement Center. Field managers also noted that the administration of training is too centralized. As discussed earlier, perhaps the most significant human capital issue that remains to be addressed is determining the most effective deployment of HUD’s workforce and distribution of the workload among the centers and field offices. HUD acknowledges that it is still struggling to address this problem and that inefficiencies exist, resulting from its decision to allow staff working in the single-family program to remain in field offices, even though the work was centralized into the HOCs. Additionally, the majority of staff we interviewed at HUD felt that they had not received sufficient, or in some cases any, guidelines and protocols for their interactions with the new centers.
In November 2000, a HUD consultant’s review of the centers’ and field offices’ work showed that the agreements that govern their working relationships have not yet become effective accountability tools and, according to some field staff, were being ignored. HUD’s 1997 Management Reform Plan initiated major changes throughout the Department to try to resolve its management problems and improve HUD’s image among its various clients. HUD has made progress on both counts, and HUD generally considers its reorganization complete. However, HUD still faces considerable challenges in ensuring that these management reforms will amount to sustainable improvements in HUD’s performance. These challenges cut across HUD’s programs and its efforts to consolidate and streamline its operations, improve accountability and control of its programs, and refocus and retrain its staff. Successfully addressing these challenges in the areas of human capital, information and financial management systems, and acquisition management will determine whether HUD can sustain the progress of its management reform efforts and make progress toward its goal of becoming a high-performing organization. Insufficient staffing and inefficient distribution of workload affect the ability of HUD’s specialized centers to operate efficiently in the manner originally envisioned under the 2020 Management Reform Plan. They also affect HUD’s ability to monitor and ensure accountability of its programs in both the centers and its program areas. Insufficient staffing increases HUD’s need to hire contractors to perform activities and affects its ability to oversee contractors and ensure contractor performance. As a result, human capital is the primary issue for HUD because of its crosscutting implications and its impact on major activities such as monitoring and contracting.
HUD has the opportunity to develop a strategic human capital management approach that uses all available tools, including administrative authority, involves employees, and is aligned with HUD’s programmatic goals. Based on our work, such an approach could include the following elements:

Use of the resource estimation and allocation studies and other available data to deploy the workforce appropriately, so that a match exists between staffing levels, staff skills and competencies, and the workload staff are asked to manage.

Implementation of a skills-assessment program that, most importantly, considers the skills necessary for the successful accomplishment of the agency’s mission and programmatic goals. This assessment would identify gaps in the skills currently held by the workforce that are necessary for program implementation, prioritize the use of limited training resources, and focus recruiting efforts.

Development of a comprehensive succession plan for addressing the pending retirement wave expected at the Department, including developing reliable projections of the number of eligible staff who may actually retire. The plan should include specific goals and strategies and incorporate how to transition work responsibilities to incoming staff.

Creation of a recruitment and retention plan based on programmatic priorities for meeting future staffing needs.

Effective information and financial management systems are crucial to effective and efficient operations and sufficient management control at HUD. However, HUD has not yet achieved the 2020 management reform goal of having state-of-the-art and fully integrated financial management systems. Although we believe it was reasonable to rescope and ultimately terminate the FSI project, given the problems experienced with it, as we have previously recommended, HUD still needs a modern, integrated financial management system.
Ineffective information and financial management systems adversely affect HUD’s programs and operations and impair its staff’s ability to obtain the reliable data needed to monitor its programs. Ineffective systems contribute to workload inefficiencies because they create additional work that can take staff away from other essential activities, such as monitoring. In addition, ineffective systems complicate oversight of contractor activities. Since HUD plans to make substantial investments in new systems over the next few years, it is important that the Department continue to improve its software acquisition processes and maintain a high level of management attention on its information technology investments. HUD has substantially increased its contracting in the last few years, and this trend is generally expected to continue. The Department, along with other federal agencies, has experienced problems in effectively overseeing its contractors. The acquisition management challenges HUD will likely face include deciding what work is best done by HUD employees and what is most efficiently contracted out, rather than relying on contracting to resolve resource shortfalls; identifying emerging staff workload needs in order to manage decisions to contract out work; and ensuring that staff are adequately trained in responsibilities related to contract management and oversight. Under the 2020 Management Reform Plan, HUD made progress in addressing some of its significant problems and initiated major changes throughout the Department, but HUD still faces considerable challenges in ensuring that these management reforms will amount to sustainable improvements in HUD’s programs and performance.
As HUD’s new leadership moves forward and makes decisions about organizational structure and programs, it has the opportunity to assess the lessons learned from the last few years and involve its employees, clients, and the Congress in establishing a shared vision for identifying and resolving the remaining challenges. Once consensus is achieved on the challenges and strategies, resources can be more effectively targeted toward resolving those issues deemed to have the highest priorities. To determine the status of the reforms, including progress made and problems encountered, we reviewed HUD’s 2020 Management Reform Plan, HUD’s internal status reports on the progress and accomplishments of the reform initiatives, HUD’s OIG reports on the reforms, and external assessments of the reforms by HUD consultants and GAO. In addition, we interviewed HUD management officials in the following offices: Community Planning and Development, Fair Housing and Equal Opportunity, Field Policy and Management, Housing, PIH, Administration, the Training Academy, Departmental Operations and Coordination, the Chief Procurement Officer, the CFO, the Government National Mortgage Association, the CIO, the Office of Troubled Agency Recovery, and Policy Development and Research. We also interviewed managers at most of the centers created under the management reforms, including the REAC, DEC, FMC, and GMC. Finally, we interviewed 22 staff and 22 managers (some of whom also participated in our August 2000 survey) from field offices within PIH, the Office of Housing, Community Planning and Development, and Fair Housing and Equal Opportunity to follow up on their experiences with the management reform initiatives since we last talked with them in 1998. To address HUD’s human capital issues specifically related to the 2020 reforms, we reviewed our reports related to these issues, as well as HUD internal and external reports on human capital-related issues.
We also asked questions related to human capital issues, such as staffing, training, and workload, of the managers and staff we interviewed throughout the course of our assignment. We conducted this review from February 2001 to September 2001. We performed our audit work in accordance with generally accepted government auditing standards. In commenting on a draft of this report, HUD agreed that the report accurately depicts the status of HUD’s progress and problems, at varying points, and that it forms a baseline against which continuing improvements can be measured. We made technical corrections and updates to the report based on HUD’s comments, where appropriate. The complete text of HUD’s comments and our responses are included in appendix III. As arranged with your office, we will also send copies of this report to the Secretary, Department of Housing and Urban Development. We will make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Key contacts and major contributors to this report are listed in appendix IV.

Specific changes per 2020 Plan

Organizational changes

Create the following centers:

Real Estate Assessment Center (REAC), for reviewing and evaluating physical inspections and financial reporting.

Section 8 Financial Management Center (FMC), for integrated financial management of all Section 8 payment processing for Housing and PIH.

Housing: Single-Family Homeownership Centers (HOC), to consolidate all single-family operations, and Multifamily Centers, to carry out asset management and asset development functions.

PIH: Troubled Agency Recovery Centers (TARC), to deal with troubled public housing agencies; Special Applications Center, for consolidating nonfunded applications and processes for specialized programs; and the PIH Grants Center (known as the Grants Management Center (GMC)), for competitive grants management and management of the public housing operating fund and capital fund.
Chief Financial Officer (CFO): Accounting Center, to consolidate program and administrative accounting operations from 10 accounting divisions.

Office of Administration: Administrative Service Centers and Employee Service Center, to eliminate redundant administrative functions in human resources, procurement, and space planning, as well as payroll, benefits, and counseling services, respectively.

Redesign the contract procurement process to improve operations and oversight.

Consolidate routine cross-operational processing into centralized back-office processing centers, or hubs, in the field.

Consolidate program administrative functions into the Office of Administration.

Establish an Economic Development and Empowerment Service, aligning various job skills and other programs from CPD, PIH, and Housing.

Outsource legal and investigative services when appropriate.

Outsource technical assistance to grantees when appropriate.

Privatize physical building inspections, financial audits, technical assistance, and real estate assessments.

Consolidate 10 field accounting divisions into 1 accounting center within the Office of the CFO.

Consolidate operations in 51 field offices into 17 Multifamily Centers within Multifamily Housing.

Consolidate financial management and budget functions in the CFO.

Specific changes per 2020 Plan

Program changes

Privatize the HOPE VI construction management and development process as appropriate. (L)

Consolidate six homeless assistance programs. (L)

Merge the Section 8 certificate and voucher programs to streamline HUD regulations and oversight. (L)

Extend Federal Housing Administration (FHA) note sale authority permanently. (L)

Reform FHA single-family property disposition to reduce staff burden, value lost while in inventory, and exposure to risk. (L)

Reform 2: Modernize and integrate HUD’s outdated financial management systems with an efficient, state-of-the-art system.
The plan stated that the single most glaring deficiency of the Department was its financial management systems. At the time, every program cylinder operated its own financial systems, and there were 89 different systems. Compounding the difficulties, many of the systems could not communicate with one another. The plan was that HUD would have a common, consolidated financial management information system fully implemented by mid-1999. Expected benefits included greater financial management accountability and improved communication between HUD, its grantees, and communities across the country.

Reform 3: Create an Enforcement Authority with one objective: to restore the public trust.

The plan stated that the greatest breach of the public trust at HUD was the waste, fraud, and abuse in HUD’s existing portfolio of millions of housing units. Each of HUD’s program offices (PIH, Housing, FHEO, and CPD) operated independent enforcement functions, with different standards and procedures. HUD would combine enforcement actions for PIH, CPD, FHEO (noncivil rights compliance), and Housing into one authority. The Enforcement Authority was to be responsible for taking legal action against all PHAs that received a failing score on their annual assessment. The Enforcement Authority would also move against all Housing properties that failed physical and financial audit inspections, cleaning up the historical backlog of 5,000-plus troubled Office of Housing properties. The Enforcement Authority was also to crack down on all CPD and FHEO grantees who failed audit standards or who engaged in waste, fraud, and abuse. The Enforcement Authority would consolidate existing employees and contract with outside investigators, auditors, engineers, and attorneys where necessary and appropriate. This division would also serve as liaison with the Inspector General and coordinate its work with the Federal Bureau of Investigation, the Department of Justice, and the Internal Revenue Service.
Expected benefits included streamlining enforcement activity and dedicating resources to deal with troubled properties.

Integrate HUD’s fragmented financial management system, repairing or replacing HUD’s 89 separate financial management and information systems.

Use the advanced mapping software system, Communities 2020, to show communities the impact of HUD funding and activity in their area and enable them to plan, track, and measure performance.

Implement HUD’s new Management Integrity Plan.

Consolidate existing organizations and employees; contract where appropriate with outside investigators, auditors, and attorneys.

Monitor low-performing PHAs, properties failing physical and financial audit inspections, and CPD/FHEO grantees failing program compliance.

Create a business-like entity to clean up the backlog of over 5,000 troubled multifamily properties.

Streamline and privatize the process for Housing’s pursuit of negligent owners. (L)

Reform bankruptcy laws to prevent owners from using them as a refuge from enforcement actions. (L)

Reform 4: Refocus and retrain HUD’s workforce to carry out our revitalized mission.

The plan stated that HUD’s mission was never sharply defined and that HUD has changed its emphasis to suit the times. For example, after the HUD scandals of the 1980s, all emphasis was on monitoring and enforcing regulations. At other times, the emphasis was to help grantees do whatever they wanted. HUD would refocus its mission and then retrain the workforce to serve that mission. No matter what area an employee works in, his or her primary mission would be to either empower communities and people or to enforce the public trust. In the past, employees were too often charged to do both at the same time. The plan stated that these expectations were inconsistent and often contradictory.
The plan recognized that monitoring and helping grantees are distinct functions and must be performed by different individuals, and in different divisions, within the organization. HUD created Community Resource Representatives (also known as Community Builders), who were to empower the community by bringing in technical expertise and knowledge of finance programs and economic development. They were to be HUD’s “front door,” helping customers gain access to the whole range of HUD services. The Public Trust Officers were to work in the field offices as the front line for monitoring and would refer significant problem cases for enforcement to the new Enforcement Authority. HUD would also continue downsizing by consolidating and streamlining operations. Expected benefits included improving access of communities to HUD services; increasing monitoring; reducing fraud, waste, and abuse; and avoiding reductions in force.

Reform 5: Establish new performance-based systems for HUD programs, operations, and employees.

The plan stated that HUD uses an employee evaluation system that has some, but not significant, connection to program and agency long-term goals. HUD planned to explore changes to its system and implement effective and meaningful Government Performance and Results Act (GPRA) performance measures designed to hold HUD staff and grantees accountable for results. HUD also planned to push for changes in programs to emphasize performance, much of which would require legislation, for example, changing inflexible, labor-intensive competitive grants to performance-based formula grants. The new HUD would emphasize product over process, performance over paperwork. Expected benefits included uniform standards for measuring performance, productivity, and accountability; increased ability to measure and reward performance for employees and contractors; reduced burden on field staff for monitoring and oversight; and reduced costs for HUD’s assisted housing.
Specific changes per 2020 Plan

Organizational changes

Select and train staff as Community Resource Representatives, also known as Community Builders, and Public Trust Officers for all field offices.

Downsize HUD staff from 10,500 to 7,500, using skills and resources where they are needed most.

Develop a road map for downsizing HUD employees, including a buyout strategy and options for career transitions.

Streamline and consolidate operations and reassign staff to high-priority work.

Create meaningful GPRA performance measures that hold HUD staff and grantees accountable for results.

Specific changes per 2020 Plan

Program changes

Convert inflexible, labor-intensive competitive grant programs to performance-based grant programs, including Tenant Opportunities; Economic Development/Support Services; Public Housing Drug Elimination; Competitive PHA Capital Funds; and six homeless programs. (L)

Deregulate high-performing PHAs and smaller PHAs by mandating fewer reporting requirements. (L)

Create a Public Housing Authority Performance Evaluation Board. (L)

Mandate judicial receivership for PHAs on the troubled list for more than 1 year. (L)

Reduce excessive rent subsidies to market levels on assisted housing. (L)

Reform 6: Replace HUD’s top-down bureaucracy with a new customer-friendly structure.

The plan stated that, consistent with its new mission, HUD must redesign its structure. The top-down headquarters/field structure was outdated and outmoded, and HUD had not kept pace with similar reorganizations done in the private sector. Although HUD would retain all 81 field offices, it would be organized by function instead of by program. The newly consolidated operations would be located in processing centers, while HUD’s public and grantee outreach would be conducted in community-friendly locations.
Expected benefits included improving customer service; improving working relationships with customers; better compliance with the Chief Financial Officers (CFO) Act; and improving strategic planning, performance, and measurement of HUD’s operations. Create neighborhood “store-front” service centers in communities. Offer a single point of service to customers through Community Resource Representatives and centralize back-office centers. Establish a new management planning strategy. Streamline headquarters and redeploy staff to the field. Legend: L = legislation required, per the Management Reform Plan. What remains to be done: While we have recognized that the reorganization has resulted in some improvements, we have continued to identify issues related to the various reforms, as noted below. The reorganization is substantially complete, and some improvements have resulted. HUD’s OIG has also reported that the major 2020 organizational changes are complete. The REAC is fully operational and completed the first physical and financial inspections of HUD’s multifamily and public housing inventory, which found that over 80 percent of HUD’s inventory was in good physical condition. REAC’s responsibilities have expanded to include data collection and analysis of various HUD program activities. REAC manages a total of nine assessment systems, including reviewing single-family appraisals, conducting tenant income matching with Internal Revenue Service and Social Security Administration data, and lender monitoring. HUD reports that REAC processes information related to over 27,000 property inspections, 33,000 financial statements, 4.5 million tenant verifications, and about one million single-family appraisals. The FMC, located in Kansas City, Missouri, is operational and has assumed responsibility for processing a total of about 10,000 tenant-based and project-based Section 8 contracts for PIH and Housing, respectively.
Of these, about 4,200 are Housing contracts that were already under contract with third-party contract administrators. The physical inspections done by REAC were contracted out, and we have raised concerns about the quality of those inspections as well as about HUD’s follow-up of problems identified during those inspections. We recognize that HUD is taking steps to address these issues, but it will be important for HUD to ensure that the new procedures are being followed by the field offices and to perform assessments as needed to verify that deficiencies have actually been corrected. HUD consolidated single-family housing operations from 81 field offices into the 4 centers, which are fully operational and have resulted in faster processing times. Eighteen multifamily hubs and 33 program centers were created. HUD developed a universal position description for its multifamily property managers to increase its flexibility to manage its multifamily properties. HUD established 2 TARCs in Memphis and Cleveland that are responsible for about 50 troubled housing authorities. Reform 1: Reorganize by function rather than strictly by program cylinders. Consolidate and privatize as needed. Status of reform/accomplishments: The GMC centralized the processing, reviewing, and awarding of categorical and formula grants for PIH. To address the issue of insufficient staff, the GMC hired contractors to process its categorical grants. Between 1998 and 2000, GMC processed approximately 26,700 grant applications and awarded 24,000 grants in excess of $16.3 billion to public housing authorities and other grantees. What remains to be done: HUD has experienced problems implementing its new Public Housing Assessment System (PHAS), which have delayed the TARCs from operating as planned. In an August 2001 review, the OIG found the situation generally unchanged.
HUD reports that once the PHAS scoring system is fully implemented and/or the Section 8 Management Assessment Program begins to designate troubled Section 8 housing authorities, additional staff will need to be brought to the TARCs. HUD established the SAC to centralize the review, processing, and approval of nonfunded, noncompetitive applications related to demolition/disposition, designated housing, eminent domain, homeownership, and Section 202 Mandatory Conversion. In addition, it provides technical assistance and training to HUD’s public housing hubs and program centers, as well as to other external clients such as public housing authorities. HUD created a centralized accounting center in Ft. Worth, Texas, and reports that all major accounting work has been consolidated into the center and no outstationed staff remain in other field offices. CFO staff told us that two processes pertaining to older programs for which HUD retains oversight responsibility remain in D.C., but they will eventually move to the center. A NAPA study suggested changes, and HUD hired a chief procurement officer. HUD has also established a contract management review board, increased the use of accelerated contracting processes, and increased the use of performance-based service contracts. HUD has also added Government Technical Representative (GTR) and Government Technical Monitor (GTM) positions and provided specialized training to them. Since the 2020 reforms, HUD contracting has increased from about $250 million in fiscal year 1997 to over $1 billion in fiscal year 2000. This service was not created. We have continued to find problems in HUD’s ability to effectively manage and monitor its contracts, which indicate that additional work needs to be done in this area.
For example, HUD has experienced problems with adequately monitoring the performance of its contractors who manage and market the single-family homes HUD acquires following foreclosures, and with ensuring that contract inspections of multifamily properties were done consistently with HUD’s requirements. HUD has not been able to get all the Section 8 contract administrators on board, although the contracts were expected to have been awarded in fiscal year 2000. In March 2000, the HUD OIG stated that despite the contracting reforms, HUD had not substantially improved its contracting attitudes and practices. Not applicable. Reform 2: Modernize and integrate HUD’s outdated financial management systems with an efficient, state-of-the-art system. Status of reform/accomplishments: HUD reported to us that it considers this goal to have been met. HUD substantially rescoped its financial systems integration (FSI) project to focus on achieving a general ledger compliant with federal financial standards and completed the project in November 2000. HUD has developed a new vision to replace its systems integration strategy and has purchased commercial off-the-shelf software to address its financial systems needs and develop a systems architecture within which to manage future development. HUD established an Office of the Chief Information Officer (CIO) and centralized information technology development activity within that office. According to the CFO, this was not a single plan but a series of activities designed to improve HUD’s risk management and oversight of its programs.
These activities included revising HUD’s Management Control Handbook and providing related training; developing risk-based monitoring tools, guidance, and training; including Risk Management staff in HUD’s quality management reviews; and initiating a “control structure design project” to document the control structure of HUD’s programs. HUD has also initiated a Compliance and Monitoring Initiative to focus staff attention on monitoring training and initiated Quality Management Reviews of field offices. What remains to be done: Both we and the OIG report that HUD faces significant systems limitations, such that substantial work remains for HUD to have a common, consolidated financial management information system that supports program management decision-making and financial management. HUD has continued to experience difficulties in trying to improve its systems. Both we and the HUD OIG continue to report that HUD’s information and financial management systems are not yet complete and reliable. In our January 2001 report, we reported that HUD needs to deploy a reliable financial management system that meets its program and financial management needs and complies with federal requirements. In February 2001, the HUD OIG reported that significant internal control weaknesses exist in HUD’s financial management system, HUDCAPS, including weak maintenance practices and inadequate controls over data integrity and security access. In March 2001, the OIG noted that HUD continues to rely on extensive ad hoc analyses and special projects to develop account balances and necessary disclosures to complete its financial statements. In September 2001, we reported that HUD’s systems development and acquisition processes are underdeveloped and are not repeatable on a project-by-project basis. Our work and that of the OIG indicate that problems remain in HUD’s ability to monitor its programs, due in large part to resource issues and/or lack of guidance to field staff.
HUD reports that it needs to complete its Control Structure Design Projects to support further risk analysis and management decisions on other front-end risk reviews, special risk reviews, and program or operation changes. Reform 3: Create an Enforcement Authority with one objective—to restore public trust. The Enforcement Center has been operational for about 3 years, with satellite offices in five cities. HUD reports that during fiscal year 2000, DEC actions resulted in savings of $29.7 million to the federal government through recoveries obtained, savings in program funds, and avoidance of insurance claims. Monetary recoveries from judgments, assessments of penalties, and settlements were $19.1 million; enforcement actions resulted in prepayments by owners of $29 million; and loan indemnifications assessed were $10.6 million. The DEC has primarily focused on enforcement actions for multifamily housing. According to a March 2000 OIG report, the DEC’s accomplishments to date have been less than dramatic. Nearly all of its focus has been on multifamily program enforcement within Housing. The DEC has not received referrals from any other program offices. Additionally, the report states that unless HUD is willing to provide the DEC with the necessary authority and resources to make prompt decisions when pursuing enforcement actions, the DEC will not achieve its full potential of aggressively pursuing enforcement actions against noncomplying entities. Issues remain related to the staffing and training of the Enforcement Center staff, as well as its interaction with the other program areas. HUD reports that the program offices are developing processes and guidance to improve monitoring of public housing authorities and grantees. The DEC has received 3,149 referrals on properties and has completed about 60 percent, or 1,875. HUD reports that it has reduced processing time on cases from an average of about a year to an average of about 100 days.
The DEC has business and operating plan goals to reduce the backlog by 80 percent during fiscal year 2001 and avoid future backlogs by completing 75 percent of all cases within 180 days. Reform 4: Refocus and retrain HUD’s workforce to carry out our revitalized mission. HUD selected and trained permanent and temporary Community Builders for outreach functions. Public Trust Officers were appointed during fiscal year 2000 at higher grades in the field offices. These Senior Public Trust Officers also received specialized training. HUD also developed and provided compliance and monitoring training, which was to help the Public Trust Officers with their responsibilities. Status of reform/accomplishments: HUD established positions, including managers, contract administrator oversight monitors, Public Trust Officers, and Community Builders, to focus employees on HUD’s monitoring and outreach activities. HUD has a study of its resources estimation and allocation process under way that is expected to be done by December 2001. HUD offered buyouts, voluntary reassignments, and early-outs, and designated about 3,000 staff as unplaced. When the downsizing was terminated, the remaining unplaced staff were placed in permanent positions. What remains to be done: A November 2000 consultant’s report noted, among other things, that HUD has an opportunity to strengthen and sustain the reform by addressing the concerns that remain among the employees. Our surveys of HUD staff and managers indicate that workload issues remain. The resources study needs to be completed and the recommendations considered. Since HUD stopped downsizing, no further work remains to be done on this specific issue as stated in the plan. HUD has established the new centers and back-office processing centers as discussed above.
We have continued to report HUD’s human capital issues as an area that needs additional attention to ensure that HUD has the right number of staff with the proper skills. The OIG has also raised concerns about the complexity of the reorganization. The OIG remains concerned that staffing requirements, a critical element of the reforms, are still under development. Reform 5: Establish new performance-based systems for HUD programs, operations, and employees. HUD completed its strategic plan; its fiscal year 1999, 2000, 2001, and 2002 annual performance plans; and its fiscal year 1999 and 2000 annual performance reports. The plans and reports have improved each year as HUD has incorporated suggestions from NAPA, GAO, and others. HUD also implemented a management plan process that seeks to collect data from the field office and program level to roll up into the GPRA performance measurement process. We have reported that HUD needs to do additional work in developing goals and measures that make it possible to clearly determine the contribution of HUD’s programs to the outcomes in HUD’s strategic plan. We have reported that HUD also needs to continue improving its resource information, and data reliability and completeness. The HUD OIG, in a review of internal controls over performance data used for 1999, identified problems with the reliability of HUD’s data for selected performance indicators. In a November 2000 report, a HUD consultant noted that many of the performance measures in HUD’s management plan are still heavily focused on outputs and activities rather than outcomes for customers. Also, there is a lack of both customer/partner satisfaction data and information on the timeliness of processes, which the consultant said is of particular value to HUD customers. Reform 6: Replace HUD’s top-down bureaucracy with a new customer-friendly structure. Status of reform/accomplishments: As of July 2001, HUD had established 16 storefront centers since 1998.
Each storefront has a HUD kiosk located just outside of its doors to provide information on HUD programs 24 hours a day. HUD created the positions, known as Community Builders, and created back-office centers, such as the Accounting, Administrative Service, and GMCs. What remains to be done: In a March 2000 report on storefront operations, the OIG reported that HUD opened storefront offices to serve as national models for more responsive government. However, the OIG found that their impact is minimal and that overall benefits to HUD customers cannot be measured. Additionally, the OIG stated that funding for storefront operations could be better spent on improved oversight and monitoring of other HUD programs and that the public has less costly resources available to learn of HUD programs. The Community Builder role is under consideration for additional changes. In 1998, HUD implemented its Business and Operating Plan process that ties program office and field-level activities to HUD’s strategic planning. HUD reports that in March 1998, it implemented a voluntary relocation program and that at least 300 staff were reassigned to critical field office vacancies. We have raised concerns about whether the goals and measures currently used clearly reflect HUD’s contributions to the outcomes. Staffing and workload issues have yet to be resolved. HUD’s resource estimation study is scheduled to be completed in December 2001. The following are our responses to specific areas in the Department of Housing and Urban Development’s letter dated October 5, 2001. 1. We did not, as HUD suggested, modify or delete our discussion of SEI’s Software Acquisition Capability Maturity Model℠—specifically our evaluation of HUD’s software acquisition capability. High-quality software is essential for HUD’s information systems to provide reliable information and support to the Department’s many programs.
It is therefore essential that HUD address its software acquisition weaknesses to move forward with and help support its reform efforts. In our September 2001 report, we concluded that HUD’s software acquisition capability was immature because HUD did not fully satisfy the requirements for any of the “repeatable” key process areas we reviewed. We continue to believe, as HUD stated in its letter, that our review and related report provide a useful tool for the Department to assess its status in this area, because we discuss HUD’s strengths, weaknesses, or other observations on 310 key software acquisition practices. 2. We revised the draft to reflect the purpose of the January 2000 contract for both TARCs, as stated in HUD’s comments. 3. HUD contended that the issues related to the quality of REAC inspections and the correction of deficiencies have been resolved. After we have received and reviewed the report for the April 2001 through September 30, 2001, time period, we will be in a better position to assess HUD’s contention that “issues of quality of inspections are largely past.” We agree that the establishment of the HUD REAC has been a positive step. We also agree with HUD’s assertions that the concept of Uniform Physical Inspection Standards is sound and a standard automated review of financial statements provides HUD with a level of information that is unattainable manually. Furthermore, we recognize that in response to recommendations contained in our July 2000 report, HUD has taken a number of steps to strengthen the procedures that it uses to assess the reliability of the physical inspections that REAC performs. This includes agreeing with our recommendation that HUD assess the reliability of its inspections on a periodic basis and report on the results of these assessments.
We also recognize that HUD has recently taken actions to address recommendations in our June 2001 report on the processes that its Office of Multifamily Housing uses to ensure that deficiencies REAC identifies during physical inspections of multifamily properties are corrected. In particular, HUD has strengthened the procedures that HUD field offices are to follow to ensure that deficiencies are corrected by property owners. This is a positive step. Nevertheless, it will be important for HUD to ensure that the new procedures are being followed by the field offices and to perform assessments as needed to verify that deficiencies have actually been corrected. HUD Information Systems: Immature Software Acquisition Capability Increases Project Risks (GAO-01-962, Sept. 14, 2001). Single-Family Housing: Better Strategic Human Capital Management Needed at HUD’s Homeownership Centers (GAO-01-590, July 26, 2001). Department of Housing and Urban Development: Status of Achieving Key Outcomes and Addressing Major Management Challenges (GAO-01-833, July 6, 2001). HUD Multifamily Housing: Improved Follow-up Needed to Ensure that Physical Problems Are Corrected (GAO-01-668, June 21, 2001). Major Management Challenges and Program Risks: Department of Housing and Urban Development (GAO-01-248, Jan. 2001). Public Housing: HUD Needs Better Information on Housing Agencies’ Management Performance (GAO-01-94, Nov. 9, 2000). HUD Management: Status of Actions to Resolve Serious Internal Control Weaknesses (GAO-01-103, Oct. 16, 2000). HUD Housing Portfolios: HUD has Strengthened Physical Inspections but Needs to Resolve Concerns About Their Reliability (GAO/RCED-00-168, July 25, 2000). Observations on HUD’s FY 1999 Annual Performance Report and FY 2001 Annual Performance Plan (GAO/RCED-00-211R, June 30, 2000). Single-Family Housing: Stronger Measures Needed to Encourage Better Performance by Management and Marketing Contractors (GAO/RCED-00-117, May 12, 2000).
Single-Family Housing: Stronger Oversight of FHA Lenders Could Reduce HUD’s Insurance Risk (GAO/RCED-00-112, Apr. 28, 2000). Status of GAO’s Recommendations Related to High-Risk Issues at the Department of Housing and Urban Development (B-284624, Mar. 10, 2000). HUD’s Fiscal Year 2000 Budget Request: Additional Analysis and Justification Needed for Some Programs (GAO/RCED-99-251, Sept. 3, 1999). Homeownership: Problems Persist With HUD’s 203(k) Home Rehabilitation Loan Program (GAO/RCED-99-124, June 14, 1999). Community Development: Weak Management Controls Compromise Integrity of Four HUD Grant Programs (GAO/RCED-99-98, Apr. 27, 1999). Single-Family Housing: Weaknesses in HUD’s Oversight of the FHA Appraisal Process (GAO/RCED-99-72, Apr. 16, 1999). Major Management Challenges and Program Risks: Department of Housing and Urban Development (GAO/OCG-99-8, Jan. 1999). HUD Information Systems: Improved Management Practices Needed to Control Integration Cost and Schedule (GAO/AIMD-99-25, Dec. 18, 1998). Section 8 Project-Based Rental Assistance: HUD’s Processes for Evaluating and Using Unexpended Balances Are Ineffective (GAO/RCED-98-202, July 22, 1998). HUD Management: Information on HUD’s 2020 Management Reform Plan (GAO/RCED-98-86, Mar. 20, 1998). Housing and Urban Development: Potential Implications of Legislation Proposing to Dismantle HUD (GAO/RCED-97-36, Feb. 1997). HUD Information Resources: Strategic Focus and Improved Management Controls Needed (GAO/AIMD-94-34, Apr. 14, 1994). In 1997, the Department of Housing and Urban Development (HUD) began a management reform effort, called the 2020 Management Reform Plan, to resolve its management and operational problems. GAO found that HUD has had some successes in implementing the management reforms, but challenges remain. Some initiatives, such as consolidating and streamlining operations, were achieved relatively quickly and are producing results.
Other efforts, such as improving the efficiency of those operations and improving accountability, have been hampered by inefficient distribution of workload and other problems. HUD has made some progress toward improving accountability and control of its programs. Specifically, HUD developed a strategic planning process; enhanced monitoring tools; improved some aspects of its information and financial management systems; improved contracting procedures; and established centralized entities, such as an enforcement authority, to follow up on problem properties. HUD's efforts to refocus and retrain its staff have been somewhat successful. HUD faces several challenges in its efforts to consolidate and streamline its operations, improve accountability and control of its programs, and refocus and retrain its staff. Successfully addressing these challenges in the areas of human capital, information and financial management systems, and acquisition management will determine whether HUD can sustain the progress of its management reform efforts and become a high-performing organization.
About 1.2 million years ago, a volcano erupted and collapsed inward, forming the crater now known as the Valles Caldera, in north-central New Mexico. This geologically and ecologically unique area covers about 89,000 acres of meadows, pine forests, hot springs, volcanic domes, and streams that support elk herds and other wildlife and fishery resources. Figure 1 shows a view of Valle Grande from Redondo Peak, the highest elevation within the Caldera. The Caldera comprises the formerly private lands known as the Baca Ranch and is almost entirely surrounded by the Santa Fe National Forest and Bandelier National Monument. Figure 2 shows the location of the Caldera in relation to the Santa Fe National Forest and Bandelier National Monument. The owners of the Baca Ranch operated it as a working ranch, providing grazing for their own cattle and, for a fee, for livestock owned by other parties. According to the Preservation Act, the working ranch arrangement was to continue after the federal government purchased the ranch. In managing the Caldera, the Trust is to protect and preserve the land while attempting to achieve a financially self-sustaining operation. “Financially self-sustaining,” as defined by the act, means that management and operating expenditures—including trustees’ expenses; salaries and benefits; administrative, maintenance, and operating costs; and facilities improvements—are equal to or less than proceeds derived from fees and other receipts (including interest on invested funds) for resource use and development. Appropriated funds are not to be considered. To carry out its duties, the Trust has the authority to solicit and accept donations of funds, property, supplies, or services from any private or public entity; negotiate and enter into agreements, leases, contracts, and other arrangements with any individual or federal or private entity; and consult with Indian tribes and pueblos on matters that may affect them. 
The Trust is managed by a nine-member Board. The President appoints seven members, and the other two members are the Supervisor of the Santa Fe National Forest and the Superintendent of the Bandelier National Monument. Of the seven presidential appointees, who are selected in consultation with the New Mexico congressional delegation, five must be New Mexico residents. Appointees are to be selected based on their expertise or experience. Generally, one individual must be appointed with knowledge of or experience in each of the following: (1) livestock and range management; (2) recreation management; (3) sustainable management of forest lands for commodity and noncommodity purposes; (4) financial management, budget and program analysis, and small business operations; (5) cultural and natural history of the region; (6) nonprofit conservation organizations concerned with Forest Service activities; and (7) state or local government activities in New Mexico, with expertise in the customs of the local area. Board members are generally appointed to 4-year terms and can be reappointed; however, no Board member may serve more than 8 consecutive years. The Trustees select a chairman from the Board’s members. An executive director, who is hired by the Board, oversees the Trust’s day-to-day operations. The Board must hold at least three public meetings a year in New Mexico. Under the Control Act, the financial statements of a government corporation must receive an independent financial audit annually in accordance with generally accepted government auditing standards. In addition, agencies must submit annual management reports to Congress that include a statement of financial position, a statement of operations, a statement of cash flow, a budget report reconciliation, a statement on management controls, a report on the results of the annual financial audit, and other necessary information about the operations and financial condition.
The Results Act requires agencies to develop strategic and performance plans, measure performance, and report annually to Congress. The Results Act shifts the focus of an agency’s operations from reporting on activities toward achieving results. It requires a results-oriented strategic planning process with clearly defined strategic objectives linked to measurable performance goals and the collection of information to monitor and evaluate the programs. A strategic plan should contain the organization’s mission statement and strategic goals, a description of the means and strategies that will be used to achieve the goals, a description of the relationship between annual performance goals and the organization’s strategic goal framework, the identification of key factors that could affect achievement of the strategic goals, a description of program evaluations used in preparing the strategic plan, and a schedule for future program evaluations. The annual performance plan articulates measurable goals for the upcoming fiscal year that are aligned with an organization’s long-term strategic goals. The annual performance report compares an organization’s performance with performance goals for the past year. Implementation of the Results Act requirements enables managers to improve accountability, effectiveness, service delivery, and internal management, and to provide better information to Congress. A more effective management control program, as we have defined it for the purposes of this report, would encompass the requirements of the Control Act and the Results Act. These requirements include, among other things, (1) a strategic plan, (2) performance plans with measurable goals and objectives, (3) the identification and mitigation of program risks, (4) performance monitoring and reporting, and (5) annual audits. 
As required under the Preservation Act, the Board has taken steps to establish and implement management policies to achieve the goals of preserving and protecting the Caldera and providing for public recreation and sustained yield management. In particular, the Board (1) established a basic organization, (2) began to address infrastructure problems, (3) granted limited access to the public through its interim grazing and recreation programs, and (4) established an adaptive management framework. Between January 2001—the Board’s first meeting—and September 2001, the Board met regularly and held listening sessions with the general public to obtain views on how the Caldera should be managed. Separately, the Board met with representatives of local Indian tribes and pueblos. Using the information from these sessions, in December 2001, the Board issued 10 guiding principles for future decision making. These guiding principles, which are listed in appendix II, include a commitment to fair and affordable access for all permitted activities. At the same time, however, the Board stated that it would emphasize the quality of Caldera experiences over quantity, which could limit activities and fees. From January 2001 through August 2002, the Forest Service served as the interim manager, and the Board and employees from the Forest Service and other federal agencies conducted the Trust’s work. In October 2001, the Board hired its first employee, an executive director. During that year, the Trust’s office was located at the Santa Fe National Forest offices. The Trust officially assumed management of the Caldera in August 2002, after it provided for essential management services, including establishing staff, beginning business operations, and adopting management policies and procedures. During 2002, the Board drafted personnel and procurement policies and procedures as well as policies for environmental protection. 
It also drafted a tribal access and use policy to ensure access to the Caldera for religious and cultural purposes, as authorized by the Preservation Act. By the end of fiscal year 2002, the Trust had 7 employees, including business and resource managers. At the time of our review in 2005, the Trust was reorganizing under a new executive director and employed about 25 permanent and limited-term employees. Figure 3 shows the Trust’s proposed organization, as of September 2005. In addition, the Trust published its final management framework in May 2005. This document, entitled The Framework and Strategic Guidance for Comprehensive Management, describes the history and natural features of the Caldera, the goals of the Preservation Act, and the Trust’s approach for land stewardship, decision making, and public involvement. It further describes a range of potential public uses of the Caldera, from hunting and fishing to hiking and camping. From its inception through fiscal year 2003, the Trust maintained its financial accounts on the Forest Service’s financial system. However, in 2003, the Board decided to obtain an independent financial system for the Trust. The Trust contracted for financial services on the Oracle Federal Financial System managed by the Department of the Interior’s National Business Center—an option the Trust considered to be more cost-effective than developing a system in-house. Beginning with fiscal year 2004, the Trust maintains its financial information on that system. Shortly after the federal government assumed ownership of the Caldera, the Trust learned that the existing infrastructure—roads, buildings, fences, and water treatment facilities—was seriously degraded and would have to be rehabilitated before it could provide public access to the Caldera. The Trust began the rehabilitation work in 2002. Roads. The Caldera has an estimated 1,200 miles of roads, including 200 miles for the main access roads. 
Most of these roads had been constructed with little planning or engineering and had been used to support logging operations. They could not be readily used to support administration, ranching, recreation, and other needs. In 2002, 3.5 miles of Road 1, the main access road, and five key bridges were upgraded to all-weather commercial gravel standards; work on this road was completed in 2003. Road 2—a 10.2-mile access road—was upgraded in 2004 and 2005. Road work will continue as needs are identified and resources become available. Figure 4 shows a portion of the main access road to the Caldera after rehabilitation. Buildings, fences, and other facilities. From 2002 to 2005, the Trust conducted minor maintenance on the ranch buildings used to house employees. In 2002 and 2003, the Trust repaired the Caldera’s 54 miles of boundary fence and installed restricted access signs. In 2004, it assessed the layout and condition of 64 miles of interior fences. The height of the fences was lowered in many areas to allow for elk movement. The Trust also installed scenic vistas and kiosks on New Mexico Highway 4, the main access road to the Caldera, to allow public viewing of the Caldera. Other facilities—such as livestock corrals—were also assessed and rehabilitated. Figure 5 shows a historic building constructed in 1909 and used as a commissary where ranch hands on the Caldera could purchase supplies. Water treatment facilities. When the federal government acquired the Caldera, the existing water treatment facility was not functioning and the Caldera did not have potable water. Rehabilitating the system became a top priority for the Trust. Repairs to the water collection and filtration system were completed in 2004, and work was ongoing to repair the water distribution system in 2005. As currently scheduled, potable water will be available in the spring of 2006.
According to the Preservation Act, the Trust is to provide for livestock grazing consistent with the other purposes of the act. The grazing program, begun in 2002 as a 5-week drought-relief program, has been operating under an interim livestock management plan, which is effective through calendar year 2005. Until it can develop a more comprehensive strategy, the Trust has established an interim grazing program that allows grazing for between 1 and 2,000 cattle, depending on the condition of the forage. This level is lower than the private owners had allowed—up to 6,000 cattle during the spring, summer, and fall grazing seasons. Table 1 shows the level of livestock grazing through 2005 (estimated) by calendar year. Two of these grazing programs—Conservation Stewardship and Replacement Heifers—are designed, in part, to introduce local ranchers to more prudent management practices. Under the stewardship program, which replaced the cow-calf program, applicants have to demonstrate that they will implement projects on their own lands to improve the condition of the range while their cattle graze on the Caldera. The largest participant in the program, the Pueblo of Jemez, implemented major range improvements on its own lands, such as reseeding and resting rangeland. The Replacement Heifer Program allows ranchers to graze heifers on the Caldera and have the heifers bred with the Trust’s registered bulls, which are certified to produce calves with low birth weights. The program is designed to improve the genetics of local herds and to protect the heifers from dying or suffering other complications when they give birth. The Trust granted limited public access through recreation programs beginning in 2002. Recreation activities offered have included, for example, hunting, fishing, hiking, cross-country skiing, snowshoeing, sleigh rides, wagon rides, and horseback riding. In most cases, the Trust charged fees for access to the Caldera. 
Table 2 shows the level of public participation in the various recreation programs from calendar years 2002 through 2004. Participation in recreation activities is expected to increase for 2005. As shown in table 2, the Trust offered limited recreational opportunities in 2002—a total of 1,920 participants. However, in 2004, participation in recreation activities increased more than fourfold over the 2002 level. The fishing program hosted the most visitors over the period, accounting for approximately 26 percent of total visitors. Elk hunting/antler collection and hiking were also popular, each representing about 21 percent of visitor participation, followed by wagon/sleigh rides at about 15 percent. Fishing and fishing clinics. Two streams on the Caldera are suitable for fishing. In 2003, the Trust granted fishing access to 1,785 participants on a first-come, first-served basis. With increased demand in 2004, the Trust used a lottery system to award access to 2,107 participants. In addition, the Trust hosted youth and adult fishing clinics in both years. Elk hunting and antler collection. The Trust worked with the New Mexico Department of Game and Fish to set the numbers of available elk-hunting licenses and used a lottery and auction to award licenses. Participation in elk hunts has declined each year because the New Mexico Department of Game and Fish decreased the number of hunting licenses available in order to sustain a viable elk herd. In addition to the elk hunt, the Trust has offered area youth groups the opportunity to collect antlers shed by elk each year. These groups sell the antlers, which are generally used to make decorative items, such as lamps, and the groups use the proceeds to support nonprofit programs. Hiking. Beginning in 2002, the Trust provided guided hiking through a contractor to enable public access to the Caldera before it developed the infrastructure needed for general public access.
In 2003 and 2004, the Trust implemented its own hiking program, expanded the activity boundaries for hiking, and established unguided hikes as an option. In 2005, the Trust increased the number of trails available for hikers on the Caldera. In total, the Trust now has 24 miles of trails available for hikers. Wagon and sleigh rides. The Trust offered horse-drawn wagon and sleigh rides to visitors. The horse-drawn rides allow greater access to areas in the Caldera. Wagon rides can occur year-round, while sleigh rides require sufficient snowfall. Participation in wagon and sleigh rides has increased more than sixfold, from 250 in 2002 to 1,520 in 2004. Also in 2004, the Trust donated wagon rides on the Caldera as a prize for a charity auction. Other recreation. In 2003, the Trust added van tours, snowshoeing, cross-country skiing, bird watching, and stargazing lectures. In 2004, the Trust implemented an equestrian program, so that riders could transport their own horses to the Caldera for rides on designated trails. Over 200 riders participated. Also in 2004, the Trust added mountain biking, group tours and seminars, workshops, and overnight photo- and bird-watching excursions. The Trust is using a science-based adaptive management framework for the Caldera, which many believe to be a potentially effective approach to managing the land. Under this approach, the Trust will make land management decisions on the basis of scientific research and monitoring, taking into account the public’s views and federal environmental requirements. The foundation of this management approach is inventorying natural resources, monitoring environmental changes that result from the Trust’s programs, conducting research that will primarily help manage the Caldera’s resources, and complying with federal environmental requirements. Inventories. Little information was available about the Caldera’s resources when the federal government acquired the Caldera.
As a result, in 2001, the Trust—using volunteers and employees detailed from other federal agencies—began to inventory the Caldera’s vegetation and forest, wildlife and fisheries, geology, and other resources. Some of these baseline inventories have several components. For example, the wildlife inventory includes components by species, such as mammals, reptiles, and fish. Some inventory components have been completed, while others are still ongoing and are scheduled to be completed during 2007. Figure 6 shows the current inventory and monitoring locations on the Caldera. In addition, about 5 percent of the Caldera has been surveyed for cultural resources. As a result of recent surveys, 25 previously unknown historic properties have been discovered. For example, scientists have identified prehistoric sites showing evidence of toolmaking using obsidian. The cultural inventory is ongoing, and its completion date has not been established because future construction plans are uncertain. According to the Caldera’s cultural program coordinator, planned surveys can be delayed because of the need to survey areas slated for construction, such as roads. Monitoring. The monitoring program is intended to assess the impact that grazing, fishing, forest thinning, prescribed fire programs, and other activities have had on the Caldera. For example, the Trust is monitoring areas it has fenced along streambeds to prevent elk and cattle grazing in order to better understand the impacts of grazing on areas that are not fenced off. Figure 7 shows a fenced riparian area on the Caldera. The Trust is also monitoring the effects of natural and nonprogrammatic factors, such as changes in climate and species populations, especially nonnative populations. For example, as part of this program, the Trust established five weather stations to monitor rainfall, snowfall, wind, and temperature as well as five stations to monitor stream water quality. Research. 
The research program benefits both the management of the Caldera and public land management. For example, hydrological research funded by the National Science Foundation through the University of Arizona will provide information to aid in the day-to-day management of the Caldera and will also contribute to the understanding of hydrologic systems overall. This research will enable scientists to understand how much rain the Caldera’s lands absorb and predict the amount of runoff into streams and rivers. As more data become available, scientists can predict the impact of rain and drought on water quality and forage availability on the Caldera and use the information to drive future management decisions for grazing and recreation. Environmental compliance and public participation. The Trust must comply with the National Environmental Policy Act (NEPA), which requires federal agencies to assess the likely environmental impacts of any major actions they propose. If the agency determines that a proposed activity will significantly affect the quality of the human environment, it must prepare an environmental impact statement (EIS). An EIS specifies, among other things, the purpose of and need for the proposed action, its environmental consequences, and the comparison of alternatives to the proposal. Federal agencies, in addition to complying with the Council on Environmental Quality’s regulations for implementing NEPA, develop agency-specific procedures. Before the Trust adopted its NEPA procedures in July 2003, it used the Forest Service’s procedures to ensure NEPA compliance. Under the Forest Service procedures, the Trust categorically excluded interim fishing, hiking, road maintenance, and hazardous-fuel reductions from the general requirement to develop an environmental assessment or impact statement because it was determined that the actions would have no significant impact on the human environment. 
Under the Forest Service regulations, the Trust conducted environmental assessments of the interim grazing, noxious-weed eradication, and prescribed burns and did not find that these activities significantly affected the Caldera. The Trust expects to complete an environmental impact statement before establishing a permanent grazing program in 2007. The 2003 procedures are intended to efficiently and effectively implement NEPA and create a collaborative working relationship between the Trust and tribal governments, citizens, and federal, state, and local authorities. To obtain public views and to track and report the Trust’s land management actions, the Trust is developing an Internet-based system—the Stewardship Action Record System (StARS). Once functional (expected at the end of 2005), StARS will allow public review and comment on all actions taken and provide the public with opportunities to monitor the results of ongoing efforts. StARS proposals have been developed for public recreation, grazing, infrastructure development, research projects, and fire management. The Trust is also exploring ways to distribute information to the public and obtain comments without using the Internet. According to the President’s Council on Environmental Quality, the Trust’s NEPA procedures clearly integrate progressive NEPA compliance with principles of adaptive management and environmental management systems. The council also stated that the procedures allow for uncertainty in the decision-making process because actions are monitored and revised as more information becomes available. Despite the progress made, the Trust has much work to do to meet its mandated goals under the Preservation Act. 
Specifically, the Trust lacks (1) strategic and performance plans and programs to ensure that revenue streams are sufficient to achieve financial self-sustainability, (2) plans to minimize program risks from fire that could damage resources and legal liabilities that could result in catastrophic losses and reduced visitor use, and (3) mechanisms for monitoring progress in meeting its financial and other obligations, including annual audits and performance reporting. These shortfalls could be addressed through a more effective management control program, as envisioned in the Control and Results Acts. Frequent turnover of Board members and key staff also contributed to delays in implementing the components of an effective management control program. Without a more effective management control program, the Trust cannot adequately plan and implement programs or monitor progress toward meeting the mandated goals of the Preservation Act. The Trust has not developed strategic and performance plans as required under the Results Act. Specifically, it has not developed a strategic plan that not only outlines its mission and goals but also describes how it will achieve and revise its goals and objectives, how performance goals relate to the organization’s strategic goal framework, and how it will conduct program evaluations. In 2005, the Trust published its Valles Caldera National Preserve Framework and Strategic Guidance for Comprehensive Management. This document provides useful information about the history of the Caldera, background on the Trust, and general goals, but discusses issues in terms of possibilities and in a broad and philosophic manner instead of applying a methodical and analytical approach to strategic planning. Board members stated that they did not prepare a strategic plan because they believed that the NEPA compliance process had to be completed before they could publish a plan. 
However, agencies are not required to prepare an EIS prior to formulating a strategic plan. The Trust has not developed an annual performance plan with measurable goals for the activities it allows on the Caldera, which would help it determine whether it is accomplishing the overall strategic goals. For example, the performance plan could support the overall strategic goal to provide recreational opportunities by establishing annual measurable goals for the Trust’s recreation activities. An example of a measurable goal could be to increase public participation in hiking activities by 10 percent per year until the Trust has determined that the allowed level of hiking will not impair or damage the Caldera and is consistent with the other goals under the Preservation Act. The performance plan could also support the strategic goal to protect and preserve the Caldera, which could contain a measurable goal to restore and expand a specific number of wetland acres per year. However, the Trust has not agreed on the balance to strike between the activities that occur on the Caldera and their impact on the land in order to achieve its overall goals of resource protection, recreation, sustained yield management, and financial self-sustainability. To become financially self-sustaining by 2015, the Trust needs to generate enough revenue to pay for its operations and maintenance as well as infrastructure development costs. The Trust’s main revenue-generating activities are hunting, fishing, special events such as mountain biking, and grazing. Table 3 shows the revenue generated, by program activity, for fiscal year 2004. To date, however, the Board has not developed sufficient revenue streams to cover its program costs or developed performance goals for becoming financially self-sustaining. Specifically, managers estimated that the grazing program lost about $55,000 in 2004 but have not computed the gain or loss for other programs.
With total revenues of about $500,000 and total expenditures in excess of $5 million in fiscal year 2004, it is apparent that programs were operating at a loss. The Board does not plan to change the operation of revenue-generating programs until the Trust complies with NEPA. According to the Valles Caldera National Preserve Framework and Strategic Guidance for Comprehensive Management, the Trust considers the financial self-sustainability goal as one of many goals of equal priority. Furthermore, according to the framework, the Trust cannot set a date for achieving financial self-sustainability—established as a goal to be accomplished by 2015 in the Preservation Act—because its federal land stewardship obligations do not allow it to operate grazing and recreation activities at a level that puts natural resources at risk. Therefore, the framework states, it may be reasonable to continue appropriations to cover environmental stewardship costs, such as those for environmental assessments and resources inventories, while the balance of the Trust’s programs operate in a self-sustaining manner. While financial self-sustainability may not be attainable in the long run, we believe it is premature to assume that appropriations will continue to be needed after the Trust’s 15th year of operation—the time period established to achieve the goal of self-sustainability. Moreover, the Trust is directed to report to Congress in its 14th year if the achievement of self-sustainability by its 15th year is unrealistic. In the meantime, the Trust has an obligation to continue to develop a strategy and implement a plan to become financially self-sustaining. The Preservation Act also requires the Trust to report to Congress on how and when the Trust will become financially self-sustaining. That is, the Trust is to provide Congress with a schedule of decreasing appropriations that demonstrates how it will achieve financially self-sustaining operations by 2015.
Such a schedule should, at a minimum, quantify the annual appropriations as well as other projected revenue sources needed through 2015 and demonstrate that these sources of income will meet or exceed the expected program operations and maintenance costs during that time frame. However, the Trust has only presented the three-phased strategy shown in table 4 to achieve that goal. As the table shows, the Trust’s Schedule of Decreasing Appropriations does not include financial information to show how the appropriations will decrease each year. The Preservation Act also authorized the Trust to solicit and accept donations of funds, property, supplies, and services. The Trust has received some donations, primarily volunteer labor. Through 2005, cash donations totaled about $56,000, $50,000 of which was earmarked to pay the salary of a full-time employee to coordinate volunteer efforts. However, the Trust has not developed a plan for outreach to philanthropic organizations. For example, charitable organizations supporting national parks have been established to solicit donations to help support park needs. The Trust has discussed this option but has not actively pursued it. The Trust has not addressed program risks, including fire and legal liabilities that could undermine its ability to meet its financial obligations. The Trust completed a fire management plan in 2004 that adopts, by reference, the federal National Fire Plan. According to the National Fire Plan, agencies need a fire management plan to outline a decision-making process for responding to naturally occurring fires. Such a plan lays out the conditions under which fires must be suppressed or allowed to burn to benefit resources. The Caldera’s plan, however, has not addressed fire management to benefit resources, only the management of prescribed fires. Without a plan to manage fires for resource benefits, all naturally occurring fires on the Caldera must be suppressed, and suppression can be costly. 
For example, in May 2005, a fire on the Caldera burned about 82 acres before being suppressed—at a cost of about $338,000. In the opinion of the Forest Service Region 3 Fire Manager, this fire could have been left to burn because it did not threaten any key resources or public infrastructure. Extended periods of drought and high fire risk in northern New Mexico could easily deplete the Caldera’s financial resources because suppression costs are high. The Trust does not have liability coverage to protect against injuries on the Caldera because it was uncertain whether it could acquire such insurance using appropriated funds. Moreover, as a government corporation, the Trust did not believe it could access the federal judgment fund, a fund in the U.S. Treasury used for the payment of final judgments against the United States. This lack of liability coverage and uncertainty led the Trust to take a cautious approach to implementing programs and increasing public access. According to the Board, in June 2005 the Trust clarified these issues with its legal counsel, who determined that legislation might be necessary to access the judgment fund but that it could use its own funds to purchase liability insurance. The Trust has yet to develop the mechanisms needed to monitor progress in meeting its financial and other obligations under the Preservation, Control, and Results Acts. These mechanisms include an annual financial audit to ensure the credibility of reported financial information and an annual performance report that describes progress toward achieving its annual performance goals. Without these mechanisms, the Trust, Congress, and other stakeholders cannot determine whether the Trust is on a course to meet all of its goals. Annual Audits. The Control Act requires annual financial audits for government corporations’ financial statements by an independent, external auditor selected by the head of the corporation. 
The results of the audits are to be reported to the head of the government corporation and to Congress. The Board has yet to conduct an audit because it has not produced auditable financial statements. In 2003, the Trust contracted with an independent accounting firm for auditing services, including an audit of the (1) statement of financial position of the Trust and (2) related statement of activities and cash flows, as of September 30, 2003. However, according to the Trust’s former business manager, the audit firm recommended that the audit be postponed until 2004 since the Trust’s financial records had only recently been established on the new financial system operated by the National Business Center. The Trust agreed with this recommendation. As of October 2005, the Trust had not contracted with an independent firm to audit its annual financial statements. Also in 2003, the Trust contracted with another firm to review the payroll process and controls for each of the revenue sources and to recommend improvements. According to Trust managers, in fiscal year 2005, financial policies and procedures were still not in place and financial statements had not been produced. The managers told us that they were in the process of establishing management controls and attempting to reconstruct prior years’ expenditures in preparation for their first external audit. Annual Performance Reports. The Control, Results, and Preservation Acts require the Trust to report annually to Congress on certain aspects of its operations. Collectively, these acts require a statement of financial position, a statement of operations, a statement of cash flows, reconciliation to the budget report, a management controls statement, a report on a financial statement audit, and reports on annual performance. 
The annual reports to Congress the Trust has prepared for fiscal years 2001 through 2004 under the Preservation Act describe the interim programs the Trust has implemented and summarize the prior years’ accomplishments. The Trust may not have been able to prepare annual reports to Congress that address requirements of the Control Act partly because the Trust has not produced a budget report or financial statements. In addition, because the Trust has not developed annual performance plans with performance goals, it has not produced a performance report required by the Results Act. Effective management of an organization’s workforce—its human capital—is essential to achieving results and an important part of internal control. Operational success is possible only when the right people for the job are on board and are provided the right training, tools, structure, incentives, and responsibilities. Management should ensure that it obtains a workforce with the required skills that match those necessary for achieving organizational goals. As part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. Excessive or unexpected turnover of staff can indicate problems with an organization’s management control program and contribute to delays in implementing programs needed to achieve established goals. Throughout its short history, the Trust has experienced significant turnover among Board members and staff. According to the Preservation Act, three of the initial Board members are appointed for 2 years, while four other Board members are initially appointed for 4-year terms. All subsequent appointments to these positions are for 4 years. At the end of the first 2-year term, the Board operated for about 5 months before the President appointed replacements.
In January 2005, four more Board members completed their terms, and the Board operated for 4 months before the President appointed three of the four replacements. As of October 2005, the President had not appointed anyone for the fourth position, which has now been vacant for about 10 months. The Trust has also experienced high turnover among key staff. The Trust’s first executive director served 18 months, resigning as director in March 2004. The position remained vacant for about 7 months while the Trust searched for a replacement. Although this position was filled in October 2004, the executive director resigned after 10 months of service. Other key positions became vacant in 2004 and 2005, including the Trust’s controller, business manager, programs director, chief administrative officer, communications manager, and cultural program coordinator. As of October 2005, the executive director, programs director, chief administrative officer, cultural resources coordinator, and geospatial information systems coordinator positions remained vacant. The business manager position was abolished. Table 5 shows the turnover of key Trust staff in 2004 and 2005 and the current status of these positions. According to some stakeholders we spoke with, the turnover of Board members and other key staff has contributed to the Trust’s inability to develop a strategic and performance plan with measurable goals and objectives as well as to delays in implementing programs. For example, the NEPA environmental assessment related to the grazing program was postponed when four Board members completed their 4-year terms in January 2005. Some staff stated that the lack of consistent leadership and the lack of progress in organizational and program development have contributed greatly to staff turnover.
To meet the mandated management goals of the Caldera, the Trust faces multiple challenges—balancing conflicting goals and objectives for resource development and use with preserving and protecting these resources for sustained future recreational enjoyment of the Caldera. While the Trust has made some progress in achieving its mandated goals, its further progress is in doubt because it has not developed a well-defined management control program, which is collectively encompassed in the mandates governing the Caldera’s operations. Such a program would include strategic and performance plans, measurable goals and objectives and monitoring plans, annual performance reports, and a strategy for achieving financial self-sustainability. These mechanisms would help provide greater accountability for achieving results and enhancing decision making. Furthermore, an effective management control program—to include human capital initiatives designed to retain needed skills and provide timely replacement of lost skills—can ease the effects associated with turnover in the Board and staff. Achieving financial self-sustainability by 2015 is only one of many goals and objectives set forth in the Preservation Act, but it is key to the Trust’s success in managing and operating the Caldera without federal funds. The Trust assumes that it may have to continue to rely on federal funding after 2015, but this assumption is premature because the Trust has not focused on the actions it needs to take to become self-sustaining, such as expanding or establishing new revenue-generating programs or identifying other nonfederal revenue sources (donations). Furthermore, without developing programs to minimize risks associated with implemented programs, the Trust cannot manage the uncertainty surrounding liability and fire suppression costs, which could undermine its efforts to achieve financial self-sustainability. 
Finally, without an independent financial statement audit, the Trust cannot demonstrate to Congress and other stakeholders that it is developing a sound financial base and that reported financial information is credible. To help ensure that the Trust meets its goals under the Preservation Act and to improve management oversight, accountability, and transparency under the Control Act and the Results Act, we are making the following seven recommendations to the Valles Caldera Board of Trustees. To establish a more effective management control program, we recommend that the Board develop a strategic and performance plan that identifies measurable goals and objectives for protecting and preserving the Caldera, providing recreation, sustaining yield, and becoming financially self-sustaining; a plan for becoming financially self-sustaining that includes financial information detailing how and when the Trust will try to achieve this goal; mechanisms for periodic performance monitoring and reporting, including annual performance reports that enable Congress and the Trust to track progress in achieving the Trust’s program goals and objectives; and a plan for the timely replacement of key personnel. To increase accountability to Congress and other stakeholders, we also recommend that the Board obtain the annual financial statement audit for 2005, provide a status report or the auditor’s final opinion on the Trust’s financial condition in its January 2006 annual report to Congress, and arrange to conduct future annual financial audits in a timely manner. We provided the Valles Caldera Board of Trustees with a draft of this report for review and comment. The Board provided written comments that are included in appendix I. The Board generally agreed with the accuracy of the findings, validity of the conclusions, and soundness of the recommendations. It also provided additional insights into four specific areas. 
First, it stated that it has, over the last several years, engaged in extensive strategic planning sessions to lay the foundation for developing more detailed operating plans once the highest-level strategic planning work is completed and sufficient experience has been obtained in conducting interim programs. It also said that in 2005 it adopted a set of strategic goals whose achievement is both measurable and time-specific. We acknowledge that in 2005 the Board announced four broadly stated strategic goals. However, as stated in the report, the Trust has not developed strategic and performance plans that include all required elements of the Results Act, which provides a methodical and analytical approach to strategic planning. Second, with regard to risks posed by fire and legal liabilities, the Board said it was in the process of completing a Fire Use Plan that addresses the use of management-ignited fire as well as the use of fire originating from natural ignitions and that Congress has adopted legislation to provide the Board with access to federal fire suppression funds. These actions will enhance the Board’s ability to evaluate natural ignitions and apply the appropriate management response. We agree that it is important to complete the Fire Use Plan, which, according to the Board, will be finalized by May 2006. As mentioned in the report and in the Board’s comments, having a sound fire management plan will provide greater assurance that proper management actions are taken in the event of a wildland fire. The Board also said it has pursued clarification of whether the Board can access the federal government’s judgment fund and has obtained a legal opinion from an independent firm that concludes that legislation might be necessary to access the judgment fund but that the Trust could use its own funds to purchase liability insurance.
We revised the report to include this clarification and agree that the Trust should use its own funds to purchase liability insurance. Third, the Board agreed that the timely appointment of Board members and the management of its human resources are essential to achieving positive results. In this regard, it mentioned that it has revised its bylaws to effect the orderly transition of Board members, so as to mitigate the impact of possible delays in the appointment process. It also said it had adopted a new organizational structure and performance review process for employees. The change in the Board’s bylaws allows the Board to make important decisions even when appointment delays leave it without a full membership. We agree this is important given that the replacement of Board members is outside its control. The Board also recognized that it has experienced a relatively high level of Trust employee turnover, which it said has “occurred in a constructive fashion, void of grievances or formal complaints.” Nonetheless, the fact remains that high turnover, particularly of key employees, can cause disruption to an organization and affect its ability to accomplish established goals. As the report states, an effective management control program has a process in place for the timely replacement of key personnel lost to turnover. Establishing such a process is within the purview of the Board and important to effective management of an organization’s human capital. Finally, the Board stated that while the Trust was fully committed to the goal of financial self-sustainability, the Trust recognizes that to attain that goal, it needs to streamline required federal overhead stemming from compliance with federal laws and statutes and to control administrative and operating costs. It said it was committed to developing the management plans required for conversion of financially attractive programs to regular status and acquiring capital resources.
Furthermore, it stated that the Board takes the goal of financial self-sustainability seriously and on par with other provisions of the Preservation Act and thus disagreed with the implication in the report that it did not consider the financial self-sustainability goal as a priority. Because the Board’s published management framework, entitled Valles Caldera National Preserve Framework and Strategic Guidance for Comprehensive Management, states that the Trust considers the financial self-sustainability goal as only one of many goals, to avoid ambiguity, we revised the report’s language to state that the Trust considers the self-sustainability goal as one of many goals of equal priority. Regardless, the seriousness of the Board’s actions in addressing this goal would, as mentioned in the report, be further enhanced by demonstrating a more aggressive approach to identifying additional revenue sources that would help the Trust come closer to achieving financial self-sustainability. We obtained and analyzed information from the Trust on its activities, relevant laws, regulations, program documents, and related materials, and met with Trust officials responsible for major activities, such as recreation, resource inventorying, construction, and financial management. Since the Caldera was initially under the management of the Forest Service, we interviewed Forest Service officials and reviewed available documentation supporting activities undertaken during this time. We also visited the Caldera to observe the actions taken to date toward meeting the Trust’s statutory goals. The financial statements of the Trust have not been independently audited. We conducted limited testing of these data and discussed these data with key Trust officials. We describe issues related to these financial data in the body of this report.
We assessed and determined that the nonfinancial data, such as participation in recreation activities and levels of livestock grazing, were sufficiently reliable for the purposes of this report. We conducted our work from January 2005 through October 2005 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

1. Future generations. Administer the Preserve with the long view in mind, directing efforts toward the benefit of future generations.
2. Protection. Recognizing that the Preserve imparts a rich sense of place and qualities not to be found anywhere else, commit to the protection of its ecological, cultural, and aesthetic integrity.
3. Integrity. Strive to achieve a high level of integrity in the stewardship of the lands, programs, and other assets in the Trust’s care. This includes adopting an ethic of financial thrift and discipline and exercising good business sense.
4. Science and adaptive management. Exercise restraint in the implementation of all programs, basing them on sound science and adjusting them consistent with the principles of adaptive management.
5. Good neighbor. Recognizing the unique heritage of northern New Mexico’s traditional cultures, be a good neighbor to surrounding communities, striving to avoid negative impacts from Preserve activities and to generate positive impacts.
6. Religious significance. Recognizing the religious significance of the Preserve to Native Americans, the Trust bears a special responsibility to accommodate the religious practices of nearby tribes and pueblos, and to protect sites of special significance.
7. Open communication. Recognizing the importance of clear and open communication, commit to maintaining a productive dialogue with those who would advance the purposes of the Preserve and, where appropriate, to developing partnerships with them.
8. Part of a larger whole. Recognizing that the Preserve is part of a larger ecological whole, cooperate with adjacent landowners and managers to achieve a healthy regional ecosystem.
9. Learning and inspiration. Recognizing the great potential of the Preserve for learning and inspiration, strive to integrate opportunities for research, reflection, and education in the programs of the Preserve.
10. Quality of experience. In providing opportunities to the public, emphasize quality of experience over quantity of experiences. In so doing, and in reserving the right to limit participation or to maximize revenue in certain instances, commit to providing fair and affordable access for all permitted activities.

In addition to the contact named above, Roy Judy, Assistant Director; Christine Bonham; Doreen Feldman; Lisa Knight; Tom Kingham; Julian Klazkin; Allen Lomax; Cynthia Norris; Judy Pagano; Dawn Shorey; Carol Herrnstadt Shulman; and Maria Vargas made key contributions to this report.

In 2000, Congress authorized the purchase of the Valles Caldera (the Caldera) in north-central New Mexico. The Valles Caldera Trust (Trust), a wholly owned government corporation, is to become financially self-sustaining and to manage the Caldera for multiple purposes while sustaining the land’s valuable natural resources. GAO was mandated to assess the progress the Trust is making in meeting its statutory goals.
The Trust has made progress in meeting its goals to preserve and protect the Caldera for future generations as well as to provide for public recreation and sustained yield management. Specifically, it has (1) established a basic organization with about 25 staff; (2) drafted policy and procedures and contracted with the Department of the Interior's National Business Center for accounting services; (3) begun engineering and construction efforts to address infrastructure problems--roads, water systems, fences, and buildings; (4) established interim grazing and recreation programs; and (5) implemented an adaptive management approach that focuses on making management decisions based on scientific data. The Trust, however, still has much work to do to meet its goals, including achieving a financially self-sustaining operation. In particular, the Trust has yet to develop the following:

Strategic and performance plans with measurable goals and objectives: For example, the Trust must decide on the level of activities (e.g., grazing, hiking, and hunting) that will be allowed without seriously harming the land's resources, and yet will still provide sufficient recreational activity and sustained yield management. The Trust also must select additional opportunities for generating revenues, such as securing private donations.

Plans to manage program risks: The Trust has not addressed program risks, including fire and legal liabilities. For example, the Trust lacks a fire plan, which would outline a decision-making process for responding to fires, and has not obtained liability coverage. Because it did not have a fire plan, the Trust spent about $338,000 in May 2005 to suppress a fire, which, in the opinion of the Forest Service Region 3 Fire Manager, could have been left to burn because the fire did not threaten any key resources or public infrastructure. Also, because it has not obtained liability coverage, the Trust has restricted the number of Caldera visitors.

Mechanisms for monitoring progress: Among other things, the Trust has not had annual financial audits and has not prepared performance reports that would help it assess its progress toward meeting its financial and other goals.

The Trust's efforts to raise the revenues needed to bring it closer to meeting its financial self-sustainability goal could be undermined by one or more of these issues. Frequent turnover in Board members and key staff has contributed to the problems experienced to date.
Congress has authorized two different models for governing financial regulatory agencies: a single director or a board. Among financial regulators, single directors head the Office of Thrift Supervision (OTS), the Office of the Comptroller of the Currency (OCC), and the Office of Federal Housing Enterprise Oversight (OFHEO). In contrast, boards or commissions run FHFB, the Fed Board, NCUA, SEC, CFTC, FDIC, and FCA. Advantages and disadvantages exist for both models. In the single-director model, the director is responsible for making all the decisions at the agency, without the potential hindrance of having to consult or get the approval of board members. The primary advantage of the board model is that it provides the potential to benefit from the diverse perspectives and experiences of board members. However, one potential disadvantage of the board model is that consultation among board members could create inefficiencies in running the agency. To overcome the potential inefficiencies associated with the board model, responsibilities for policy and day-to-day administration are divided between the board and the chair at many regulatory agencies. Policy decisions include making rules and regulations or authorizing enforcement actions. Day-to-day administration might include directing staff, overseeing safety and soundness examinations, and expending funds as authorized by the board. Administrative responsibilities that are often considered to be more significant (that is, not day-to-day) include the hiring and removal of senior officials and restructuring the agency. FHFB has a five-member board of directors. The Secretary of the Department of Housing and Urban Development (HUD) serves as an ex officio member, and the remaining four full-time directors are appointed by the President with the advice and consent of the Senate for 7-year terms.
Each of the four appointed directors must have experience or training in housing finance or a commitment to providing specialized housing credit. Not more than three of the five members can be from the same political party. The President designates one of the four appointed members to serve as chair. As discussed in this report, since 1990 the board has operated under a resolution that delegated most administrative functions to the chair. The FHFB board operated with all appointed members serving on a full-time basis for the first time in December 2001. From 1990 to 1993, the board operated with four appointed members who served on a part-time basis. Beginning in January 1994, FHFB board membership became full time. However, from 1994 through 2001, the board operated with at least one vacant seat and sometimes two or three. In December 2001, the President appointed FHFB’s new chair, and the board operated with four full-time appointed members plus the HUD designee throughout 2002. As of September 2002, FHFB’s 104 staff members were organized into four program offices: Office of Supervision (OS) – The office is responsible for conducting on-site examinations of the FHLBanks and the FHLBank System’s Office of Finance and conducting off-site monitoring and analysis. OS is also responsible for overseeing the FHLBanks’ implementation of their risk-based capital plans. In addition, OS is responsible for providing expert policy advice and analyzing and reporting on the economic, housing finance, community investment, and competitive environments in which the FHLBank System and its members operate. Office of General Counsel (OGC) - The General Counsel is FHFB’s chief legal officer and is responsible for advising the board, the chair, and other officials on interpretations of law and regulation. OGC prepares all legal documents on behalf of FHFB and prepares opinions, regulations, and memorandums of law.
The office represents FHFB in all administrative adjudicatory proceedings before the board and in all other administrative matters involving FHFB. Also, OGC represents FHFB in judicial proceedings in which the agency’s supervisory or regulatory authority over the FHLBanks is at issue. Office of Management (OM) - The OM director is the principal advisor to the FHFB chair on management and organizational policies and is responsible for the agency’s technology and information systems, finance and accounting, budget, personnel, payroll, contracting and procurement, and facilities and property management. Office of Inspector General (OIG) - OIG is responsible for conducting and supervising audits and investigations of FHFB’s programs and operations. The costs of FHFB’s operations are financed through assessments on the FHLBanks. In fiscal year 2003, FHFB’s operating budget was about $27 million. The FHFB chairs’ authority to administer the agency is broader than that of the chairs of the other financial regulators included in our review, with the exception of the FDIC. Under a delegation of authority, the chair can make important administrative decisions that may have policy implications (such as appointing senior officials) without obtaining the approval of other board members. Over the years, some FHFB board members have complained that the delegation of authority allows the chair to act unilaterally, and it has been the source of disputes among board members. On January 29, 2003, the FHFB board considered and rejected by a 3 to 2 vote a proposal to revise the delegation and limit the chair’s authority. The FHFB and FDIC chairs have broader authority to make key administrative decisions than the chairs of other financial regulators (see fig. 1). Specifically, at FHFB and FDIC, the chairs can appoint senior officials without a board vote or approval. 
At each of the other financial regulators we reviewed, appointments of most senior officials require a vote or the approval of a majority of the board. However, in some cases, agency chairs can appoint Schedule C officials to run certain staff offices, which is discussed in more detail later in this report and in appendix II. We also note that at some agencies, such as CFTC, the chair or other senior career agency officials appoint staff responsible for carrying out the agencies’ functions. As also shown in figure 1, at four of the regulators we reviewed, including FHFB, the chair can reorganize the agency without seeking board approval. While CFTC officials said that the chair has authority to reorganize the agency, the practice has been to submit such proposals to the commission for a vote. At three agencies, major reorganization proposals must be submitted to the board or commission for a vote or approval. For example, the Fed Board has a two-tier process by which reorganizations that meet specific criteria may require the approval of the entire board. At FCA, the board must approve major organizational changes, but the chair has the authority to make organizational changes within particular units. The basis for the FHFB chair’s significant administrative power is a delegation of authority approved by the board in 1990 and 1993. According to former FHFB Chair Dan Evans, the 1990 delegation of authority facilitated the administration of the agency due in part to the fact that board members served on a part-time basis. According to FHFB’s former managing director, who served under Evans, the agency’s part-time board members spent most of their time in geographic locations across the United States and came to Washington several days each month to conduct the agency’s business, particularly policy issues.
According to Evans and the former managing director, the 1990 delegation facilitated the administration of FHFB because convening the part-time board members for administrative decisions was challenging. The 1990 delegation of authority authorizes the chair to “. . . effect the overall management, functioning, and organization . . .” of the FHFB. Although FHFB’s statute authorizes the board to employ and set the compensation of agency staff, the delegation of authority ceded appointment, removal, and pay authorities to the chair. The delegation of authority included a provision that allowed board members to challenge decisions made under the delegation, obligating the chair to call a special session of the board to consider any matter or business at the request of any two or more board members. In November 1993, FHFB’s part-time board made technical revisions to the 1990 delegation of authority that allowed the HUD secretary to serve as the chair in the absence of a chair or vice chair. Otherwise, the terms of the 1993 delegation are substantially similar to the 1990 delegation and grant significant administrative authority to the FHFB chair (see fig. 2). According to a 1996 FHFB OGC memorandum that discusses the basis for the delegation, and according to FHFB’s former managing director, some of the part-time board members did not continue on the board as full-time members. The FHFB memorandum states that the part-time board members were concerned that the agency would not be able to function in the absence of the chair and other board members. The 1993 delegation of authority has remained in effect because it has not been overturned by a majority vote of the board. The FDIC board has also voted to give significant administrative authority to its chair through its bylaws and a delegation of authority. Through its bylaws, the FDIC board delegated certain appointment authority as well as reorganization authority to the chair.
On January 29, 2002, the board members voted unanimously to delegate additional authority to the chair. The delegation expanded the chair’s authority to appoint senior officials without a board vote. In contrast to FHFB’s delegation of authority, FDIC’s delegation expires when the current chair leaves office and the rules for administrative decision making revert to the rules in place prior to the revised delegation. Disagreements among board members about the chair’s use of the delegation of authority to make unilateral administrative decisions have historically caused tensions between the chair and other board members. In a recent example of the disputes among FHFB board members, Democratic members stated that the current Chair did not consult them in any significant way prior to announcing a major agency reorganization on August 7, 2002 (the specifics of the reorganization are discussed later in this report). As in 10 previous FHFB reorganizations under the delegation of authority, the Chair did not seek board approval. According to the Chair, he notified other board members about the key points of the reorganization several weeks prior to the announcement. However, other board members have stated that they were not involved in the planning of the reorganization and did not receive details about the reorganization until it was announced. For example, at the September 2002 board meeting, one member stated “. . . we’ve just had a major restructuring that wasn’t done by the board, that there wasn’t advance notice, which had a real impact on the office.” FHFB board members who served under former Chair Morrison also stated that he used the delegation of authority to exclude other board members from key administrative decisions. For example, one board member stated that Morrison appointed senior officials and reorganized the agency without any consultation.
The board member stated that he disagreed with these decisions and believed that they undermined FHFB’s regulatory effectiveness. Morrison said that his actions were consistent with the administrative powers authorized to the chair under the delegation. Morrison also said that he met frequently with other board members to explain his actions and that other board members never called special board meetings to question his decisions, as permitted under the delegation. FHFB board members have also complained that chairs have used their delegated authority as the basis for unilateral actions on policy, which is the responsibility of the board as a whole. For example, two board members said that the current Chair acted unilaterally in selecting FHLBank public interest director candidates in 2002 and had minimal consultation on these selections with other board members. In past years, FHFB approved public interest director candidates by notational vote. In 2002, these two board members requested that the vote on the candidates take place in an open meeting, and they expressed their concerns at this public meeting about not having been consulted. FHFB officials said that the current Chair has initiated actions to improve the selection of public interest directors. In particular, the Chair developed new criteria governing the appointment of public interest directors. The new criteria require public interest directors to have an understanding of such issues as finance, political awareness, and corporate governance. On January 29, 2003, the FHFB board voted unanimously to approve the appointment of 28 public interest directors. Disputes about the FHFB’s powers under the delegation of authority also took place during Chair Morrison’s tenure, between 1995 and 2000. For example, in a letter sent to Members of Congress, a former board member alleged that “Mr. 
Morrison has used and expanded the delegation of authority to unilaterally implement his policy objectives by thwarting Board consideration of issues where there may be disagreement with the Chairman by the independent directors.” Morrison said that his decisions under the delegation were proper and did not stray into policy matters reserved for the board. On January 29, 2003, while a draft copy of this report was with FHFB for official comment, the board debated and rejected by a 3 to 2 party-line vote a proposal to revise the existing delegation of authority and limit the chair’s administrative authorities. FHFB’s Chair placed the proposal on the agenda for the meeting at the request of the agency’s two Democratic board members. Although FHFB board members’ staff said that they exchanged proposed language to revise the delegation of authority prior to the board meeting, they did not engage in substantive discussions over the proposal during that period. The proposed revisions to the delegation discussed at the January 29 board meeting would have allowed the FHFB board to approve the appointment of the agency’s office directors and reorganizations down to the office level. A board member who proposed the revision said that the current delegation had been “misused” by FHFB chairs and used as a basis to usurp the policy-making responsibilities of the board. Among other statements, FHFB’s Chair denied that he had “misused” his authority under the delegation and stated that the delegation was appropriate, among other reasons, because organizations need a single individual to direct operations to ensure efficient administration. On August 7, 2002, the FHFB Chair announced a major reorganization, and the agency sent RIF notices to nine staff members.
Although FHFB provided significant financial compensation and career transition services to affected employees, certain FHFB actions in connection with the RIFs do not appear fully consistent with federal age discrimination statutes, regulations, or court decisions. We have informed EEOC of our findings in this area. In addition, FHFB placed each of the affected staff on administrative leave during the 60-day advance notice period (the period from the RIF notification on August 7 until actual separation from federal service). While OPM regulations require federal agencies to keep employees on active-duty status during the advance notice period, FHFB officials said the agency had statutory authority to place the staff on administrative leave. According to the FHFB Chair and Director of Management, the August 2002 reorganization was focused on improving supervision of the FHLBank System. Through a review of the organizational structure of FHFB, the Chair concluded that the agency dedicated too few resources to FHLBank supervision and too many resources to support functions and public and congressional relations. Accordingly, the Chair decided to eliminate the Office of Managing Director and the Office of Communications and merge OS with the Office of Policy, Research, and Analysis (OPRA) (see figs. 3 and 4). The Chair also decided to shift resources and positions from the eliminated offices to OS. In addition, FHFB changed the title of the Office of Resource Management to the Office of Management. The Chair and the Director of Management assumed responsibility for the day-to-day administrative duties formerly carried out by the Managing Director and, as is discussed later in this report, the Chair’s personal staff assumed responsibility for the Office of Communications’ public and congressional affairs functions.
As part of the reorganization, FHFB notified nine employees that they were subject to the RIF and that they would be separated from the federal service in 60 days (referred to as the advance notice period). To minimize the effect on the employees, FHFB hired an outplacement firm to help them prepare resumes and develop job search strategies. FHFB also notified each employee that he or she would receive federal severance and accrued annual leave benefits. Further, FHFB presented each of the affected employees with a “Negotiated Settlement Agreement” that offered 3 to 6 months’ salary (depending upon employment status) in exchange for an agreement not to file any administrative actions or lawsuits against the FHFB, its chair, directors, or employees in connection with the employees’ employment with the agency or involuntary separation. According to documentation provided by FHFB, the agency gave the affected employees 47 days to decide whether to sign the settlement agreement. According to FHFB officials, eight of the nine affected employees signed the settlement agreements. FHFB’s settlement agreements included provisions that waived employees’ rights to file lawsuits based on the Age Discrimination in Employment Act (ADEA), as amended. Waivers of rights under ADEA are valid and enforceable only if the waiver is knowing and voluntary, and courts have generally required employers to strictly comply with ADEA standards regarding waivers. Although FHFB took steps to comply with ADEA and EEOC regulations, certain provisions in the settlement agreements are not consistent with requirements. First, the settlement agreements required that employees waive their rights to file complaints, charges, or appeals with EEOC, which is not consistent with statutory and regulatory requirements. Second, FHFB did not advise each affected employee in writing to consult an attorney prior to signing the agreements and waiving his or her ADEA rights.
Third, FHFB did not provide required information to the affected employees to assist them in determining whether to waive their rights under ADEA. The Older Workers Benefit Protection Act (OWBPA) includes detailed provisions that deal with the validity of releases and waivers under ADEA, and sets forth minimum requirements for a knowing and voluntary release of claims under the ADEA. EEOC regulations implementing OWBPA apply to waivers of rights and claims under ADEA, and the regulations provide specifically that they apply to all waivers of ADEA rights and claims, regardless of whether the employee is employed in the private sector or public sector, including federal employment. The statute and regulations require that the agreement be in writing, refer specifically to claims under ADEA, be given in exchange for consideration that is above and beyond any benefit to which the employee is already entitled, and give the employee adequate time to consider the waiver before signing it. The settlement agreements appear designed to comply with several of these requirements. For example, employees were given 47 days to consider the waiver and the settlement agreement refers specifically to claims under ADEA. As required by ADEA, the settlement agreements also provided affected employees over 40 years of age 7 days after signing the agreement to revoke the agreement. In addition, the settlement agreements included payments above and beyond what the employees were entitled to receive by statute. That is, FHFB agreed to pay each affected employee 3 to 6 months’ salary in exchange for signing the settlement agreement. However, a provision in the settlement agreements requiring employees to waive their rights to file charges, complaints, or appeals with EEOC does not appear to be consistent with OWBPA requirements and EEOC regulations. OWBPA provides that “No waiver agreement may affect the Commission’s rights and responsibilities to enforce this chapter.
No waiver may be used to justify interfering with the protected right of an employee to file a charge or participate in an investigation or proceeding conducted by the Commission.” The EEOC regulations provide that “no waiver agreement may include any provision prohibiting any individual from . . . (i) Filing a charge or complaint, including a challenge to the validity of a waiver agreement, with EEOC, or (ii) Participating in any investigation or proceeding conducted by EEOC.” The settlement agreements also do not appear consistent with OWBPA requirements and EEOC regulations that require employers to notify employees in writing to consult with an attorney prior to agreeing to waive their rights under ADEA. The courts have determined that employers must specifically advise employees to consult an attorney. In FHFB’s settlement agreements, the relevant provision states that the “employee understands that he has had the opportunity to contact a representative of his choice to discuss the terms and conditions of this Negotiated Settlement Agreement. . .” An FHFB attorney stated that agency officials pointed out this statement in the settlement agreements to the affected employees. However, the settlement agreement does not advise the employees in writing to consult an attorney before agreeing to waive their rights under ADEA, and FHFB did not provide any other written advice for employees to consult an attorney. In addition, FHFB did not provide information to the affected employees as is required under OWBPA and EEOC regulations. Employers that offer additional benefits to a group of involuntarily terminated employees in exchange for a waiver of claims under ADEA must satisfy additional requirements. These employers must provide detailed written information to employees describing the group termination program, including a listing of the job titles and ages of the employees selected for the program, and similar information for individuals who were not selected.
This information is designed to permit older workers to make more informed decisions concerning waiver of ADEA rights. FHFB’s settlement agreements were part of a group termination program (e.g., a RIF). However, FHFB officials said that while the names of the terminated employees were provided, written information on the job titles and ages of all employees who were offered the settlement agreement was not provided. According to FHFB, the EEOC regulations do not require that this information be provided if the employer decides to eliminate all of the positions in a particular unit, as FHFB did with respect to the Office of the Managing Director and the Office of Communications. However, OWBPA and EEOC regulations do not distinguish between situations where employers terminate selected positions in a particular unit and others where all positions are terminated. Employers are required to provide information on employee ages and job titles under either circumstance. FHFB restricted the access of employees subject to the RIF to the agency’s headquarters during the 60-day advance notice period—the period from the RIF notification on August 7, 2002, until actual separation from federal service—and placed them on administrative leave. FHFB’s decision to restrict staff access during the advance notice period was not consistent with OPM regulations, but FHFB officials said that the agency had statutory authority to take this action. OPM regulations that apply to RIFs state that, when possible, employees should remain in active-duty status during the advance notice period. When in an emergency the agency lacks work or funds for all or part of the notice period, it may place employees on annual leave with or without their consent, leave without pay without their consent, or nonpay status without consent. While no statute governs the use of administrative leave, OPM regulations and federal administrative decisions have established standards for its use. 
These regulations and decisions have permitted agencies, in certain situations, to excuse an employee for brief periods without a loss of pay. However, agencies generally may not place employees on administrative leave for long periods unless their absence furthers an agency’s mission. FHFB officials said that the agency’s authorizing statute provides authority to place employees on administrative leave during the advance notice period because the statute allows the agency to set the compensation of its employees without regard to the statutes affecting other agencies. In the officials’ view, all forms of leave, including administrative leave, are forms of compensation, and therefore the agency was authorized to place the affected staff on administrative leave. Further, FHFB officials said that (1) placing the staff on administrative leave allowed them to take full advantage of the job placement services that the agency offered and (2) requiring the employees to report to the agency during the advance notice period when there was insufficient work for them to do was not cost effective. Although FHFB’s statute provides broad authority to set compensation of its employees, we note that the scope of FHFB’s authority and whether it appropriately supersedes OPM’s RIF regulations have not been established. Although we identified weaknesses in FHFB’s examination program in a 1998 report, FHFB did not address these weaknesses, and they persisted for several years. In August 2002, FHFB announced plans that could significantly improve its examination program and more than double the number of examiners. However, because FHFB has just started to revise its examination program, it is too early to evaluate the effectiveness of these plans. Our 1998 report identified limitations in FHFB’s examination program, which raised questions about the agency’s ability to help ensure that FHLBanks operate in a safe and sound manner.
For example, the report found that FHFB examiners did not thoroughly review FHLBank internal control systems. Internal controls are defined as arrangements, such as procedures, organization structure, and technical methods, designed to provide reasonable assurance that (1) assets are protected from unauthorized use or disposition; (2) transactions are in compliance with law, regulation, FHFB policy, and the policy directives of the FHLBank’s director and management; and (3) financial reporting is accurate. According to our report, FHFB examinations stated in September 1996 that internal control reviews were “limited.” FHFB officials cited the limited number of examiners, 8 to 10 individuals, as one explanation for not conducting thorough internal control evaluations. From 1998 through 2001, FHFB did not develop an examination program to ensure that each FHLBank had established an adequate internal control system. We reviewed all 36 FHFB bank examinations conducted from 1999 through 2001. Each of the 36 examinations stated that the review of internal controls was “limited in scope and did not involve a comprehensive review of the entire system of controls.” As of late July 2002, FHFB had 10 examiners, or the same number as in 1998. Moreover, from 1998 through 2002, direct mortgage acquisition programs added risks to the FHLBank System, and the FHLBanks developed increasingly complex approaches to manage these risks. Further, our 1998 report noted that FHFB examination workpapers did not adequately document corporate governance reviews or indicate that such reviews were conducted. Board of director and management oversight are essential elements of the corporate governance of financial institutions and financial and other risk management. At the September 2002 FHFB board meeting, discussion among board members suggested a concern about the lack of emphasis on corporate governance in the examinations.
One board member stated that he believes the FHLBanks’ corporate governance is “uneven” and that FHFB’s examinations have not devoted sufficient attention to this critical area. The Chair and the other board member discussed directing FHFB’s examination staff to conduct an audit of corporate governance in the FHLBank System. The next section discusses this audit. Our 1998 report noted that off-site monitoring in the FHFB examination program was weak and conducted in an uncoordinated manner. Off-site monitoring involves the analysis of financial data to monitor bank financial performance and to identify risks. Off-site monitoring can serve as an effective means to supplement the work of examiners working on-site. Regular monitoring between examinations, which generally take place on an annual basis, is important because the FHLBanks’ financial conditions and risks can change significantly in a short period. The 1998 report noted that OS off-site monitoring consisted of four periodic reports as well as monthly reviews of various bank information. While these reports were potentially beneficial, FHFB suspended them in 1997 due to staff constraints in OS. The 1998 report also noted that coordination between OPRA and OS on off-site monitoring activities was lacking. We found that FHFB’s off-site monitoring program is still limited. For example, FHFB’s OS director said in July 2002 that only one individual performs off-site monitoring functions. The director said that, rather than assess the financial performance of the FHLBanks, the individual tracks FHLBank compliance with existing examination recommendations. Although this function is important, it does not provide FHFB with information about safety and soundness issues, such as changes in the FHLBanks’ financial condition. A more comprehensive off-site monitoring program could help alert FHFB officials to the need for an on-site examination.
In August 2002, FHFB’s Chair announced that FHFB would significantly increase the resources devoted to OS. FHFB set the fiscal year 2003 budget for OS at $9.7 million, a $2.8 million increase from fiscal year 2002 funding levels. FHFB also hired a new OS director and deputy director, both of whom have experience in examinations at other financial regulatory agencies. FHFB also plans to increase the number of examination staff from 10 to 24 by fiscal year 2004 and to open satellite locations in different parts of the country in which to base examiners. Under the previous examination approach, 8 to 10 examination staff spent 6 to 7 months on travel each year. FHFB officials said satellite locations would reduce travel demands on the examination staff and aid in hiring and retaining qualified staff. At the time of our review, OS was in transition; however, FHFB had increased the number of examiners. As of February 5, 2003, there were 14 examiners on staff at FHFB, an increase of 4. According to the OS Director, FHFB also plans to significantly change its approach to conducting examinations to obtain a fuller understanding of FHLBank operations as FHLBank System business becomes more complex. Prior to September 2002, FHFB assigned its examiners to teams that included 4 to 5 members. In general, each examiner was responsible for conducting annual examinations at 6 of the 12 FHLBanks. According to FHFB officials, the examination teams reviewed different banks from year to year, and their membership was rotated as well. Therefore, an FHFB examiner might work on a particular bank’s examination one year but not the next. Moreover, FHFB examiners did not necessarily specialize in the areas (e.g., credit risk, interest rate risk, or affordable housing programs) that are examined on an annual basis. Instead, an examiner might review a bank’s interest rate risk operations at one examination and review another bank’s affordable housing program at the next examination.
The OS Director said that under the revised examination approach, by the fourth quarter of fiscal year 2003, FHFB plans to have three examination teams in place. Each team will consist of 8 members, with each team responsible for 4 of the 12 FHLBanks for 3 to 4 years. In addition, each examiner will focus on a particular area, such as interest rate risk or affordable housing compliance, at each of the four FHLBank examinations for which the individual is responsible annually. For example, the OS Director said that two recent hires on the examination staff have expertise in the area of corporate governance. According to FHFB, as of February 2003, OS had completed ten targeted corporate governance reviews at the FHLBanks and expects to complete a final report on all 12 banks’ corporate governance by March 2003. The OS Director also said that FHFB plans to develop a proactive and risk-based management approach to conducting FHLBank examinations. Prior to FHFB’s recently announced changes to its examination program, examiners might examine a particular FHLBank as of June 30 of a particular year. The examiners would then assess whether the bank was operated in a safe and sound manner and complied with all laws and regulations as of that date. The OS Director said that, under the new risk management approach, the examination staff would try to identify the future risks facing each FHLBank and develop plans to help ensure that FHLBank management establish systems and controls to adequately manage those risks. Overall, FHFB’s planned examination program is similar to the examination program of OFHEO, which regulates Fannie Mae and Freddie Mac. Fannie Mae and Freddie Mac are large government-sponsored, privately owned and operated corporations chartered by Congress to enhance the availability of mortgage credit across the nation during good and bad economic times.
Similar to FHFB’s proposed examination program, OFHEO has established a risk-based examination program that assesses the controls Fannie Mae and Freddie Mac use to manage significant risks. In addition, OFHEO assigns staff with specialized skills, such as interest rate risk management, to its examination teams. OCC, FDIC, OTS, and the Fed Board have also implemented similar risk-based examination programs. FHFB has plans to expand off-site monitoring. Specifically, as of October 21, 2002, an FHLBank analyst was assigned to each FHLBank in an effort to enhance the OS off-site monitoring program. According to the OS Director, the recently announced merger between OS and OPRA (see figs. 3 and 4) provides opportunities for FHFB to enhance its off-site monitoring capability. In particular, examination and OPRA staff will now work in the same unit, which should allow better coordination of their activities. Available data indicate that 50 (67 percent) of the 75 public interest directors that FHFB appointed for the first time from January 1, 1998, through May 8, 2002, made one or more political contributions in the 8-year period prior to their initial appointments (see fig. 5). We obtained public interest director appointment data from FHFB and contribution data from CRP. CRP provided data that covers all federal election cycles from 1990 through 2002. We organized and presented the CRP contribution data to cover the tenures of the three FHFB chairs who were in office when FHFB made public interest director appointments during 1998 to 2002: Bruce Morrison, June 1995 to July 2000; William Apgar, July 2000 to December 2000; and John T. Korsmo, December 2001 to present. We focused our analysis on the 8-year period prior to each public interest director’s appointment to ensure a standard means of comparison among the three FHFB chairs.
Figure 6 shows that 28 (56 percent) of the public interest directors who reported making contributions prior to their appointments had done so 1 to 10 times, while 22 (44 percent) had done so 11 or more times. Of the 5 directors appointed during Apgar’s tenure, all reported making 1 to 10 donations. The public interest directors appointed during the Morrison and Korsmo tenures were generally divided equally between those who reported 1 to 10 donations and those who reported giving 11 or more contributions. Table 1 summarizes the number of contributions and the total amount of those contributions that each FHFB public interest director appointee made prior to his or her appointment. When we totaled each director’s contributions, we found the median value of those totals ranged from $3,250 for the 5 appointments made during Apgar’s tenure to $8,364 for the 26 appointments made during Korsmo’s tenure. As shown in table 2, during the Morrison and Korsmo tenures, FHFB did not appoint public interest directors who gave exclusively to the party that was not the party of the chair. That is, FHFB did not appoint any public interest directors who had made contributions exclusively to the Republican Party during Morrison’s tenure, nor did FHFB appoint any public interest directors who gave exclusively to the Democratic Party during Korsmo’s tenure. However, during the Morrison and Korsmo tenures, FHFB appointed public interest directors who gave to both parties. During Apgar’s tenure, FHFB appointed three individuals who gave exclusively to the Democratic Party, one who gave exclusively to the Republican Party, and one who gave to both parties. We also analyzed data obtained from Fannie Mae and Freddie Mac to determine the political contributions of members of their boards of directors who are appointed by the President. Using CRP data, we determined the political contributions of Fannie Mae and Freddie Mac directors appointed from January 1, 1998, through 2002.
Our analysis shows that 18 (95 percent) of the 19 Fannie Mae and Freddie Mac directors appointed during that period had made political contributions in the 8-year period prior to their initial appointments. The median number of contributions per director was 11, and the median total of preappointment donations was $7,000. In some cases, FHFB’s use of Schedule C positions differs from the practices of other financial regulators. At FHFB and five of the six other financial regulatory agencies that we reviewed, the agencies allot Schedule C positions to the chair and other board members. Unlike FHFB, four of these five agencies appoint Schedule C officials to head certain staff offices, such as the Office of Policy or the Office of General Counsel. The FHFB chair’s personal staff, including a Schedule C appointee, are responsible for the agency’s public and congressional affairs functions, a practice unique among the regulatory agencies that we reviewed. Schedule C appointees at FHFB and five other agencies work directly for the agencies’ policymakers: the chair and other board members (see table 3). Unlike FHFB and CFTC, the other four agencies allot Schedule C positions to head some staff offices. For example, FCA has Schedule C appointees in positions such as Director of the Office of Congressional and Public Affairs, Director of the Office of Policy and Analysis, and Chief Operating Officer. SEC has Schedule C appointees for three director positions: Director of the Office of Communications, Director of the Office of Legislative Affairs, and Director of the Office of Public Affairs. SEC also allots Schedule C positions to several nondirector-level positions within the organization. We compared FHFB’s approach to managing its public and congressional affairs functions to the approaches of the six other financial regulatory agencies.
Unlike FHFB, each of these six agencies has a separate public and congressional affairs office, typically staffed by full-time career employees. At SEC, FCA, and NCUA, the chairs appoint Schedule C officials to run these offices, while career officials run the offices at the Fed Board and FDIC. At CFTC, a noncareer and non-Schedule C executive heads the public and congressional affairs office. Since FHFB’s August 7, 2002, reorganization, the Chair’s personal staff has been responsible for the agency’s public and congressional affairs functions. Specifically, FHFB officials said that a Schedule C appointee from the Chair’s staff has assumed responsibility for managing media relations, and a career staff member who is also on the Chair’s staff is responsible for congressional relations. According to the FHFB officials, the Chair’s personal staff have been able to incorporate the public and congressional affairs functions into their normal duties. FHFB officials said that the Chair’s staff have been able to assume these responsibilities because, with about 100 employees, FHFB is a comparatively small agency with limited congressional and public affairs responsibilities. Due to the delegation of authority, the FHFB chair has relatively broad administrative power, compared with most financial regulatory chairs, to appoint senior officials and reorganize the agency without obtaining a board vote or approval. The delegation prevents the full board from participating in key administrative decisions that have potential policy implications. At a January 29, 2003, FHFB board meeting, the board, in a close 3 to 2 vote along party lines, rejected a proposal to revise the delegation of authority that would have required board approval for senior appointments and major agency reorganizations. Although FHFB board member staff exchanged proposed language to revise the delegation of authority prior to the meeting, there was little collaboration among the staff.
While the FHFB board has determined that the delegation remains the most efficient means to administer the agency, we continue to believe that the decision potentially frustrates one of Congress’ objectives in establishing a board to regulate the FHLBank System. That is, the board structure is designed to help ensure that key decisions benefit from the experiences and perspectives of all board members. In addition, the FHFB board’s decision will likely result in the continuation of the sometimes bitter conflicts that have periodically characterized the relationships among board members over the past 8 years. Going forward, the FHFB board would benefit from considering a range of options that would involve all board members in key administrative decisions. Some of these options may not involve any changes to the current delegation of authority. For example, the chair could notify and brief other board members of key administrative decisions prior to their implementation and seek other board members’ advice and counsel on these decisions. Or, the FHFB board could consider practices at other financial regulatory agencies that provide for board or commission involvement in key administrative decisions. At CFTC, for instance, the chair’s authority to reorganize the agency is similar to that of the FHFB chair, but CFTC’s practice has been for the chair to submit major reorganization proposals to the commission for a vote. In addition, board members and their staffs could work together to determine if there are any areas of agreement on approaches—including revising the delegation of authority—that would increase board participation in key administrative decisions while preserving the chair’s authority to administer the agency on a day-to-day basis. 
While there is no requirement or guarantee that FHFB board members agree on all key administrative decisions, establishing processes and practices to ensure full board participation could enhance the quality of such decisions and improve relations among board members. FHFB offered significant financial compensation to staff who received RIF notices during the August 2002 reorganization. However, provisions in the settlement agreements do not appear fully consistent with federal age discrimination statutes and regulations. For example, a provision in the settlement agreements that required employees to waive their rights to file charges, complaints, or appeals with EEOC is not consistent with ADEA’s prohibition against waivers of these rights. FHFB also (1) did not include required language in the settlement agreements advising employees in writing to consult with an attorney prior to signing the settlement agreements and waiving their ADEA rights and (2) failed to provide the affected staff with information on the job titles and ages of staff, as required under ADEA and EEOC regulations. We have informed the EEOC about our findings regarding the FHFB settlement agreement provisions pertaining to the waiver of ADEA rights. FHFB did not act in a timely way to address FHLBank examination program weaknesses that we identified in a 1998 report. However, in 2002, current FHFB Chair Korsmo announced plans and initiated actions, such as hiring more examiners, that have the potential to improve the quality of the agency’s safety and soundness oversight. Continued FHFB management focus on the examination program is essential over the next several years to ensure that the reforms are fully implemented and their effectiveness evaluated. We also note that FHFB’s Chair initiated these changes to the examination program under the delegation of authority.
While these changes hold out the potential for improving FHFB’s examination program, the unilateral manner in which they were carried out resulted in further disputes among board members. Permitting greater board involvement in such key decisions would provide greater opportunity for consensus without necessarily delaying any changes. Decisions that have the potential to affect the critical means by which FHFB ensures FHLBank safety and soundness merit the attention and consideration of the full board. To ensure full board participation in key administrative decisions that have policy implications, such as senior appointments and major reorganizations, we recommend that the FHFB board consider a range of options that could be implemented within the current delegation of authority. These options include the chair (1) notifying, briefing, and/or soliciting input from other board members on major administrative decisions prior to their implementation and (2) submitting key administrative decisions to the board for a vote or approval. We also recommend that board members and their staffs hold discussions on approaches—including potential revisions to the delegation of authority—that would ensure board participation in key administrative decisions while preserving the chair’s authority to administer the agency on a day-to-day basis. We also recommend that FHFB fully comply with applicable federal age discrimination statutes and regulations in offering settlement agreements to employees subject to RIFs. We received FHFB’s comments on a draft of this report from the Director of the Office of Management and written comments from FHFB board members Franz S. Leichter and Allan I. Mendelowitz, which are reprinted in appendixes IV and V, respectively. We also provided relevant excerpts from a draft of this report to the six other financial regulatory agencies that we reviewed (SEC, FDIC, NCUA, Fed Board, CFTC, and FCA).
FCA’s Chair provided written comments, which are reprinted in appendix VI. Representatives from all six regulatory agencies that we contacted provided oral comments, and we received technical comments, which we have incorporated as appropriate. FHFB disagreed that the board should revise the delegation of authority to allow for board participation in key administrative decisions. FHFB agreed with one of our findings regarding the settlement agreements offered to employees subject to the 2002 RIF but disagreed with two others. FHFB also commented on the draft report’s findings regarding the examination program, public interest director appointments, and Schedule C positions. Among other statements, Leichter and Mendelowitz agreed with our recommendation regarding the delegation of authority and expressed concern about how the agency conducted the RIF. The FCA Chairman’s comments related to the number of Schedule C positions that are filled at the agency. Representatives from each of the six agencies that we contacted agreed with the draft report’s findings regarding their agency’s operations. The following summarizes FHFB’s comments and, where appropriate, our evaluation for the five report sections: (1) the delegation of authority, (2) FHFB’s compliance with age discrimination requirements in connection with the RIF, (3) FHFB’s examination program, (4) public interest director appointments, and (5) Schedule C positions at financial regulatory agencies. We also summarize the comments of Leichter, Mendelowitz, and the FCA Chairman. FHFB noted that at the January 29, 2003, meeting the board had considered (as we recommended in the draft report) and rejected a proposal to revise the delegation of authority that would have required board approval for senior appointments and major reorganizations. FHFB stated that a majority of the board believes that vesting broad administrative responsibility in the chair is the best method to manage the agency’s day-to-day operations.
However, we continue to believe that full board participation in key administrative decisions is essential. FHFB also made several points to support its view that the board should not revise the delegation of authority. First, FHFB stated that the current delegation of authority allows individual board members to propose items to the board for action. Second, FHFB stated that we did not provide sufficient evidence to support the assertion that there was tension and conflict among board members regarding the delegation of authority. FHFB also stated that Congress intended for tension to exist in creating FHFB— due to the divided partisan composition of the board—and that such tension can serve a “constructive purpose.” Third, FHFB stated that we made an error in figure 1 of the draft report “ . . . in asserting that the appointment of senior officials and personnel decisions at the Securities and Exchange Commission must be made with board approval.” FHFB stated that reorganization and top-level appointments at SEC do not require a board vote. In addition, FHFB included a lengthy attachment to its official agency comments, which has not been included in this report. The attachment discussed a range of issues, including a history of the delegation of authority, theories on management and delegations of authority at other agencies, and information on FHFB’s examination and supervision program for the FHLBanks. Regarding FHFB’s first point, we believe that the provision in the delegation allowing board members to call board meetings to challenge the chair’s key administrative decisions does not provide for enhanced board collegiality and consultation. Rather, the delegation of authority allows the chair to make and implement such decisions without consulting other board members and requires any board members who oppose these decisions to marshal a majority vote to overturn the decision. 
In our view, board member collaboration would be enhanced if consultations and votes or approvals took place before key administrative decisions were made and implemented. While there is no requirement or guarantee that all board members would agree to vote for or approve key administrative decisions, full board participation in the process could serve to improve the decisions and enhance collegiality. We disagree with FHFB’s second point and believe that this report offers significant evidence of tensions and conflicts between board members resulting from the delegation. Such tension and conflicts have periodically characterized board member relations over the past 8 years. We acknowledge that tension and conflict are inevitable at any board with divided representation and that such tension can in some cases be beneficial. However, we note that at FHFB, unlike most other financial regulatory agencies, there is no appropriate process or forum for board members to consider key administrative decisions before they are made and implemented. We also disagree with FHFB’s final assertion that our report incorrectly described the process for appointing senior officials at SEC. The draft report stated that at most other financial regulators, boards either vote on or must give approval for senior appointments. The relevant authority regarding SEC—Reorganization Plan No. 10 of 1950—states that the commission is responsible for approving senior appointments. The commission has established a practice to fulfill this responsibility whereby the chair obtains the approval of other commissioners prior to making senior appointments. SEC officials agreed with our report’s statements regarding the agency’s appointment process. FHFB said it agreed with one of our findings regarding the settlement agreements but disagreed with two others.
FHFB said it concurs that the settlement agreements should have advised employees to consult with an “attorney” rather than a “representative” prior to signing. However, FHFB also stated that the language in the settlement agreements was not intended to interfere with EEOC’s enforcement authority. FHFB stated that any employee was clearly free to challenge the settlement agreement at a later date. FHFB also stated that it disagreed with a statement in the draft report that it was required to provide the names, ages, and positions of employees who were not selected for separation from the agency. FHFB also stated that since it abolished all of the positions in the former Office of Communications and the Office of Managing Director, OWBPA and EEOC requirements on providing information to employees who were offered the settlement agreement did not apply. Additionally, FHFB disagreed with a statement in the draft report that FHFB’s decision to place staff subject to the RIF on administrative leave during the advance notice period was inconsistent with OPM regulations. FHFB said that its statute authorizes the agency to pay the compensation of its employees without regard to the laws affecting federal employees, and that administrative leave is a form of compensation. While FHFB agreed with our findings regarding advising employees to consult with an attorney prior to signing the agreements, we need to clarify that the problem with the separation agreements was not confined to the use of the term “representative” rather than the term “attorney.” OWBPA and EEOC regulations require that the employer advise the employee in writing to consult an attorney prior to waiving their ADEA rights. FHFB’s settlement agreements were deficient in that they did not directly advise or recommend that employees consult with an attorney prior to signing them. 
Rather, the settlement agreements used more passive language stating that each employee had the opportunity to contact a representative to discuss the terms and conditions of the agreements, which the courts have held does not meet the statutory requirements. If FHFB had replaced the word "representative" in the settlement agreement with the word "attorney," the agreements still would not have been consistent with OWBPA and EEOC requirements. We disagree with FHFB that the settlement agreement provisions pertaining to EEOC and information requirements were consistent with applicable requirements. EEOC regulations clearly prohibit any agreement that interferes with an individual's right to file a complaint with EEOC or affects the EEOC's rights and responsibilities to enforce the ADEA. While FHFB asserts that employees were clearly free to challenge the agreements at a later date, the broad language of the settlement agreement states that the employee agrees not to file a complaint or appeal with the EEOC. Such a broad prohibition could deter an individual from contesting the agreement and the validity of the waiver of ADEA rights. Additionally, the draft report stated that FHFB did not provide information on the job titles and ages of staff offered settlement agreements to all such staff. The draft report did not state that FHFB should have provided such information for staff who were not subject to separation. There is also no requirement that employers provide names of employees, and the draft report did not state that FHFB should have done so. Nonetheless, FHFB's failure to provide information on the job titles and ages of employees subject to the RIF to all such employees was inconsistent with EEOC regulations. While the EEOC regulations define the scope of the information requirement, the regulations do not suggest that when all of the positions in a particular office are eliminated, no information needs to be supplied.
The purpose for providing the information is for employees to have the opportunity to assess the viability of an age discrimination claim and whether or not to waive their rights to pursue such a claim. FHFB employees were not provided with the information necessary to make such a decision. Regarding FHFB's comments on placing staff on administrative leave, we have added language to the report stating that FHFB believes it has statutory authority to disregard OPM regulations requiring staff to be kept on active status during the advance notice period. However, we note that the scope of FHFB's authority and whether it appropriately supersedes OPM's RIF regulations has not been established. FHFB stated that Chair Korsmo initiated significant changes to enhance the capabilities of the agency's FHLBank examination program and that the draft report did not sufficiently recognize that he was responsible for these initiatives. FHFB stated that at the start of Korsmo's tenure in December 2001, the agency's Office of Supervision was understaffed and insufficiently focused on the FHLBanks' risk assessment processes, internal control systems, and systems of corporate governance. FHFB also listed the steps that the Chair initiated to improve supervision, including hiring experienced management for OS and increasing the number of examiners. FHFB also stated that while it agrees with our assertion that these changes have the potential to improve the agency's examination program, it believes that the changes have already resulted in significant progress. We agree that Chair Korsmo has initiated important steps to improve the agency's examination program and have added language to the report describing these initiatives. However, we continue to believe that additional time and management oversight are needed to ensure that this critical FHLBank examination function is improved.
FHFB stated that the draft report had a narrow focus on the political contributions of FHLBank public interest directors and that this narrow focus resulted in an incomplete portrayal of the selection process, recent improvements in that process, and the critical roles played by public interest directors. FHFB also stated that the draft report's focus called into question the integrity of the appointment process and suggested that political contributions are a determining factor in the appointment process. FHFB stated that public interest directors are now appointed in public votes and that the Chair instituted new criteria for the selection of public interest directors. FHFB also noted that the board voted unanimously to approve 28 public interest directors at the January 29, 2003, board meeting. We were asked to provide an analysis of the political contributions of public interest directors prior to their initial appointments. We did not conduct a broader review of the appointment process or the qualifications and capabilities of public interest directors. Our review was not intended to call into question the appointment process or the integrity or qualifications of individual public interest directors. We have added language to this report discussing the Chair's criteria for appointing public interest directors and the January 29, 2003, board meeting. FHFB noted that the report did not identify any Schedule C practices at FHFB that violated OPM rules and that Chair Korsmo has instituted changes to correct past practices that improperly categorized employees who should have had Schedule C appointments. FHFB stated that all of the agency's Schedule C officials serve as confidential advisers to board members. FHFB also reiterated that the small size of the agency serves as an appropriate basis for assigning its public and congressional affairs functions to the Chair's personal staff.
In their comments, Leichter and Mendelowitz said that because the FHFB board did not consider or vote on an agency response to our draft report, there is no official agency response to the report. We have not attempted to resolve this dispute among FHFB officials, and we treat the response from FHFB's Director of Management as the agency's official response. Regarding the major issues discussed in the draft report, Leichter and Mendelowitz made the following comments:
Delegation of Authority: Leichter and Mendelowitz stated that the delegation of authority (1) resulted in conflicts between board members; (2) was contrary to FHFB's authorizing legislation, which vests agency management in the board rather than the chair; and (3) was "anachronistic" because it was enacted when the board had a part-time membership. In response to a comment from Leichter and Mendelowitz regarding changes that the FHFB board made to the delegation of authority in 1993, we have added language to the report.
FHFB Actions in Connection with the RIF: Leichter and Mendelowitz said that they were "deeply concerned" about the way in which FHFB conducted the RIF and expressed concern about the elimination of the Office of Managing Director because the action impeded communication between board members and agency staff. They also raised concern that the draft report did not discuss other procedures that FHFB followed in conducting the RIF. Such an analysis was outside the scope of this review.
Public Interest Director Appointments: Leichter and Mendelowitz said that the appointment of public interest directors has become increasingly "political," and they expressed concerns that public interest directors lack expertise in the FHLBanks' increasingly sophisticated financial practices. As discussed previously, our review was limited to an analysis of public interest director political contributions prior to their initial appointments.
Schedule C Practices: Leichter and Mendelowitz questioned whether it was "appropriate" for one board member's staff to perform functions that the former Office of Communications previously performed for the entire board. While the FHFB Chair's staff currently performs these functions, we note that at other agencies (SEC, CFTC, NCUA) the chairs can appoint and remove the Schedule C officials who run public or congressional affairs offices. Therefore, it is not clear that the FHFB Chair exercises greater control over these functions than is the case at the other agencies. The FCA Chairman stated that of the agency's 12 Schedule C positions, 6 are currently held by career staff.
We will send copies of this report to the Chairman of the Senate Committee on Banking, Housing and Urban Affairs; the Chairman of the House Financial Services Committee; and the Ranking Minority Member of the Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises of the House Committee on Financial Services. We will also send copies to FHFB, NCUA, FCA, CFTC, SEC, FDIC, and the Fed Board. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Mathew J. Scire at (202) 512-6794 if you or your staff have any questions concerning this report. Key contributors to this report were Rachel M. DeMarcus, M'Baye Diagne, Nadine Garrick, Ayeke Messam, Marc W. Molino, Andy Pauline, Wesley M. Phillips, Mitchell B. Rachlis, and Barbara M. Roesmann.
As discussed with your staff, our report objectives are to (1) compare the Federal Housing Finance Board (FHFB) chair's administrative authorities to those of the chairs of other financial regulators and discuss the basis for that authority; (2) assess FHFB's compliance with selected applicable statutes and procedural requirements in connection with a reduction-in-force (RIF) that was carried out as part of an agency reorganization announced on August 7, 2002; (3) assess FHFB's progress in enhancing its Federal Home Loan Bank (FHLBank) safety and soundness examination program; (4) provide data showing the political contributions of FHLBank public interest directors prior to their appointments; and (5) compare FHFB's use of Schedule C appointments and the organization of its public and congressional affairs functions with the practices of other financial regulatory agencies. To study the source of the FHFB chairs' administrative authorities and how they compare to those of other financial regulators, we reviewed the Federal Home Loan Bank Act, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, and FHFB's delegation of authority to its chair. We also reviewed the legislation, regulations, delegations of authority, and other legal documents that govern or describe the scope and limitations of each chair's authority at six other selected financial regulators. We interviewed officials from each of the selected financial regulators, including former FHFB officials, to obtain their views on the authorities of chairs and board members at each of these entities. Using this information, we compared the FHFB chairs' administrative authorities to those of the selected financial regulators.
To study FHFB's compliance with required RIF and other procedures, we reviewed the Age Discrimination in Employment Act, as amended, the Older Workers Benefit Protection Act, applicable Equal Employment Opportunity Commission (EEOC) and Office of Personnel Management (OPM) regulations, and case law. We also contacted senior FHFB officials regarding the RIF. Our review did not include an analysis of the "bumping rights" procedures that FHFB followed in carrying out the RIF. To study FHFB's progress in enhancing its FHLBank safety and soundness examination program, we assessed whether FHFB addressed recommendations about its examination program that we made in a 1998 report. We reviewed 1999 to 2001 examination reports for the 12 FHLBanks. We also interviewed FHFB officials, as well as officials at OFHEO, to which we compared FHFB's examination program. To study the data showing the political contributions of FHLBank public interest directors prior to their appointments, we obtained public interest director appointment data from FHFB for 1998 to 2002 and contribution data from the Center for Responsive Politics (CRP) for 1990 to 2002. CRP organizes and provides political contribution data that are initially reported to the Federal Election Commission (FEC). To ensure a standard comparison, we determined whether directors made a political contribution in the 8-year period prior to their appointment. We matched and merged the two data sets and analyzed the data to determine the number of public interest directors who made contributions prior to their initial appointments. We also collected data from Fannie Mae and Freddie Mac on the names and appointment dates of board members who received their initial presidential appointments from 1998 through 2002. We obtained data from CRP to determine the Fannie Mae and Freddie Mac directors' political contributions in the 8-year period prior to their appointments.
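The matching and 8-year-window analysis described above can be sketched in code. This is a minimal illustration, not the actual procedure used in this review; the field names, ISO date strings, and exact-name matching key are all assumptions.

```python
# Hypothetical sketch of matching director appointments to contribution
# records and flagging contributions in the 8 years before appointment.
# Field names and date formats are illustrative assumptions.
from datetime import date


def contributed_within_window(appointed: date, contribution_dates, years: int = 8) -> bool:
    """Return True if any contribution falls in the `years`-year period before appointment."""
    window_start = appointed.replace(year=appointed.year - years)
    return any(window_start <= d < appointed for d in contribution_dates)


def match_directors(directors, contributions):
    """Merge director records with contribution records on the name field,
    then flag each director with a prior-window contribution."""
    by_name: dict[str, list[date]] = {}
    for c in contributions:
        by_name.setdefault(c["name"], []).append(date.fromisoformat(c["date"]))
    return {
        d["name"]: contributed_within_window(
            date.fromisoformat(d["appointed"]), by_name.get(d["name"], [])
        )
        for d in directors
    }
```

In practice, matching contributors to directors by name alone is error-prone; as noted below, erroneous matches in the CRP-provided data set had to be identified and corrected, so any real analysis would include manual review of the matched pairs.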
We took several steps to assess the reliability of the CRP data and concluded that the data were sufficiently reliable for our purposes. First, we interviewed CRP officials to determine their data management procedures and the approach that they followed to match the list of public interest directors that we provided to the CRP contribution database. Second, we reviewed the matched data set that CRP provided and corrected erroneous matches between directors and contributors. Third, in performing our analysis, we conducted basic tests on the data we used. However, we did not verify the accuracy of the FEC political contribution data on which CRP records are based. Our review did not include an analysis of FHFB's appointment process or the integrity and qualifications of individual board members. To study FHFB's use of Schedule C appointments and organization of the public and congressional affairs functions and compare it with the other financial regulatory agencies, we interviewed agency officials at each of the selected financial regulators, and reviewed documents that described the allocation of Schedule C appointments, as well as the management and staffing structure of the agencies' public and congressional affairs functions. We conducted our review in Washington, D.C., San Francisco, and Seattle from April 2002 through February 2003 in accordance with generally accepted government auditing standards. We compared FHFB to six other regulatory boards and commissions. We reviewed each board or commission's statute and policies relating to the administrative authority of the chair. We focused on two administrative areas: appointment of senior officials and reorganization decisions. In cases where the chair is authorized to make key administrative decisions without board approval, we also determined whether board members had authority to review decisions made by a chair in these circumstances.
We reviewed the following seven agencies: Commodity Futures Trading Commission (CFTC), Farm Credit Administration (FCA), Federal Deposit Insurance Corporation (FDIC), Federal Housing Finance Board (FHFB), Board of Governors of the Federal Reserve System (Fed Board), National Credit Union Administration (NCUA), and Securities and Exchange Commission (SEC).
Commodity Futures Trading Commission
The commission consists of five members, appointed by the President with the advice and consent of the Senate, who serve staggered 5-year terms.
General Administrative Powers of the Chair: According to the statute that established CFTC, the chair is the chief administrative officer. Executive and administrative functions are generally exercised solely by the chair, according to budget categories, plans, programs, and priorities established and approved by the commission.
Key Administrative Powers of the Chair:
Appointment of Senior Officials: According to the statute establishing CFTC, the chair's appointment of heads of major administrative units is subject to approval of the commission.
Reorganizations: While the chair is generally authorized to reorganize the staff of the agency pursuant to his or her power over executive and administrative functions, as a practice the commission votes on agency reorganizations.
Farm Credit Administration
The board consists of three members, appointed by the President with the advice and consent of the Senate, who serve staggered 6-year terms.
General Administrative Powers of the Chair: The President designates one of the members as chairman, and the chairman serves as the agency's chief executive officer (CEO). The powers of the chair as CEO that are necessary for day-to-day management may be exercised and performed by the chairman through such other officers and employees of the FCA as the chair shall designate.
Policy Statement 64, originally adopted by the board of FCA in 1994 and revised as recently as September 24, 1999, provides rules for the transaction of business (Rules) and operational responsibilities of the board.
Key Administrative Powers of the Chair:
Appointment of Senior Officials: According to the statute that established the FCA, the appointment of the heads of major administrative divisions is subject to the board's approval. Under Policy Statement 64, the board interprets "heads of major administrative divisions" to mean the chief operating officer and career office directors. However, in some cases, such as the Director of the Office of Congressional and Public Affairs, the chair can appoint Schedule C officials to run these offices.
Reorganizations: Under Policy Statement 64, the board approves the FCA organizational chart down to the office level along with relevant functional statements for each office. Under Policy Statement 64, the authority to make organizational changes within any division rests with the CEO.
Review of Decisions Made by Chair: As noted in Article V and Article IX of Policy Statement 64, Special Meetings of the board may be called:
1. by the Chairman;
2. by any two members; or
3. if there is at the time a vacancy on the board, by any member.
Any call for a Special Meeting shall set forth the business to be transacted and shall state the place and time of such a meeting. Except with the unanimous consent of all members, no business shall be brought before a Special Meeting that has not been specified in the notice of call of such a meeting.
Section 1. The business of the Board shall be transacted in accordance with these Rules (Policy Statement 64) as the same may be amended from time to time: Provided, however, that upon agreement of at least two members convened in a duly called meeting, the Rules may be waived in any particular instance, except that action may be taken on items at a Special Meeting only in accordance with Article V, Section (3) b, hereof.
Section 2. These Rules may be changed or amended by the concurring vote of at least two members upon notice of the proposed change or amendments having been given at least 30 days before such vote.
Federal Deposit Insurance Corporation
The President, with the advice and consent of the Senate, appoints three members of the five-member board for a term of 6 years. In addition to the three appointive directors, there are two ex officio members of the FDIC board: the Comptroller of the Currency and the Director of the Office of Thrift Supervision.
General Administrative Powers of the Chair: One of the appointive directors shall be designated by the President, with the advice and consent of the Senate, to serve as chair of the board for a term of 5 years. The chair serves as the CEO. The board has delegated to the chair the authority to manage the FDIC's day-to-day operations and the general powers and duties usually vested in the office of the CEO of a corporation.
Key Administrative Powers of the Chair:
Appointment of Senior Officials: A delegation of authority to the chair, approved on January 29, 2002, gave authority to the chair to appoint and remove senior officers.
Reorganizations: Under the delegation of authority, the chair has authority to reorganize the agency.
Challenging Administrative Decisions Made under Delegation: Two or more board members may initiate a review of any decision made under the delegation of authority.
Federal Housing Finance Board
The board consists of four members appointed by the President with the advice and consent of the Senate who serve staggered 7-year terms; the fifth member is an ex officio member, the Secretary of Housing and Urban Development.
General Administrative Powers of the Chair: The President designates an appointed director as chair. The board has adopted a delegation of authority that authorizes the chair to effect the overall management, functioning, and organization of the board.
Key Administrative Powers of the Chair:
Appointment of Senior Officials: Under the delegation of authority, a chair can appoint agency personnel without a board vote or obtaining board approval.
Reorganizations: Under the delegation of authority, a chair can reorganize the agency without a board vote or consent.
Challenging Administrative Decisions Made under Delegation: Under the delegation of authority, the chair must call a special session of the board to consider any matter of business on the request of any two or more board members.
Board of Governors of the Federal Reserve System
The board consists of seven members appointed by the President with the advice and consent of the Senate. The full term of a board member is 14 years, and the seven terms are staggered so that one expires in each 2-year period.
General Administrative Powers of the Chair: The chair, subject to board supervision, serves as its "active executive officer."
Key Administrative Powers of the Chair:
Appointment of Senior Officials: The board votes on the appointment of senior officials.
Reorganizations: The board votes on major administrative reorganizations, which are defined as those that involve changing officers (appointing or removing an officer).
National Credit Union Administration
The NCUA has a full-time, three-member board, which is appointed by the President with the advice and consent of the Senate.
General Administrative Powers of the Chair: The Federal Credit Union Act provides that the chair is the spokesperson for the board and implements policies and regulations adopted by the board.
Key Administrative Powers of the Chair:
Appointment of Senior Officials: The board votes on the appointment of senior officials. However, in some cases, such as the director of the Office of Congressional and Public Affairs, the chair can appoint Schedule C officials to run these offices.
Reorganizations: The board votes on reorganizations of the agency.
Securities and Exchange Commission
Five members serve staggered 5-year terms and are appointed by the President with the advice and consent of the Senate.
General Administrative Powers of the Chair: There is no statutory reference to the selection of a chair. However, under section 3 of the Reorganization Plan No. 10 of 1950, the function of the commission with respect to choosing a chair from among the members was transferred to the President. The Reorganization Plan also transferred to the chair from the commission the administrative and executive functions of the commission, including appointment and supervision of personnel, the distribution of business, and the use and expenditure of funds. Appointment by the chair of the heads of the major administrative units is subject to the approval of the commission. However, in some cases, such as the Director of the Office of Public Affairs, the chair can appoint Schedule C officials to run these offices.
Key Administrative Powers of the Chair:
Appointment of Senior Officials: Under Reorganization Plan No. 10 of 1950, the commission approves the appointment of senior officials.
Reorganizations: Under Reorganization Plan No. 10, the chair can reorganize the agency.
While employees generally may agree to waive rights to pursue employment-related claims if the waiver is knowing and voluntary, special considerations apply to waivers of rights under the Age Discrimination in Employment Act (ADEA).
Title VII of the Civil Rights Act of 1964 (Title VII) does not include age as a basis for illegal discrimination in the workplace. However, in 1967, Congress enacted the ADEA to promote the employment of older persons based on their ability rather than age, to prohibit arbitrary age discrimination, and to help employers and employees find ways of meeting problems arising from the impact of age on employment. The ADEA forbids arbitrary discrimination against workers on the basis of age in hiring, promotion, terms of employment, and discharge. The ADEA was enacted with characteristics of both Title VII and the Fair Labor Standards Act of 1938 (FLSA); while Title VII's substantive prohibitions on discrimination were included, the enforcement mechanisms of FLSA were also incorporated. This structure caused controversy over waivers of rights under ADEA because Title VII waivers are treated differently from FLSA waivers. Title VII rights may be waived without government supervision so long as the waiver is knowing and voluntary. In contrast, rights provided by the FLSA cannot be waived without government supervision. Waivers must be supervised by the Secretary of Labor or under a federal court-supervised settlement of a lawsuit filed pursuant to FLSA. On August 27, 1987, EEOC issued a final rule that allowed unsupervised waivers if the waiver was knowing and voluntary and provided that a valid ADEA waiver may not release prospective claims and may not be in exchange for consideration that includes employee benefits to which the employee was already entitled. The EEOC rule also listed several factors as being relevant to determining whether a waiver is knowing and voluntary. These factors included whether the employee was encouraged to consult with an attorney. However, Congress suspended the rule, citing concerns that the rule was contrary to public policy and, in the spring of 1988, held hearings concerning waivers of ADEA rights and EEOC's regulation.
In October 1990, the Older Workers Benefit Protection Act (OWBPA) amended ADEA to add specific requirements for releases of ADEA claims. The legislative history of OWBPA provides that the legislation is intended to protect individuals covered by ADEA, and it further provides that the legislation establishes minimum requirements that must be satisfied before a court can proceed to determine factually whether a waiver was knowing and voluntary. All of the requirements are necessary independent of the knowing and voluntary considerations. The informational requirements are designed to permit older workers to make more informed decisions and to determine whether an employment termination program gives rise to a valid claim under ADEA. OWBPA requires that no individual may waive any right or claim under ADEA unless the waiver is knowing and voluntary. OWBPA specifies the minimum requirements for a knowing and voluntary release of claims under ADEA. The waiver must, at a minimum, comply with the following requirements:
1. Be written in a manner calculated to be understood by the average individual eligible to participate,
2. Specifically refer to rights and claims arising under ADEA,
3. Not waive rights and claims that may arise after the date the waiver is executed,
4. Provide for consideration in addition to anything of value to which the individual already is entitled,
5. Advise the individual in writing to consult with an attorney prior to executing the agreement,
6. Give an individual a period of at least 21 days within which to consider the agreement, and
7. Provide that the individual may revoke the agreement for a period of at least 7 days following the agreement's execution.
A waiver in settlement of a charge filed with EEOC or a court action must meet the first five factors listed above, and the individual must be given a reasonable period of time within which to consider the agreement.
Additional informational requirements apply in the case of a waiver requested in connection with an exit incentive or other employment termination program offered to a group or class of employees. The employer must inform the individual in writing as to the following:
1. Any class or group of individuals covered by the program, and
2. The job titles and ages of all individuals, eligible or selected for the program, and the ages of all individuals in the same job classification or organizational unit who are not eligible or selected for the program.
In addition, the individual must be given at least 45 days within which to consider the agreement. These additional requirements were added because, in the case of group termination programs, additional protections are required for individuals from whom a waiver is sought. More time is provided to weigh options, understand the program, and consult with an attorney. Employers are required to provide detailed, written information describing the group termination program. The OWBPA also mandates that a waiver not affect EEOC's rights and responsibilities to enforce ADEA and further states that "[n]o waiver may be used to justify interfering with the protected right of an employee to file a charge or participate in an investigation or proceeding conducted by the Commission." In June 1998, EEOC published final regulations that provide guidance on all waivers of ADEA rights and claims, regardless of whether the employee is employed in the private or public sector, including employment by the United States. As the Supreme Court stated in Oubre v. Entergy Operations, Inc., 522 U.S. 422 (1998): "The OWBPA implements Congress' policy via a strict, unqualified statutory stricture on waivers, and we are bound to take Congress at its word. Congress imposed specific duties on employers who seek releases of certain claims created by statute. Congress delineated these duties with precision and without qualification: An employee 'may not waive' an ADEA claim unless the employer complies with the statute . . .
The OWBPA governs the effect under federal law of waivers or releases on ADEA claims and incorporates no exceptions or qualifications." Other courts have used similar language in describing the operation of OWBPA. "Since the OWBPA establishes minimum or threshold requirements, absolute technical compliance with its provisions is required. The absence of even one of the OWBPA's requirements invalidates a waiver." Butcher v. Gerber Products Company, 8 F. Supp. 2d 307, 314 (S.D.N.Y. 1998). "Under the OWBPA, a release cannot be deemed knowing and voluntary unless all of the requirements of the OWBPA have first been satisfied." Collins v. Outboard Marine Corp., 808 F. Supp. 590, 594 (N.D. Ill. 1992). "When an employee signs a purported release of claims arising under the ADEA, that release will not bar an ADEA claim unless the release strictly complies with the statutory requirements of the OWBPA." Thiessen v. General Electric Capital Corporation, 232 F. Supp. 2d 1230, 1233 (D. Kan. 2002). While an employee can waive the right to recover from an employer based on a claim of age discrimination under ADEA, OWBPA provides that a waiver may not affect the EEOC's rights and responsibilities to enforce ADEA. In addition, no waiver may be used to justify interfering with the protected right of an employee to file a charge or participate in EEOC investigations or proceedings. EEOC regulations also provide that no waiver agreement may include any provision imposing any limitation adversely affecting any individual's right to file a charge or complaint, including a challenge to the validity of the waiver, with the EEOC. According to EEOC guidance, the OWBPA language is evidence that Congress reaffirmed the public policy against interference with EEOC enforcement efforts.
EEOC’s guidance cites the legislative history of OWBPA, which states that the provision is intended as a clear statement of support for the principle that the elimination of age discrimination in the workplace is a matter of public as well as private interest, and that no waiver agreement may be permitted to interfere with the achievement of that goal. In connection with the OWBPA’s statutory prohibition, the Senate Committee report expresses support for the holding and reasoning of the Fifth Circuit in EEOC v. Cosmair, Inc., 821 F.2d 1085 (5th Cir. 1987). In Cosmair, the court found that a waiver of the right to file a charge with the EEOC is void as against public policy in part because the public interest in private dispute settlement is outweighed by the public interest in EEOC enforcement of ADEA. Allowing the filing of charges to be obstructed by enforcing a waiver of the right to file a charge could impede EEOC enforcement of the civil rights laws. The court found that the EEOC depends on the filing of charges to notify it of possible discrimination. The court determined that an employer and an employee cannot agree to deny to the EEOC the information it needs to advance the public interest in preventing employment discrimination. However, an employee can waive the underlying cause of action and the right to recover from the employer in a lawsuit. Both the OWBPA and EEOC regulations provide that an employee must be advised in writing to consult an attorney. Courts analyzing waivers have applied the requirement strictly. In American Airlines v. Cardoza-Rodriguez, 133 F.3d 111 (1st Cir. 1998), the court considered a waiver of rights offered to certain employees in connection with an early retirement program. The First Circuit found that language contained in the release stating “I have had reasonable and sufficient time and opportunity to consult with an independent legal representative of my own choosing before signing this . . .
” was insufficient because the employer did not advise employees to consult with counsel before executing the release. In Thiessen, the court considered a release stating “the Company advised the employee in writing to consult with a lawyer before signing this Agreement.” The court found that this language suggests that the Company, at some previous time, advised the employee to consult with an attorney and determined that this language, standing alone, does not comply with OWBPA’s requirement that an employer advise the employee in writing to consult an attorney prior to executing the release. The court also said, however, that the employer could have complied with the statute by providing the employee with prior written advice so that the statement in the release was factually accurate. In Cole v. Gaming Entertainment, L.L.C., 199 F. Supp. 2d 208 (D. Del. 2002), the court considered a provision in a written release of employment claims that the “[E]mployee acknowledges that he/she has been advised to consult with an attorney prior to executing this Agreement.” The court found that the language was insufficient to satisfy the requirements of ADEA. Citing American Airlines, the court found that the passive language used by the release was insufficient under current case law. However, the court found that the release language might have met OWBPA standards if the employer’s representatives had advised the employee of his right to counsel as contemplated by the release language.
The OWBPA provides that, where a waiver is requested in connection with an exit incentive or other employment termination program offered to a group or class of employees, the waiver cannot be considered knowing and voluntary unless, at a minimum, the employer informs the individuals in writing, in a manner calculated to be understood by the average individual eligible to participate, as to (1) any class, unit, or group of individuals covered by the program; any eligibility factors for such program; and any time limits applicable to such program and (2) the job titles and ages of all individuals eligible or selected for the program and the ages of all individuals in the same job classification or organizational unit who are not eligible or selected for the program. The EEOC regulations provide that “other employment termination program” as set out in OWBPA usually means a group or class of employees who were involuntarily terminated and who are offered additional consideration in return for their decision to sign a waiver. The regulations go on to state that the existence of a program will be determined based upon the facts and circumstances of each case. A “program” exists when an employer offers additional consideration for the signing of a waiver pursuant to an exit incentive or other employment termination program (e.g., a reduction in force) to two or more employees. The regulations also state that, typically, an involuntary termination program is a standardized formula or package of benefits that is available to two or more employees. The terms of the program are generally not subject to negotiation between the parties. The regulations make clear that the number and identity of employees who must be provided with the information will depend on how the employer chose persons who would be offered consideration for signing a waiver.
In some cases, the information requirement extends to all employees within a certain job category; in other cases, it extends to all employees within a particular division or to all employees in the employer’s facility. The legislative history of OWBPA indicates that group termination programs raise additional issues and require additional protection for individuals from whom a waiver is sought. These informational requirements are designed to permit older workers to make more informed decisions in group termination programs. The employees affected by these programs have little or no basis to suspect that action is being taken based on their individual characteristics. The Senate Report explains that the principal difficulty encountered by older workers in these circumstances is their inability to determine whether the program gives rise to a valid claim under ADEA and that the need for adequate information and access to advice before waivers are signed is especially acute. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases.
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under “Order GAO Products” heading. | The Federal Home Loan Bank System (System) faces additional risks due to the development of new products such as direct mortgage purchase programs. Responding to concern about the methods used for administrative decisionmaking, and the ability of the Federal Housing Finance Board (FHFB) to fulfill its critical mission to regulate the safety and soundness of the System, GAO was asked to (1) compare the FHFB chair's administrative authorities with those of other financial regulators and discuss the basis for that authority, (2) assess FHFB's compliance with selected statutes and regulations in connection with an August 2002 reduction-in-force (RIF) carried out as part of an agency reorganization, and (3) assess FHFB's progress in enhancing its FHLBank safety and soundness examination program. FHFB's chair has greater authority to make key administrative decisions than the chairs at five of the six other financial regulators GAO reviewed. FHFB's chair has the authority to appoint and remove officials and reorganize the agency without a vote by the board. In contrast, statutes, regulations, and practices limit the chairs' authorities at most other regulators. In particular, the boards or commissions at these agencies approve most senior-level appointments and several boards approve major reorganizations. The basis for the FHFB chair's comparatively broad administrative authority is a delegation of authority, which the board passed in 1990 and 1993. 
The delegation allows the chair to make and implement key decisions without obtaining or benefiting from the views of all board members and has contributed to sometimes bitter conflicts among board members over the past 8 years. Although FHFB provided significant financial compensation to staff subject to the RIF, its procedures were not fully consistent with all applicable federal age discrimination statutes and regulations. For example, FHFB presented a settlement agreement to separated staff that offered 3 to 6 months’ salary in exchange for, among other things, the employees agreeing to waive their rights to file charges, complaints, or appeals with the Equal Employment Opportunity Commission (EEOC). EEOC regulations implementing the Age Discrimination in Employment Act do not permit waivers of employees’ rights to file charges or complaints with EEOC. In addition, FHFB did not advise the affected employees in writing to consult an attorney prior to signing the agreements, as required. Although for several years FHFB did not take steps to correct weaknesses in its FHLBank examination program that GAO identified in a 1998 report, FHFB’s current chair has recently undertaken several steps to improve its examinations. In 1998, and again in 2002, GAO found that FHFB performed limited reviews of FHLBank functions that are critical in managing the banks’ financial and operational risks. Among other changes announced in 2002, FHFB plans to increase the number of examiners from 10 to 24 and revise its examination approach to focus on the major risks and quality of controls at each FHLBank. Although these changes have the potential to improve FHFB’s examination program, it is too soon to assess their effectiveness. |
During Operation Desert Storm, the Army deployed all or nearly all of certain support units, such as transportation and military police units. As threats to U.S. security interests evolve and defense budgets shrink, it is important that the Army accurately identify the support forces it requires. TAA is the Army’s biennial process to determine its support force requirements and to recommend the type and number of support units that the Army should include in its budget. The requirements generated in this process depend on a variety of inputs and guidance, including scenarios derived from the Defense Planning Guidance, wargaming assumptions, and logistical data that are developed for use in the computer modeling. For purposes of this report, logistical data include planning factors, consumption rates, and other data. Planning factors cover 9 of the Department of Defense’s (DOD) 10 classes of supply; for modeling purposes, these factors are usually expressed in pounds per person per day. Consumption rates include such factors as the number of soldiers admitted to a hospital per day and the number of prisoners captured per day. An example of other logistical data would be the amount of support that allies can provide to offset U.S. requirements. While planning scenarios are largely given to the Army, logistical data must be developed by the Army. These data are compiled in the Army Force Planning Data and Assumptions document (AFPDA). Once the data are finalized—during TAA force structure conferences—the Concepts Analysis Agency conducts the computer modeling, which generates unit requirements based on a set of rules that determine the number of support units needed. After requirements are determined, additional force structure conferences are held where Army officials decide which units can be filled within the projected resource levels. Figure 1 highlights key elements of the TAA process for developing requirements and making force resourcing decisions.
The Army’s Deputy Chief of Staff for Logistics (DCSLOG) is responsible for developing the logistics data in the AFPDA. In practice, some of this responsibility has been delegated to the Combined Arms Support Command (CASCOM), which is the Army’s integrator for some combat service support issues. Biennially, DCSLOG and CASCOM update the logistics portions of the AFPDA by tasking the major commands, Army component commands, and schools to validate the logistical data related to their areas of expertise. For example, school representatives are tasked to validate data based on their perspectives on doctrine; component commands are tasked to provide their perspectives on unique data and issues related to their theater. The logistical data are presented to workshops to gain group acceptance. They are then sent forward to the TAA force structure conference, where the data are approved. Army documents describe the AFPDA update as a systematic review and validation of key data used in TAA. However, Army regulations related to TAA primarily focused on the validation and management of planning factors. Effective May 1994, the Army broadened its regulation to include additional logistical data found in the AFPDA. This change should help to improve the validity of logistical data, but additional procedures are needed to correct the problems we found with the AFPDA update process. Before May 1994, Army regulation 700-8 specified responsibilities for the development and management of logistics planning factors. The Army Logistics Center, CASCOM’s predecessor, was responsible for managing the development, validation, and collection of planning factors, and was to recommend factors to DCSLOG for approval. However, DCSLOG and CASCOM officials did not believe that the development and management of other logistical data for use in the AFPDA, such as theater specific data provided by component commanders, were covered in this or any other regulation prior to May 1994. 
In 1993, the Army Audit Agency found the Army’s management of planning factors to be inadequate and recommended changes to the process. The recommended changes included tasking responsible activities to (1) update planning factors periodically and (2) validate methodologies and assumptions used to develop planning factors. In 1994, the Army revised its regulations to improve the management of planning factors. These revisions included specifying time frames for updates to take place and incorporating internal control responsibilities to guide the development of planning factors. The regulation was also changed to include other logistical data and to link the development of logistical data to the AFPDA. While the regulation gave DCSLOG the overall responsibility for logistical data management, the day-to-day management of logistical data was delegated to CASCOM. The Army’s TAA process relied heavily on commands and schools, which were asked before the TAA workshops to review and validate the accuracy of logistics data. However, we found that some data had not been validated, were outdated, or were not supported by documented studies. Because the process was poorly documented, we could not determine how widespread these problems were. Further, no organization was responsible for ensuring that the data validations occurred and were derived from consistent and sound methodological studies. Our review of available documentation for several past TAAs showed that some data had not been validated in several years. Although some school officials believed the AFPDA contained outdated data, actions were not undertaken to validate or change the data.
For instance, officials with the ordnance school, which develops doctrine for maintenance units, expressed concern in 1989 that the rates for equipment expected to be abandoned and the rates for vehicles expected to be damaged in combat had not been updated in 4 years and, thus, were unlikely to be accurate. These rates primarily affect the number of maintenance units. In another instance, the Army engineers submitted workload factors that were outdated and had not been validated prior to the January 1992 TAA workshop. These factors measured the number of hours it takes to construct such structures as railroads, bridges, and pipelines. A new study was done only after concerns were raised about the validity of these factors during the AFPDA workshops. We also found data that were not supported by documentation. At the U.S. Army Central Command (ARCENT), for example, officials who provided data for TAA in 1992 had not maintained documentation that would show how the data were developed. This lack of documentation reduces assurance that the data are valid and can cause problems during future updates if key personnel change. For example, U.S. Army, Korea, officials told us that they did not know how data on the Korean theater had been developed because there were no files or individuals who could explain the prior year’s validation process. We found that while the Army sought consistency and accuracy in the logistical data update process, no organization ensured that the commands and schools used a reasonable methodology or that the studies and supporting models used to develop the data were valid. Neither CASCOM nor DCSLOG had overseen the validation process. According to a DCSLOG official, DCSLOG has not routinely reviewed the methodology used by various proponents who submit factors and data to the process.
This official stated that only if a factor looked unusual would it generate an inquiry back to the proponent to ask how that factor was developed. CASCOM officials stated that they had no regulatory requirement to review the methodology of proponents who developed logistical data. The Army’s revised regulation governing the development and validation of logistical data for the TAA process is an improvement. The revised regulation requires CASCOM to examine the AFPDA to ensure data consistency, adherence to doctrine, necessity, identification of sources, and rationale of methodology. It also specifies time frames for the AFPDA updates, thus putting the commands and schools on notice of when data validation will be required. CASCOM officials stated that they have not yet defined their role in overseeing the update of AFPDA data. Therefore, CASCOM has not told the commands and schools what will be required of them. We believe that CASCOM should establish procedures that specify how commands and schools are to validate and maintain all logistical data in the AFPDA. Specifically, major commands, Army component commands, and schools should be directed to ensure that their data are based on sound analytical studies and assumptions and that the methodological bases for those data and assumptions are documented. Moreover, CASCOM’s guidance should specify what CASCOM will require from commands and schools to exercise its oversight responsibility. According to DOD, CASCOM is already developing procedures to improve the update process and should complete a review of the adequacy of existing data by the end of 1996. According to Army regulations, theater-specific data are best obtained from the Army components most familiar with the region and involved in the theater war-planning process.
However, we found that the current level of participation by Army component commanders does not ensure that the data and assumptions used in TAA are similar to the data that component commands use to develop their war plans. As a result, the required force structure developed in TAA does not agree with theater war plans. Army component commands should have an important role in the TAA process. During development of the AFPDA, Army regulations instruct the Army components to review, revalidate, and submit theater-unique logistics data. Specifically, they are to provide data such as support provided by allies, theater stockage policies, and theater consumption factors. Also, as part of the TAA process, Army components identify theater-unique requirements that may differ from current doctrinal rules. This identification is required because the Army recognizes that each theater is unique and that the Army component commands are the most familiar with their areas. In practice, however, Army components sometimes believe that their role is insufficient to affect the outcome of the process. Thus, Army component officials said they do not always consider developing data for TAA a priority, and some commands do not always send representatives to the workshops where data are discussed and adopted. In other instances, component command representatives at the workshops have not challenged data that are inconsistent with their plans. TAA requirements for military theaters sometimes differ from those in theater war plans. Some differences can be attributed to the fact that TAA provides a longer-term force structure outlook than theater war plans. Other differences, however, result from TAA and war plans being derived from different assumptions, logistical data, and computing methods. For example, according to U.S.
Army, Europe, officials, TAA requirements developed in 1992 did not match planning efforts in the European theater because the two processes used different scenarios. TAA modeled a northern region scenario for Europe, whereas U.S. Army, Europe, used a southern region scenario in its war plans. The TAA’s northern region scenario was based on the Defense Planning Guidance. U.S. Army, Europe, officials believe that TAA-generated requirements are based on an unrealistic scenario. These officials told us that conflicts in the southern region are more probable than in the northern region and thus believe establishing requirements for that region is prudent. Further, force structure requirements for the southern region are more challenging than for the northern region because of the more mountainous terrain, lack of infrastructure, and lack of host nation capability. As a result, U.S. Army, Europe’s, requirements and the TAA requirements for Europe differed greatly. U.S. Army, Europe, officials stated that these differences still exist in the current TAA update cycle. In another example, we compared TAA support requirements developed in 1992 for Southwest Asia with ARCENT’s operational requirements. The analysis showed that requirements in some support areas, such as medical, maintenance, and military police, differed significantly. Table 1 summarizes some of the differences between ARCENT requirements based on TAA and war plans. As shown in the table, ARCENT plans require 31 combat support hospitals, which would require 18,817 positions, while TAA requires 18 hospitals, or 10,908 positions—a difference of 13 hospitals and 7,909 positions. The ARCENT medical planner believes TAA uses disease and non-battle injury rates much lower than what the Command believes are likely in its region, resulting in lower patient estimates and fewer hospitals.
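The hospital figures quoted above can be checked with simple arithmetic. The short sketch below is purely illustrative: the variable names are ours and no such computation is part of the TAA process; the numbers are the ones quoted from the report.

```python
# Illustrative check of the combat support hospital comparison cited above.
# Figures are quoted from the report; names are ours, for illustration only.
arcent_hospitals, arcent_positions = 31, 18_817  # ARCENT war plan requirement
taa_hospitals, taa_positions = 18, 10_908        # TAA-generated requirement

hospital_gap = arcent_hospitals - taa_hospitals  # 13 hospitals
position_gap = arcent_positions - taa_positions  # 7,909 positions

print(f"difference: {hospital_gap} hospitals, {position_gap:,} positions")
```

Run as written, this reproduces the gap stated in the text: 13 hospitals and 7,909 positions.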
A CASCOM official responsible for medical units was unaware that ARCENT used a different method to determine requirements for combat support hospitals; this official believes that the TAA method is more precise. The table also shows that TAA requires about 8,260 general support maintenance positions, while ARCENT plans envision 2,767 positions—a difference of 5,493 positions. TAA requirements were developed in response to a protracted Central European scenario that involves equipment overhaul in theater. Because ARCENT does not envision a protracted conflict in the Southwest Asia region, it plans to perform most major repairs in U.S. depots. ARCENT officials said that they have not yet been successful in convincing TAA decisionmakers to adopt the ARCENT concept. However, a CASCOM official familiar with maintenance unit issues said that ARCENT has not surfaced this issue in TAA workshops or conferences. The table also shows differences between TAA and ARCENT war plans for combat support military police companies. ARCENT plans require 107 of these companies, whereas TAA requires 77—a difference of 30 companies and 5,280 positions. The ARCENT military police planner stated that the requirements differ because TAA modeling does not adequately reflect theater geography and the concentration of troops in determining requirements for these companies. CASCOM officials stated that TAA has not addressed these issues because ARCENT has not raised them at workshops and conferences. We recommend that the Secretary of the Army take the following actions: Require CASCOM to establish procedures that specify (1) how major commands, Army component commands, and schools should validate and maintain data for the AFPDA and (2) what CASCOM will require to exercise its oversight responsibility.
Establish procedures to identify differences between theater planning requirements and TAA requirements and to ensure either that there are valid reasons for the differences or that the requirements are adjusted. DOD generally concurred with our findings and our recommendation that procedures are needed to ensure that data are valid. DOD noted that CASCOM is in the process of establishing procedures to improve the validation of data used in TAA. DOD disagreed with our recommendation that the Army identify differences between theater planning and TAA requirements to ensure that the reasons for the differences are valid. DOD believes that the two processes were designed for different purposes and yield different but consistent results. We recognize that there are differences between the process used to compute requirements for the TAA and theater commands. These differences largely result because TAA computes requirements further in the future than do theater commands, which may result in different assumptions such as the level of unit modernization, threat, and budget levels. However, the examples we have cited are not related to these factors. Rather, the differences result from fundamentally different views about how certain functions will be performed or at what rate events will occur. Thus, we continue to believe that differences between the two processes should be identified to determine if they are valid. We conducted this review from July 1993 to September 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; the Director, Office of Management and Budget; and interested congressional committees and individuals. Copies will be sent to other interested parties upon request. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report.
Major contributors to this report are Robert Pelletier, Rodell Anderson, and Blake Ainsworth. To determine how Army assumptions and data used in the TAA process were developed, we reviewed available documentation from past TAAs and interviewed officials at the Department of the Army Headquarters, Washington, D.C.; Concepts Analysis Agency, Bethesda, Maryland; U.S. Forces Command, Fort McPherson, Georgia; Combined Arms Support Command and Quartermaster School, Fort Lee, Virginia; Transportation School, Fort Eustis, Virginia; Engineer School and Center, Fort Leonard Wood, Missouri; and the Medical School and Center, Fort Sam Houston, Texas. To gain a perspective on Army component commands’ participation in TAA and the relationship between TAA and operational planning, we interviewed personnel and reviewed related documents at the U.S. Central Command at MacDill Air Force Base, Florida; U.S. Army Central Command at Fort McPherson, Georgia; the U.S. European Command at Stuttgart, Germany; U.S. Army, Europe, at Heidelberg, Germany; and Forces Command at Fort McPherson, Georgia. We also discussed 8th U.S. Army’s role in TAA with logistics planners in Seoul, Korea. To assess TAA and theater requirements for Southwest Asia, we reviewed ARCENT’s major operations plan and troop list for the region and compared it with TAA modeling results and other TAA-related requirements and resourcing documents. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated December 19, 1994. 1. We continue to believe that the Army’s Total Army Analysis (TAA) process did not ensure valid data, based on the problems we found with the process. DOD describes improvements made during the current TAA; we did not review the improvements, and thus, we cannot comment on them. However, as DOD acknowledges in its response, additional procedures are needed to ensure that data are validated. 2.
Our information is based on numerous discussions with theater command representatives at Army Central Command and U.S. Army, Europe. These individuals indicated that theater command participation is not comprehensive and conscientious enough to ensure that theater perspectives are considered in the process. 3. We recognize that there are differences between the process used to compute requirements for the TAA and theater commands. These differences largely result because TAA computes requirements further in the future than do theater commands, which may result in different assumptions, such as the level of unit modernization, threat, and budget levels. However, the examples we have cited are not related to these factors. While DOD believes that the TAA process includes sufficient open forums in which force requirements are reviewed by representatives of theater commanders, many theater representatives believe their perspectives are not always included in the TAA process. Because we did not have access to these debates, we could not ascertain to what degree theater perspectives are raised or how differences are resolved. Therefore, we continue to believe that differences between the two processes should be identified to determine if they are valid. | Pursuant to a congressional request, GAO reviewed the Army's Total Army Analysis (TAA) process, focusing on whether its results are based on valid logistical data assumptions. GAO found that: (1) the Army lacks adequate procedures to govern the development and review of logistical data used in the TAA process; (2) until recently, Army regulations only focused on the management and validation of planning factors, and those regulations were not followed; (3) the Army has revised its regulations to require that all logistical TAA data be validated and that the process be centrally managed, but further guidance is needed to ensure the validity of all data and sufficient oversight of the process; and (4) Army programmers sometimes use data and assumptions in the TAA process that differ from what Army component planners use for war plans, which can result in vastly different requirements. |
The Air Force supply management activity group (SMAG) helps to maintain combat readiness and sustainability by supplying the Air Force with items necessary to support troops, weapon systems, aircraft, communications systems, and other military equipment. In doing so, SMAG is responsible for about two million items, ranging from weapon system spare parts to fuels, food, medical and dental supplies and equipment, and uniforms. SMAG is the largest supply management activity in Defense—it reported $12 billion in revenue and $24.5 billion in inventory for fiscal year 1997. SMAG operations are financed as part of the Air Force Working Capital Fund, which was formerly a part of the Defense Business Operations Fund. In December 1996, the Under Secretary of Defense (Comptroller) dissolved the Defense Business Operations Fund and created four working capital funds to clearly establish the military services’ and DOD components’ responsibilities for managing the functional and financial aspects of their respective activity groups. The funds are to operate by charging customers the full costs of goods and services provided to them as currently defined in the Department of Defense’s (DOD) Financial Management Regulation, Volume 11B, Reimbursable Operations, Policy and Procedures—Defense Business Operations Fund. The primary goal of the current working capital fund financial structure is to focus the attention of all levels of management on the full costs of carrying out certain critical DOD business operations and the management of those costs. Unlike a private sector enterprise which has a profit motive, the four working capital funds are to operate on a break-even basis over time by recovering the full costs incurred in conducting the business operations. Accomplishing this requires DOD managers to become more conscious of operating costs and make fundamental improvements in how DOD conducts business since customers have a defined amount of funds to pay for goods and services. 
It is critical for the working capital funds to operate efficiently since every dollar spent inefficiently is one less dollar available for other defense spending priorities. As figure 1.1 illustrates, SMAG receives orders from customers to purchase inventory items. Customers use appropriated funds, primarily Operation and Maintenance appropriations, to finance these orders. SMAG provides the inventory items to customers and bills customers on the basis of predetermined prices—commonly referred to as standard prices, which generally are to be in force throughout the entire fiscal year. SMAG uses payments from customers to replenish the inventory sold to customers by (1) buying new inventory items or (2) ordering repair services of existing inventory from industry and DOD depot maintenance activities as well as to cover operating costs. SMAG procures critical material and makes repair parts available to its customers through five inventory control points: Ogden Air Logistics Center (ALC), Ogden, Utah; Oklahoma City ALC, Oklahoma City, Oklahoma; Sacramento ALC, Sacramento, California; San Antonio ALC, San Antonio, Texas; and Warner Robins ALC, Warner Robins, Georgia. The five ALCs report to the Air Force Materiel Command (AFMC), located at Dayton, Ohio. SMAG’s operations are divided into two main categories: wholesale and retail. Wholesale operations encompass about 200,000 types of inventory items (generally weapon system related) for which the Air Force is the inventory control point. SMAG procures, manages, and sets the prices that customers will pay for these wholesale items. The wholesale prices include SMAG’s operational support cost, such as civilian salaries and accounting costs. SMAG adds a surcharge to the acquisition cost or repair cost of the individual inventory items to recover its operating costs. SMAG retail inventory operations encompass items that are managed by the other services, Defense agencies, or government agencies. 
These non-Air Force entities are the inventory control points for these items and, therefore, set the prices for these items. The retail portion of SMAG purchases these items from the non-Air Force entities and then resells them to customers. Since fiscal year 1991, the composition of the inventory items and costs managed by SMAG has significantly changed, making it more complicated for SMAG to manage, budget, and account for inventory. Prior to fiscal year 1991, SMAG consisted of the following six divisions: (1) systems support, (2) general support, (3) fuels, (4) medical/dental, (5) commissary, and (6) the Air Force Academy Cadet Store. The systems support division—the only wholesale division—procured consumable items (items that are replaced rather than repaired) for aircraft, missiles, and their major components. Beginning in fiscal year 1991, the Air Force added two new wholesale divisions to its stock fund operations: the reparable support and cost of operations divisions. The reparable support division procures depot level repairables and pays for the repair of these repairable inventory items. Managing repairable items was a new function for SMAG, and it complicated the budgeting and accounting for inventory items since SMAG did not have any experience in setting prices to recover the cost to repair items. The cost of operations division included the overhead costs for the five inventory control points of the stock fund which also complicated matters for the stock fund since these costs were not previously captured and included in the prices charged customers. The effect of adding the repairable support and cost of operations divisions to the stock fund is significant. For example, in fiscal year 1997, the Air Force reported wholesale division sales of about $6.8 billion of which only a reported $500 million pertained to the systems support division—the only wholesale division that existed prior to fiscal year 1991. 
Three other changes also impacted SMAG’s operations. In fiscal year 1992, the commissary division was transferred from SMAG to the Defense Commissary Agency. The Air Force budgets show that the commissary division had estimated sales of $2.6 billion to $2.8 billion per year in the early 1990s. About 475,000 consumable items were transferred from the system support division to the Defense Logistics Agency from fiscal year 1992 through 1997. The transfer of these items significantly reduced the number of items managed by the systems support division to about 125,000 items. On October 1, 1997, the Air Force consolidated SMAG’s three wholesale divisions into one wholesale division called the Materiel Support Division. The Air Force created the Materiel Support Division to provide better cost visibility. Now, the estimated costs associated with each ALC are included in the prices of inventory items they manage. Previously, these costs were spread across the board to all inventory items. The objectives of our review were to evaluate the (1) accuracy and consistency of SMAG’s accounting and budgetary reports, (2) SMAG’s price-setting process, and (3) Air Force Working Capital Fund’s cash management practices, including the practice of advance billing customers. 
To evaluate the accuracy and consistency of SMAG’s accounting and budgetary reports, we (1) obtained and analyzed the Defense Working Capital Fund Accounting Report (1307), the Air Force Defense Business Operations Fund Chief Financial Officer Annual Financial Statement, and the Air Force’s Working Capital Fund budget justification report for fiscal years 1992 through 1996, (2) interviewed Air Force and Defense Finance and Accounting Service (DFAS) officials to determine why reports covering the same period provided widely different results, and (3) analyzed the DOD Working Capital Fund report, dated September 1997, that was prepared in response to the National Defense Authorization Act for Fiscal Year 1997, to determine the actions DOD is planning to improve the accuracy of the working capital fund’s accounting report. We also met with DOD Inspector General and Air Force Audit Agency officials to discuss the accuracy of SMAG’s financial reports. The quantitative financial information used in this report on SMAG’s financial operations was produced from DOD’s systems—which have long been reported to generate unreliable data. We did not independently verify this information. The DOD Inspector General has cited system deficiencies and internal control weaknesses as major obstacles to the presentation of financial statements that would fairly present the Defense Business Operations Fund financial position for fiscal years 1993 through 1996. 
To evaluate SMAG’s price-setting process, we (1) obtained and analyzed the budget documents used in setting prices, (2) interviewed Air Force comptroller and program officials at Headquarters and AFMC to discuss the rationale for the various factors, including cost reduction goals, used to develop SMAG’s prices charged customers, (3) analyzed documents on the new price-setting procedures and interviewed Air Force officials to determine if the Air Force encountered problems in implementing the new procedures, and (4) analyzed budget documents concerning prices and interviewed Air Force officials to determine why the Air Force changed the fiscal years 1997 and 1998 prices once they were implemented. To evaluate the Air Force’s Working Capital Fund’s cash management practices, including its practice of advance billing customers, we (1) collected and analyzed financial information related to the cash balances, advance billings, collections, disbursements, accounts receivable, and accounts payable from fiscal year 1992 through fiscal year 1997, (2) obtained and analyzed DOD and Air Force guidance on managing cash, and (3) interviewed officials in the Office of the Under Secretary of Defense (Comptroller), Air Force Headquarters, and AFMC concerning the cash management practices and the Air Force’s continual need to advance bill customers to alleviate the cash shortage problem. We also analyzed the DOD Working Capital Fund report, dated September 1997, that was prepared in response to the National Defense Authorization Act for Fiscal Year 1997, to determine the actions DOD is planning to improve the working capital fund’s cash management practices. We did not independently verify the reported cash information.
We performed our work at the headquarters, Offices of the Under Secretary of Defense (Comptroller) and Air Force, Washington, D.C.; Air Force Materiel Command, Dayton, Ohio; the Sacramento Air Logistics Center, Sacramento, California; Headquarters, Defense Finance and Accounting Service, Arlington, Virginia; Defense Finance and Accounting Service Denver Center, Denver, Colorado; Air Combat Command, Langley Air Force Base, Virginia; Air Mobility Command, Scott Air Force Base, Illinois; and Air Force Space Command, Peterson Air Force Base, Colorado. Our work was performed from August 1997 through May 1998, in accordance with generally accepted government auditing standards. The Department of Defense provided written comments on a draft of this report. We incorporated DOD’s comments where appropriate. These comments are discussed in chapters 2, 3, and 4 and are reprinted in appendix I. We have previously reported that DOD has had long-standing problems in preparing accurate working capital fund financial reports, particularly with regard to the accuracy of net operating results (the difference between annual revenue and expenses). These data are critical in setting prices and ensuring that the funds break-even over time. The problems we identified were attributable to significant deficiencies in the working capital fund accounting systems as well as a lack of sound internal controls. We found that these financial reporting problems persist in SMAG’s accounting and budgeting reports, where we identified billions of dollars of unexplained differences in the reported net operating results each year from fiscal years 1992 through 1996. Because SMAG’s financial reports cannot be relied upon, DOD cannot be certain (1) of the actual operating results for SMAG or (2) whether the prices SMAG charges its customers are reasonable. 
In recognizing the funds’ financial reporting problems and other inefficiencies in fund operations, the National Defense Authorization Act for Fiscal Year 1997 required DOD to develop an improvement plan by September 30, 1997. In response to this requirement, DOD acknowledged that the working capital funds have financial reporting problems and arrived at decisions to address them. However, it has not yet developed a detailed implementation plan that lays out the specific steps that need to be taken to correct the problems. Having accurate financial reporting information is essential to monitoring fund operations, preparing budgets, and setting proper prices. For example, without accurate financial reports on SMAG, DOD and Air Force managers cannot effectively (1) analyze trends, such as annual or monthly increases or decreases in billings to and reimbursements from customers, to reduce or eliminate the need for additional working capital; (2) perform monthly aging analyses of accounts receivable to identify old receivables; or (3) measure the progress of execution data against the original budget, such as monitoring estimated and actual collection and disbursement amounts to assess operational and financial problems. Volume 1 of the Department of Defense Financial Management Regulation recognizes that DOD accounting systems should provide critical data for use in budget formulation and monitoring budget execution. Thus, it requires that financial management data be recorded and reported in the same manner throughout DOD components and that accounting information be synchronized with budgeting information. As mentioned earlier, we have previously reported that DOD has had long-standing problems in preparing accurate working capital fund financial reports, particularly with regard to the accuracy of the net operating results.
For example, in March 1993, we reported that although SMAG’s fiscal year-end 1992 financial report—as prepared by DFAS—showed a loss of $8.6 billion, an Air Force analysis disclosed a profit of $800 million. The $9.4 billion difference exceeded SMAG’s total revenue reported by DFAS for that year. Similarly, in March 1994, we reported that the Navy supply management activity group’s monthly financial report for May 1993 showed a profit of $23.1 billion which was over five times greater than the $4.3 billion in reported revenue for the same month and, therefore, was in error. We reported in March 1995, that due to a $6 billion clerical error, the Army supply management activity group reported an operating loss of $8.5 billion for fiscal year 1994 on a program that reported revenue of $7 billion for the same period. In addition, the DOD Inspector General has not been able to express an opinion on the accuracy of the Defense Business Operations Fund financial statements for fiscal years 1993 through 1996 due to significant deficiencies in the accounting systems and the lack of sound internal control structure. DOD has frequently acknowledged that the working capital funds’ financial reports are inaccurate—in the Acting Comptroller’s February 2, 1993, letter to the congressional defense committees; in the Defense Business Operations Fund September 24, 1993, improvement plan; and in DOD’s February 2, 1994, response to our October 1993 letter on the Defense Business Operations Fund improvement plan. More recently, DOD reported in its fiscal years 1996 and 1997 Annual Statement of Assurance as required by the Federal Managers’ Financial Integrity Act that inadequate accounting and reporting for the working capital funds, including the Air Force SMAG, were major control deficiencies. The Air Force also recognized SMAG financial reporting as a material weakness in its fiscal year 1997 Statement of Assurance. 
In this statement, the Air Force reported weaknesses in inventory valuation and noted the adverse effect it has on forecasting budget requirements. It stated that correcting this problem will result in more accurate inventory pricing and budgets. The Air Force also reported that internal controls were not sufficient to ensure that SMAG accounts were accurately reflected in financial statements. DOD has stated in the past that it was acting to correct these financial reporting problems. For example, in the Defense Business Operations Fund improvement plan dated September 1993, DOD stated that the primary causes of the financial reporting problems were (1) inconsistent or insufficient policy guidance and (2) inadequate financial systems. DOD’s September 1993 plan identified numerous actions needed to correct the deficiencies identified with the guidance and financial systems. However, because these long-standing problems continued, the congressional Defense committees acted to mandate improvements in the financial operations of the working capital funds. Specifically, the National Defense Authorization Act for Fiscal Year 1997 required DOD to prepare a plan by September 30, 1997, to improve the management and performance of the working capital funds. Among other things, the Act required DOD to address the issue involving financial reporting requirements. In response to the authorization act requirement, DOD developed a plan to improve the management and performance of its working capital funds. 
In this plan, dated September 1997, DOD stated that the working capital funds have financial reporting problems and DOD recognized that (1) differences between the budgeting and accounting reports for the same information confuse managers and should be eliminated, (2) large adjustments significantly affecting the operating results can occur as long as 4 months after the “as of” date and undermine management’s confidence in the reports, (3) a formal reconciliation of the various reports is not presently performed, and (4) eliminating the differences—or providing a reconciliation—would make reports more useful to decisionmakers and restore credibility and confidence in the reports. DOD’s plan also identifies decisions made to correct these financial reporting problems, which include (1) developing policies and procedures for reconciling budgetary and accounting reports, (2) developing a handbook that identifies the differences between the various reports to assist managers in monthly report analysis, and (3) revising the cost of goods sold treatment and presentation in the 1307 accounting report. The DOD September 1997 plan does a good job in identifying the problems hindering accurate financial reporting and the decisions reached to resolve the problems. However, DOD does not yet have an implementation plan that identifies (1) the specific tasks that need to be accomplished, (2) individual DOD components’ responsibilities when two or more components are involved with correcting the problem, or (3) milestones that could provide a basis for monitoring progress. DFAS officials told us that they are developing the detailed tasks that need to be performed. Financial reporting weaknesses still persist in SMAG’s accounting and budgeting reports.
Our comparison of SMAG’s accounting and budgeting reports for fiscal years 1992 through 1996 identified billions of dollars of differences in the reported net operating results and cost of goods sold—two factors that are integral in developing prices to be charged customers. Without reliable financial reports, DOD cannot be certain if SMAG’s prices will recover the costs of providing inventory to its customers. Moreover, the Congress, DOD, and the Air Force will not have the information they need for oversight and decision-making purposes. We compared SMAG’s net operating results reported in its Chief Financial Officer reports (the working capital fund’s annual financial statement) and in its 1307 accounting reports (the fund’s monthly accounting report which provides data on fund operations, including revenue earned, expenses incurred, profits, and losses) for fiscal years 1992 through 1996 and identified annual differences totaling billions of dollars. For the 3 most recent fiscal years, these differences are detailed in table 2.1. Both of these reports provide budget execution data on SMAG and, therefore, should provide the same information. As indicated in the above table, for fiscal year 1996, the original fiscal year-end 1307 accounting report, issued in November 1996, showed that SMAG had a net operating loss of about $2 billion. After this report was issued, DFAS made four revisions in preparing the SMAG portion of the Chief Financial Officer report, which had a major impact on SMAG’s net operating results. Specifically, DFAS adjusted the net operating results to show a positive $459 million in versions one and two of the Chief Financial Officer report, a negative $11 billion in version three, and a positive $2.2 billion balance in version four—the final Chief Financial Officer report. After these changes were made, DFAS revised the original 1307 so that the amounts in that report would match the amounts in the CFO report.
The size of these changes is significant, especially considering the fact that SMAG’s total revenue was $12.8 billion, according to the Chief Financial Officer report. We also compared SMAG’s reported net operating results in the 1307 accounting report to the Air Force Working Capital Fund budget justification report—which provides reported actual and budgetary data on revenue, expenses, net operating results, and prices and is also essential to managing SMAG operations. Again, we identified significant differences totaling billions of dollars for fiscal years 1992 through 1996. For example, the 1307 accounting report showed that SMAG lost about $2 billion during fiscal year 1996 while the budget report showed that it lost $99 million. Differences between these two reports are expected since not all the accounts used to determine the net operating results in the 1307 accounting report are used to develop the net operating results in the budget. For example, if Air Force disposes of inventory items and does not plan to replace these items, it does not consider this an expense for budgeting purposes. However, for accounting purposes, this is considered an expense that reduces the net operating results. The Department of Defense Financial Management Regulation (DOD 7000.14-R) requires business activities to (1) explain the differences between the net operating results shown in the 1307 accounting report and those used in the budget formulation of prices charged customers as shown in the budget, (2) identify and justify the net operating result amounts in the 1307 accounting report that DOD components request be excluded from the prices, and (3) obtain approval from the Office of the Under Secretary of Defense (Comptroller) for the amounts to be excluded. The Air Force did not reconcile the net operating results shown in these two reports as required because it believed that the 1307 accounting report was incorrect. 
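The reconciliation that DOD 7000.14-R requires can be sketched as a small computation: start from the accounting (1307) net operating result and back out the items excluded from budget formulation, such as disposals of inventory the Air Force does not plan to replace. This is a hypothetical sketch; only the fiscal year 1996 totals (roughly a $2 billion accounting loss versus a $99 million budget loss) come from the report, which does not itemize the difference, so the single excluded item and its amount below are invented for illustration.

```python
def reconcile(accounting_nor, excluded_items):
    """Derive a budget-basis net operating result (NOR) from the
    accounting-basis NOR by backing out excluded expense items.
    Amounts are in millions of dollars; excluded_items is a list of
    (description, amount) pairs for expenses recognized in the 1307
    accounting report but not in the budget."""
    budget_nor = accounting_nor
    trail = []
    for description, amount in excluded_items:
        budget_nor += amount  # removing an excluded expense raises NOR
        trail.append((description, amount))
    return budget_nor, trail

# Hypothetical single-item reconciliation of the FY 1996 figures:
accounting_nor = -2_000  # ~$2 billion loss per the 1307 report
excluded = [("disposals of inventory not planned for replacement", 1_901)]
budget_nor, trail = reconcile(accounting_nor, excluded)
print(budget_nor)  # -99, the loss shown in the budget report
```

Carrying the audit trail alongside the adjusted figure mirrors the regulation’s requirement to identify, justify, and obtain approval for each amount excluded from the prices.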
Given the magnitude of the net operating result differences reported in the 1307 accounting report, Chief Financial Officer report, and the budget, it is clear that the figures contained in these reports cannot be relied upon for oversight and decision-making. Without knowing the net operating results, the Air Force cannot determine whether the prices being charged SMAG’s customers will allow it to recover its costs and operate on a break-even basis. In some cases, the prices might have been set too high because of erroneous net operating result data. In other cases, prices might have been set too low to recover the costs of providing goods and services, thereby resulting in a cash shortage. (See chapter 3 for a discussion of SMAG pricing problems and chapter 4 for a discussion of cash problems.) A root cause of SMAG’s inability to accurately report on its financial operations is that it cannot determine an accurate cost of goods sold. The cost of goods sold is an important factor used in arriving at the group’s annual net operating results (revenue less expenses, which include the cost of goods sold, equals net operating results). Our comparison of SMAG’s cost of goods sold reported in its Chief Financial Officer reports, 1307 accounting reports, and budget justification reports for fiscal years 1992 through 1996 identified differences totaling billions of dollars. These differences are detailed in table 2.2. Office of the Under Secretary of Defense (Comptroller) and/or DFAS officials told us that DOD’s logistical and accounting systems are not capable of providing the necessary information to identify the actual (historical) cost of goods sold amount based on normal commercial practices such as the first-in, first-out cost or weighted average cost of the items sold. Therefore, DOD uses the latest acquisition cost method to value inventory and arrive at the cost of goods sold which is permissible under the Statement of Federal Financial Accounting Standards No. 
3, Inventory and Related Property. DOD uses a summary-level formula to adjust the value of inventory from the standard (selling) price to the latest acquisition cost by removing surcharges for operating costs from the standard price. Once DOD determines the latest acquisition cost, it then uses the following general formula for computing the cost of goods sold:

Beginning inventory at beginning-of-the-period latest acquisition cost
Less: Beginning allowance for unrealized holding gains/losses
Plus: Purchases of goods for sale
Less: Disposal or other drawdown of goods other than sale
Equals: Cost of goods available for sale
Less: Ending inventory at end-of-the-period latest acquisition cost
Plus: Ending allowance for unrealized holding gains/losses
Equals: Cost of goods sold

However, as evidenced by the reported differences shown in table 2.2, DOD has had problems implementing this formula to compute the cost of goods sold. Office of the Under Secretary of Defense (Comptroller) and DFAS officials also told us that in order to determine the actual (historical) cost of the goods sold, the method for valuing inventory must be changed from the current method of using the latest acquisition cost to valuing inventory based on historical costs. If DOD changes its method for valuing inventory, it must ensure that its method complies with the Statement of Federal Financial Accounting Standards No. 3. These officials further stated that by valuing inventory at historical cost, DOD would know the cost of each individual item sold, something it does not know now. This information could then be summarized for reporting on the supply management activity groups’ financial operations in DOD’s monthly accounting reports. However, the officials stated that before inventory could be valued at historical cost, DOD would have to either (1) modify its existing logistical and accounting systems or (2) develop new ones. Either option would be a long-term effort.
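The general formula translates directly into a short computation. This is a minimal sketch; the function name is ours and the dollar amounts are invented for illustration, not taken from SMAG’s reports.

```python
def cost_of_goods_sold(beginning_inventory_lac, beginning_allowance,
                       purchases, drawdowns_other_than_sale,
                       ending_inventory_lac, ending_allowance):
    """Apply DOD's summary-level cost-of-goods-sold formula, with
    inventory valued at latest acquisition cost (LAC)."""
    goods_available_for_sale = (beginning_inventory_lac
                                - beginning_allowance
                                + purchases
                                - drawdowns_other_than_sale)
    return (goods_available_for_sale
            - ending_inventory_lac
            + ending_allowance)

# Illustrative amounts in billions of dollars (not SMAG's actual data).
print(cost_of_goods_sold(
    beginning_inventory_lac=24.5, beginning_allowance=3.0,
    purchases=7.0, drawdowns_other_than_sale=1.5,
    ending_inventory_lac=23.0, ending_allowance=2.5))  # 6.5
```

Note that the result depends entirely on the accuracy of the inventory valuations and allowance accounts at both period ends, which is why the valuation problems described above flow straight into the reported cost of goods sold.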
SMAG’s financial reports cannot be relied upon to provide DOD and Air Force management or the Congress reliable information on SMAG’s results of operations. DOD has discussed this financial reporting problem in its September 1997 plan on improving the working capital funds and has identified several actions to correct the problem. However, it has not yet developed a detailed implementation plan to help ensure that the problems are corrected. Until SMAG can (1) determine its cost of goods sold and (2) reconcile the net operating results reported in the 1307 report to the net operating results reported for budgeting purposes, SMAG’s financial reports will continue to be questioned and lack credibility, and it will be extremely difficult, or impossible, to determine if the prices charged customers reflect what they need to be in order to recover costs. To ensure that DOD acts to correct SMAG’s financial reporting problems and develop an accurate cost-of-goods sold figure, we recommend that the Secretary of Defense develop a detailed implementation plan to ensure that the actions identified by DOD in September 1997 to correct the financial reporting problems are carried out promptly. The plan should (1) identify specific actions that need to be taken including the modification of existing systems or development of new systems, (2) establish milestones, (3) clearly delineate responsibilities for performing the tasks in the plan, and (4) ensure compliance with accounting standards on accounting for inventory and related property. To help link information contained in the accounting report to budget formulation, we also recommend that the Secretary of the Air Force direct a reconciliation of the net operating results in the 1307 accounting report to the reported actual net operating results in the budget justification report that is used for budgeting purposes. 
In its written comments, DOD concurred with our recommendation to develop a detailed implementation plan to ensure that the actions identified by DOD in September 1997 to correct the financial reporting problems are carried out in a timely manner. DOD has established three working groups that will develop specific implementation and execution plans and procedures for financial reporting. These three groups will meet throughout the summer of 1998 with reports expected later this year. DOD also concurred with our recommendation to reconcile the net operating results in the 1307 accounting report to the reported actual net operating results in the budget justification report that is used for budgeting purposes. DOD stated that additional lines have been added to the Working Capital Fund 1307 accounting reports to help explain the net operating differences that are reflected in the two reports. DOD policy requires the military services to develop prices for the working capital funds and use these prices as a basis for determining customer funding requirements. The baseline for this process should be the cost of buying and/or repairing items that are sold—which is known as the cost of goods sold. The Air Force, however, does not have reasonably reliable estimates of the number and type of items that SMAG customers will need or the expected cost of buying and/or repairing these items. Since it cannot use the cost of goods sold as the basis for SMAG’s prices, the Air Force has had to resort to using two separate processes to develop prices. The first process is used to develop a composite, or aggregate, price change in terms of percentage from one fiscal year to the next. This price change is then used to develop customers’ funding levels. The second process is used to establish prices for individual inventory items that reflect the expected cost of providing these items to customers.
Ideally, these two processes would ensure that customers will have sufficient funds to buy the items they need. However, this objective is not always accomplished for the following two reasons. First, there are no checks to ensure that the composite price changes approved by the Under Secretary of Defense (Comptroller) are implemented. Second, problems with the implementation of new procedures for allocating operating cost to individual inventory items could result in some customers receiving either too much or too little funding in fiscal year 1998, and have left Air Force officials without the reliable historical and/or budget execution data they need to effectively reallocate funds. The two primary objectives of SMAG’s price-setting process are to ensure that (1) prices charged for individual items reflect the expected cost of providing these items to customers and (2) SMAG’s composite, or aggregate, price change is identified—so that it can be properly factored into customer budgets. Because it lacks reliable data on SMAG’s cost of goods sold (see chapter 2) and reasonable estimates of customers’ needs, the Air Force cannot accomplish these objectives through the traditional approach—developing prices for individual items and then applying these prices to estimates of customer needs as the basis for determining customer funding requirements. As a result, it uses a summary-level analysis to establish a composite price change for SMAG customers and, in turn, customer funding levels, and then attempts to establish prices for individual items that are consistent with the composite price change. During the annual budget review process, the Air Force develops an estimate of customer funding requirements that is subsequently approved by the Office of the Under Secretary of Defense (Comptroller). 
This estimate is based on factors such as (1) what customers have spent on inventory purchases in the past, (2) anticipated changes in requirements, such as planned deactivations of units, and (3) expected changes in SMAG’s costs, such as the anticipated effect of planned cost reduction actions. For example, during fiscal years 1997 through 1999, using this estimating process, SMAG’s prices and its customers’ funding levels were reduced by about $950 million to reflect the savings the Air Force expects to achieve from its Lean Logistics initiative. During the annual budget review process, Air Force headquarters also develops, and the Office of the Under Secretary of Defense (Comptroller) approves, a composite, or aggregate, price change that represents the average percentage price increase or decrease that SMAG customers will experience during the budget year. As shown in table 3.1, from fiscal years 1992 through 1998, SMAG’s authorized composite price change ranged from a 26.7 percent increase to a 26.2 percent decrease. To ensure that SMAG and other working capital fund activities operate on a break-even basis over time, DOD policy requires that prices be (1) based on expected costs and (2) adjusted to return prior year profits to customers or recoup prior year losses from them. It also requires that the prices be established at the beginning of the fiscal year and remain constant throughout the year. Prices that customers actually pay for SMAG’s individual inventory items are determined by adding a surcharge to each item’s latest acquisition cost or latest repair cost. Specifically, “standard” prices are determined by adding surcharges to the latest acquisition costs, and “exchange” prices are determined by adding surcharges to the latest repair costs. SMAG charges exchange prices when customers turn in broken repairable items and receive serviceable items in return. 
It charges standard prices for all nonrepairable items and for repairable items if customers do not turn in broken items. The surcharges that are added to the price of each inventory item are expected to cover SMAG’s operational costs for such things as salaries, inventory storage, and accounting and automated data processing services. They also cover other factors, such as (1) reductions to reflect the anticipated effect of cost reduction initiatives and (2) returning profits or recouping prior year losses. Between fiscal years 1994 and 1998, these surcharges ranged from a low of $1.0 billion to a high of $2.1 billion and accounted for between 30 and 50 percent of SMAG’s expected wholesale revenue. It is important that the prices established for individual items be consistent with the composite price change approved by the Under Secretary of Defense (Comptroller) and used in budgeting. If actual prices are set lower than the approved level, then customers may have more funds than they need and scarce resources may be wasted. Conversely, if actual prices are set higher than the approved level, then customers may not have enough funds to buy the items they need. However, the Air Force does not have effective procedures to ensure that the actual prices are, in fact, consistent with the composite price change that has been approved by the Under Secretary of Defense (Comptroller). It generally does not know that there is a problem with SMAG’s prices until and unless the problem is reflected in budget execution data. For example, SMAG’s fiscal year 1997 prices were reduced by about 18 percent, effective April 1, 1997, when budget execution data showed that customers were spending much more than expected for inventory items, and Air Force officials determined that customers did not have sufficient funds to last the remainder of the fiscal year. 
According to Air Force headquarters officials, this problem occurred because (1) SMAG had to pay more than budgeted for the repair of items and (2) when these higher-than-expected costs were incorporated into SMAG’s prices, it caused SMAG’s composite price increase to be higher than the one approved by the Office of the Under Secretary of Defense (Comptroller) during the budget review process. Reducing SMAG’s prices without reducing its cost adversely affected SMAG’s net operating results and the Air Force Working Capital Fund’s cash balance. The Air Force has recognized that it needs to take additional steps in the price-setting process. Specifically, after the prices for individual items are established and before the start of the fiscal year, the Air Force believes, and we agree, that it should (1) determine if the new prices, when applied to the best available estimate of customer orders, will result in the approved composite price change and (2) adjust the prices for individual inventory items, if necessary. AFMC officials acknowledged the need for corrective action such as this and indicated that they plan to take it. New procedures for allocating SMAG’s operational costs to individual inventory items (calculating surcharges), combined with data reliability problems, have resulted in fiscal year 1998 price changes that have varied significantly not only from one inventory item to the next but also from one month to the next. As a result, the Air Force’s initial allocation of funding to SMAG customers left some with either too much or too little funding. Further, although Air Force headquarters can and has alleviated this problem by reallocating available funds, it lacks the reliable historical and budget execution data it needs in order to properly do so now and in the future. As discussed above, prior to fiscal year 1998, SMAG recouped its expected operational costs by applying a standard surcharge percentage to all wholesale items. 
The advantage of this approach is that it does not require reliable data on an individual supply activity’s operational costs or projected revenue—because all operational costs are aggregated and then allocated uniformly to all items at all supply activities. The disadvantage is that, under this approach, the operations of inefficient supply activities are, in essence, subsidized by more efficient activities. This, in turn, makes it difficult to identify inefficient operations and activities, and causes many customers to pay either more or less than they should for their inventory purchases. For example, if one ALC’s overhead costs were higher than the other four ALCs’, some of its overhead costs would be included in the prices charged by the other four ALCs even though they may operate more efficiently. On October 1, 1997, the Air Force made two major changes in SMAG’s cost allocation procedures in order to better match costs with the prices customers were being charged. First, under the new cost allocation procedures, SMAG will, where possible, identify the estimated costs associated with individual supply activities—the five ALCs—and allocate each ALC’s costs to only those items that it manages. Second, the estimated cost of procuring inventory items to replace repairable items that can no longer be repaired economically (condemned items) will be recouped by adding a surcharge to the cost of the item being replaced rather than by adding a standard surcharge to all repairable items, which was the previous practice. Air Force headquarters officials stated that the implementation of the new cost allocation procedures has led to increased awareness of costs and increased emphasis on accurately estimating both costs and sales revenue. For example, they told us that because SMAG’s operational costs are now allocated, where possible, directly to the individual ALCs that incur them, the ALCs are now much more aware of and concerned about these costs. 
Similarly, they noted that, because the ALCs’ overhead cost allocations and surcharge percentages are based largely on their projected sales revenues, there is also increased emphasis on accurately projecting individual ALC’s sales revenue. To effectively implement the new cost allocation procedures, SMAG needs reliable sales revenue and operational cost data for individual ALCs. However, it has neither—in part because it did not begin accumulating actual sales data for individual ALCs until fiscal year 1997. As a result, SMAG’s initial fiscal year 1998 prices, which became effective on October 1, 1997, were based on unreliable sales and operating cost data and, therefore, had to be revised, effective November 1. In addition, because some of the November 1, 1997 price changes were not processed properly by the ALCs’ automated systems, another price change had to be implemented for many items, effective December 1, 1997. Each of these price changes was based on a reallocation of sales revenue and/or operational costs among the five ALCs and was associated with a major change in the size of the surcharges added to individual items. For example, the surcharges used to establish exchange prices for Sacramento ALC-managed items ranged from a low of about 46.6 percent to a high of 287.1 percent. Table 3.2 shows how these changes affected the price of individual items. For example, customers paid $8,859 for an alternating generator on October 1. On November 1, customers paid $23,391—about 2.6 times as much as the price on October 1. On December 1, they paid $16,727. These large price changes distort SMAG customers’ budget execution data for fiscal year 1998 and make it difficult for the customers and those providing oversight over their operations to determine if an appropriate level of funding has been provided. 
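The surcharge mechanics behind these price swings can be sketched as follows: under the new procedures, an ALC's surcharge percentage is roughly its operational costs spread over its projected sales, and that surcharge is then added to an item's latest acquisition or repair cost to form the standard and exchange prices. All figures in this sketch are hypothetical, not actual ALC data.

```python
# Hypothetical sketch of ALC-level surcharges and item prices.
# An ALC's surcharge percentage: its operational costs spread over its
# projected sales; a small sales base forces a high surcharge.

def surcharge_pct(operational_cost, projected_sales):
    return operational_cost / projected_sales * 100

def standard_price(latest_acquisition_cost, pct):
    """Charged when no broken item is turned in."""
    return latest_acquisition_cost * (1 + pct / 100)

def exchange_price(latest_repair_cost, pct):
    """Charged when a broken repairable item (carcass) is turned in."""
    return latest_repair_cost * (1 + pct / 100)

# Two ALCs with similar costs but very different sales volumes (in millions).
busy_alc = surcharge_pct(operational_cost=300, projected_sales=1_200)   # 25.0
closing_alc = surcharge_pct(operational_cost=300, projected_sales=200)  # 150.0

# The same $10,000 item is far more expensive from the low-volume ALC.
print(standard_price(10_000, busy_alc))     # 12500.0
print(standard_price(10_000, closing_alc))  # 25000.0
print(exchange_price(4_000, busy_alc))      # 5000.0
```

This is why a center with a shrinking sales base, such as one approaching closure, must carry surcharges several times higher than its peers even if its operational costs are comparable.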
Although SMAG’s December 1, 1997, price change resulted in surcharges of at least 25 percent for all items at all ALCs, the surcharges are especially high for Sacramento ALC-managed items. This is primarily because Sacramento, which is scheduled to close in July 2001, has a much lower sales volume than the other ALCs and therefore must spread its operational costs over a smaller base. As shown in table 3.3, the surcharges added to Sacramento’s standard prices (132.3 percent) and exchange prices (176.8 percent) are at least three times higher than those of the other ALCs. Because the ALCs used a standard surcharge in fiscal year 1997, Sacramento’s substantially higher fiscal year 1998 surcharges will cause price increases for its items to be much higher than the SMAG average. As a result, customers that rely heavily on the Sacramento ALC for their support, such as those that operate communications-electronics and space systems, are the ones that are most likely to have received insufficient funding. For example, the Air Force Space Command used more than half of its fiscal year 1998 spares funding during the first quarter of the year, and Space Command officials believe that their units will be unable to acquire the parts they need unless Air Force headquarters provides additional funds or they can transfer funds from another program. Customers that purchase inventory from the other ALCs also expressed concern. For example, officials of the Air Combat Command—which purchases inventory items from all the ALCs—stated that the implementation of the new cost allocation procedures has caused them “tremendous concern.” They acknowledged that the numerous pricing changes that occurred during fiscal year 1998 make it virtually impossible for them to determine whether they will have sufficient funding to cover their needs during fiscal year 1998.
However, their analysis shows that they expect to experience funding shortages in most of their major weapons systems in fiscal year 1998 if additional funds are not provided. Because (1) SMAG’s new procedures for allocating operating cost to individual inventory items significantly impacted the fiscal year 1998 prices charged customers for individual items and (2) the overall impact varied significantly from one customer to the next, the Air Force does not have historical data on the amount of money needed by individual customers to purchase inventory. Air Force budget officials told us that it would take at least 1 to 2 years, perhaps even more, of actual experience to have sufficient data to reliably estimate individual customer needs. As a result, although the Air Force has already adjusted customer funding levels once, these officials acknowledged that they will have to continue to monitor budget execution data and to make further adjustments if necessary. To develop prices that will enable SMAG to operate on a break-even basis, the Air Force needs reliable information on (1) SMAG’s expected cost of goods sold and (2) the expected sales revenue and operational costs of the individual ALCs. However, the Air Force does not have this reliable information. It also does not have adequate procedures to ensure that customers receive sufficient funds to purchase required inventory items and, as a result, had to reduce SMAG’s prices halfway through fiscal year 1997 so that customers would not run out of money. Compounding this problem, the new cost allocation procedures and their implementation have resulted in three different sets of prices so far during fiscal year 1998. These substantial price fluctuations may cause customers to either purchase fewer inventory items than they planned or transfer funds from other accounts.
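One corrective step discussed in this chapter, applying proposed item prices to the best available estimate of customer orders and comparing the result with the approved composite price change, could be sketched as follows. The items, quantities, and tolerance are illustrative, not actual SMAG data.

```python
# Hypothetical check that item-level prices, weighted by expected orders,
# are consistent with the composite price change approved during the
# budget review process.

def composite_change(items):
    """items: list of (old_price, new_price, expected_units).
    Returns the aggregate percent price change implied by the new prices."""
    old_total = sum(old * qty for old, new, qty in items)
    new_total = sum(new * qty for old, new, qty in items)
    return (new_total - old_total) / old_total * 100

proposed = [
    (100.0, 110.0, 500),  # item A: +10 percent
    (200.0, 190.0, 300),  # item B: -5 percent
]
approved_pct = 3.0        # composite change approved during budget review
implied_pct = composite_change(proposed)  # about 1.8 percent

# Flag a mismatch so item prices can be adjusted before the fiscal year starts.
if abs(implied_pct - approved_pct) > 0.5:
    print(f"adjust prices: implied {implied_pct:.1f}% vs approved {approved_pct:.1f}%")
```

Running such a check before the start of the fiscal year, rather than waiting for budget execution data to reveal a mismatch, is the essence of the corrective action AFMC officials said they plan to take.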
As recommended in the previous chapter, the Air Force needs to develop an effective process for determining the cost of goods sold. In addition, we recommend that the Secretary of the Air Force assess the impact of price changes to determine whether customers can acquire the goods they need in fiscal year 1998 and take funding reallocation actions, as appropriate, to meet the highest priority needs; and direct the AFMC Commander to develop and implement procedures to ensure that the prices that are established for individual inventory items are consistent with the composite prices developed and approved by the Office of the Under Secretary of Defense (Comptroller) during the budget process. In commenting on this report, DOD concurred with our recommendation to assess the impact of price changes to determine whether customers can acquire the goods they need in fiscal year 1998 and take funding reallocation actions as appropriate. The Air Force has already begun the process of reallocating resources to customers to ensure program integrity (vital functions can be performed). DOD also concurred with our recommendation to develop and implement procedures to ensure that the prices that are established for individual inventory items are consistent with the composite prices developed and approved by the Office of the Under Secretary of Defense (Comptroller). The Air Force maintains one cash balance at the overall Air Force Working Capital Fund level and manages the fund’s cash at that level. To ensure that the fund maintains an adequate level of cash to pay its bills, it is essential that managers (1) accurately project cash collections and disbursements and (2) actively monitor the fund’s cash position by performing such analyses as comparing budget estimates for collections and disbursements to actual collections and disbursements and determining the reasons for the variances. 
DOD policy requires that if the level of cash becomes low and there is a possibility of incurring an Antideficiency Act violation, immediate actions be taken to resolve the cash shortage by advance billing customers for work not yet performed. Since June 1993, the Army, Navy, and Air Force Working Capital Funds have experienced cash shortages and have advance billed customers for work not yet performed to ensure that sufficient funds were available to meet day-to-day operating expenses. DOD initially expected the working capital funds to eliminate advance billing in fiscal year 1995. However, the Air Force Working Capital Fund has not achieved this goal and has continued the practice of advance billing customers. Since SMAG is the largest activity group in the Air Force Working Capital Fund, it is critical that SMAG properly manage its collections and disbursements. However, we found that SMAG did not accurately project cash (1) collections from sales and (2) disbursements for inventory items purchased from vendors. Further, SMAG was not adequately monitoring accounts receivable balances and outlay rates, which would have enabled it to identify the problem of inaccurately projecting collections and disbursements so that corrective actions could be taken to resolve the problem. Cash generated from the sale of goods and services is the primary means by which the working capital fund activities pay their bills. The funds’ starting cash position each year depends on the outcome of many decisions made during the budget process with regard to (1) projecting the volume of inventory items that will be sold, (2) estimating costs, and (3) setting prices to recover the estimated full cost of the goods and services. During the execution of the budget, the working capital funds operate much like a checking account: collections increase the fund’s cash balance, and disbursements (such as salaries and purchases of inventory) reduce the cash balance.
To the extent that the decisions, such as cost reduction initiatives, made during the budget process are reasonably accurate, the funds’ cash balances should fall between the minimum and maximum amount required by DOD. However, if the decisions are not accurate, the funds could have too much or not enough cash. According to DOD’s Financial Management Regulation, Volume 11B, the working capital funds are to maintain the minimum cash balance necessary to meet both operational requirements and disbursement requirements in support of the capital asset program. In essence, the funds are to maintain a minimum cash balance that is sufficient to cover expenses, such as paying employees for repairing aircraft and vendors for inventory items. DOD’s policy further requires the funds to maintain cash levels to cover 7 to 10 days of operational costs and 4 to 6 months of capital asset disbursements. To comply with DOD’s policy, the Air Force Working Capital Fund should maintain a cash balance between about $465 million and $670 million. If the Air Force Working Capital Fund’s level of cash drops below the minimum required balance and there is a possibility of incurring an Antideficiency Act violation, actions will be taken to resolve the cash shortage by advance billing customers. Within the working capital fund there are three major activity groups—depot maintenance, supply management, and information services—whose operations significantly impact the fund’s cash balance. Of these activity groups, SMAG is the largest, with 65 percent, or about $8.8 billion, of the total $13.5 billion in reported disbursements made by the Air Force Working Capital Fund in fiscal year 1997. We have previously reported that the Defense Working Capital Funds have had a long-standing cash management problem, including the practice of advance billing customers.
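DOD's cash-level rule lends itself to a simple calculation. In the sketch below, the daily operational cost and monthly capital disbursement figures are assumptions backed out for illustration only; with these assumed inputs, the rule reproduces the $465 million to $670 million range cited above.

```python
# DOD's cash-level rule: hold 7 to 10 days of operational costs plus
# 4 to 6 months of capital asset disbursements. Input figures below are
# assumed for illustration, not actual Air Force data.

def required_cash_range(daily_operational_cost, monthly_capital_disbursements):
    minimum = 7 * daily_operational_cost + 4 * monthly_capital_disbursements
    maximum = 10 * daily_operational_cost + 6 * monthly_capital_disbursements
    return minimum, maximum

lo, hi = required_cash_range(daily_operational_cost=55.0,        # $ millions
                             monthly_capital_disbursements=20.0)  # $ millions
print(lo, hi)  # 465.0 670.0
```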
Since 1993, the working capital funds have advance billed customers because they have not been able to generate enough cash to pay their bills. When the responsibility for managing cash was returned to the military services and DOD components in February 1995, the Air Force (as well as the Army and Navy) continued to advance bill customers so that its cash portion of the Defense Business Operations Fund would not have a negative balance. According to DOD budget documents, DOD anticipated that the working capital funds, including the Air Force Working Capital Fund, would be able to generate enough cash to eliminate advance billing in fiscal year 1995. This was to be achieved by (1) not replacing sold inventory on a one-for-one basis, (2) reducing costs, and (3) increasing prices for various reasons, such as to recoup prior year losses. When the fund failed to generate this cash, subsequent DOD budgets provided for an end to advance billing in fiscal year 1996, and again in fiscal year 1997. We found, however, that the Air Force Working Capital Fund did not achieve DOD’s goal of eliminating the routine practice of advance billing customers. The Air Force steadily reduced the Working Capital Fund’s outstanding advance billing balance from about $1.3 billion in February 1995 to $77 million in November 1996. At the same time, the Fund’s cash balance declined from $1.1 billion to $90 million. To ensure that its cash balance would remain positive, the Air Force Working Capital Fund advance billed customers over $1 billion in December 1996 and about $700 million in the June/July 1997 period. As of fiscal year end 1997, the Air Force Working Capital Fund’s reported outstanding advance billing balance was $464 million. Air Force officials told us that they now plan to eliminate the outstanding advance billing balance by the end of fiscal year 1999. 
The following figure shows the (1) reported cash balance for the Air Force Working Capital Fund and (2) cash balance if the Air Force Working Capital Fund had not advance billed its customers from February 1995 through September 1997. The Air Force recognizes that it has a cash shortage problem and added surcharges to generate cash totaling (1) $200 million to SMAG’s fiscal years 1998 and 1999 customers’ prices and (2) $75 million to the Air Force Depot Maintenance Activity Group’s fiscal years 1998 and 1999 prices. If these cash surcharges do not alleviate the problem, the Air Force may have to continue adding a surcharge to the prices to generate cash. Further, to improve cash management in the Air Force Working Capital Fund, the Air Force held a meeting in February 1998. Attending the meeting were officials from the Office of the Secretary of the Air Force, AFMC, DFAS, and various ALCs. The Air Force developed specific action items for the Depot Maintenance, Information Services, and Supply Management Activity Groups. According to Air Force officials, accounting system enhancements should result in better forecasting which will help the Air Force reduce the need for additional cash surcharges and advance billings. A follow-up meeting to discuss progress is scheduled for November 1998. To facilitate the cash management process, DOD policy requires that the working capital funds develop cash plans which include estimated collection and disbursement data. Being able to accurately project collections and disbursements is critical to the working capital funds’ ability to maintain an adequate level of cash to meet operational and capital requirements. DOD’s cash management policy further requires that projected collections and disbursements be monitored during execution in order to assess operational and financial problems and take the necessary actions to correct the problems. 
However, we found that SMAG did not accurately project cash (1) collections from foreign military sales (FMS) and (2) disbursements to be made to vendors for inventory items. In addition, SMAG managers did not adequately monitor account balance data on FMS collections and vendor disbursements. Our analysis of SMAG’s cash plans and reports shows that SMAG made a reported $237.3 million in cash from fiscal year 1992 through fiscal year 1997. Our analysis also shows that SMAG made $683.3 million less than projected in fiscal years 1996 and 1997. This is of particular concern since SMAG disbursed more money than it collected at the same time the Air Force Working Capital Fund was experiencing a cash shortage problem and was advance billing customers. From fiscal year 1993 through fiscal year 1996, AFMC did not accurately project cash collections from FMS. AFMC erroneously estimated FMS revenue based on charging FMS customers the standard price (acquisition cost of the item plus surcharges) for depot-level repairable inventory items rather than the exchange price (repair price of the item plus surcharges) that FMS customers actually paid. According to Air Force officials, AFMC budgeted FMS revenue and collections based on the standard price because it assumed that FMS customers would not be turning in broken inventory items (referred to as carcasses) in exchange for good, “useable” items. However, FMS customers turned in the carcasses. As a result, actual cash collections were about $429 million less than budgeted from fiscal year 1993 through fiscal year 1996. The difference between the standard price and exchange price that the FMS customer paid was recorded in a receivable account called “other assets accounts receivable—deliveries suspense.” As shown below, the amount steadily grew from fiscal year 1993 through fiscal year 1996.
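The effect of budgeting at the standard price while collecting at the exchange price can be illustrated with a simple calculation; the quantities and prices below are hypothetical, not actual FMS figures.

```python
# Hypothetical illustration of the FMS projection error: revenue was
# budgeted at the standard price for every item, but customers who turned
# in carcasses paid only the lower exchange price, so collections fell
# short of projections.

def fms_shortfall(units_turned_in, units_kept, standard_price, exchange_price):
    projected = (units_turned_in + units_kept) * standard_price
    collected = (units_turned_in * exchange_price
                 + units_kept * standard_price)
    return projected - collected

# 1,000 items sold: 900 customers turn in carcasses and pay the $5,000
# exchange price; 100 keep their broken items and pay the $12,000
# standard price.
print(fms_shortfall(900, 100, 12_000, 5_000))  # 6300000
```

The shortfall is simply the number of carcasses turned in times the gap between the two prices, which is the amount that accumulated in the deliveries-suspense receivable account.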
Because AFMC personnel were not adequately monitoring account balance information, they did not realize that FMS customers were turning in broken items until SMAG started to experience cash problems in early fiscal year 1996. This was soon after the Air Force received cash management responsibilities for the working capital fund. Until that time, AFMC managers were not monitoring the account’s balances because, from an overall cash position, there were no adverse issues regarding SMAG’s cash. AFMC has begun to resolve this problem. Beginning in fiscal year 1997, revenue from FMS customers was budgeted at the exchange price. From a cash management standpoint, when SMAG orders inventory items from vendors, it is critical that the Air Force accurately project the timing of delivery since SMAG pays the vendors based on the delivery of the inventory items. The time period used for projecting outlays depends on the type of inventory items being purchased and can cover several years starting from the time the items are ordered from vendors. AFMC officials assumed that vendor deliveries, and thus cash outlays, would be greater in the earlier years of the delivery period and less in the later years. Accordingly, AFMC projected its cash outlays to fit this delivery pattern. However, Air Force officials stated that AFMC’s projected outlays have not materialized as expected. Cash outlays in the early years were significantly less than expected, while outlays in the later years were more than expected. According to Air Force supply management officials, until recently, outlay rates for the Reparable Support Division inventory buys were not being updated each year to better reflect vendor delivery patterns. The following table illustrates the shift in projected outlay rates over a 5-year outlay period. 
It also shows the (1) outlay rates that have been used over the past several years and (2) revised outlay rates—based on the Air Force’s analysis of current outlay patterns—used in developing the Air Force’s fiscal year 1999 budget estimate submission for SMAG. AFMC managers did not realize that there was a problem with projected cash outlays until SMAG started to experience cash problems in early fiscal year 1996. According to AFMC officials, outlay rates were not monitored because there were no adverse issues regarding SMAG’s cash. AFMC officials told us that they do not have the basic information for projecting cash outlays, such as item managers’ delivery projections or good historical data, by fiscal year, on vendor contracts. DOD’s September 1997 report also acknowledged that managers have neither the necessary information nor an automated cash model to assist them in predicting required cash levels, forecasting cash positions, or predicting end-of-period cash positions on a weekly, monthly, or annual basis. AFMC officials acknowledged that they need better management tools for projecting cash outlays for all types of SMAG outlays. They noted that projecting outlay rates is relatively easy for obligations and disbursements that occur within the same year (such as paying salaries or accounting services provided by DFAS). However, this process becomes much more complicated for disbursements that occur several years after the obligation (such as purchases of repairable inventory items from vendors), and thus there is a critical need for better information and tools that can guide fund managers. Recognizing this, AFMC contracted with a major public accounting firm to develop a cash forecasting model. The Air Force Working Capital Fund depends on its activity groups to effectively manage their cash in order for the fund to have enough cash to pay for day-to-day operating expenses.
However, the activity groups have not generated sufficient cash to eliminate the practice of advance billing customers. With regard to SMAG, this activity group has made less cash than estimated from fiscal year 1993 through fiscal year 1997. This problem will undoubtedly persist until SMAG (1) develops and uses management tools, such as cash forecasting models, to project the amount of collections to be received and disbursements to be made in future years and (2) emphasizes the need to monitor account balances and takes the steps needed to identify and correct problems. We recommend that the Secretary of the Air Force direct the Commander, Air Force Materiel Command, to ensure that the development of the cash forecasting model includes the capabilities to forecast (1) required cash levels, (2) end-of-period cash positions on a weekly, monthly, or annual basis, (3) disbursements to be made in future years based on when vendors are scheduled to deliver items to SMAG and the prices charged by the vendors, and (4) receipts based on SMAG’s sales; and to monitor accounts receivable balances and cash outlay rates to identify anomalies and their causes so that corrective actions can be taken. In its comments, DOD concurred with our recommendation to ensure that the development of the cash forecasting model includes the capabilities to forecast (1) required cash levels, (2) cash balances, (3) disbursements, and (4) receipts based on sales. DOD also concurred with our recommendation to monitor accounts receivable balances and cash outlay rates to identify anomalies and their causes so that corrective actions can be taken.
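As an illustration of the capabilities the recommendation calls for, a cash forecasting model could spread each year's obligations into future-year disbursements using outlay rates and then project end-of-period balances from expected receipts. The obligations, outlay rates, and receipts below are hypothetical, not Air Force data.

```python
# Minimal sketch of a cash forecasting model: spread obligations into
# future disbursements with outlay rates, then project period-end cash.
# All figures and rates are hypothetical.

def project_disbursements(obligations_by_year, outlay_rates):
    """obligations_by_year: amount obligated in each year.
    outlay_rates: fraction of an obligation disbursed 0, 1, 2, ... years
    later; the rates should sum to 1.0 over the delivery period."""
    horizon = len(obligations_by_year) + len(outlay_rates) - 1
    disbursements = [0.0] * horizon
    for year, obligation in enumerate(obligations_by_year):
        for lag, rate in enumerate(outlay_rates):
            disbursements[year + lag] += obligation * rate
    return disbursements

def forecast_cash(opening_balance, receipts, disbursements):
    """Projected end-of-period cash balance for each period."""
    balances, cash = [], opening_balance
    for r, d in zip(receipts, disbursements):
        cash += r - d
        balances.append(cash)
    return balances

# Two years of obligations paid out over three years (50/25/25 percent).
outlays = project_disbursements([1_000, 1_200], [0.50, 0.25, 0.25])
print(outlays)  # [500.0, 850.0, 550.0, 300.0]
print(forecast_cash(500, [600, 700, 650, 300], outlays))
```

A model of this shape makes the two failure modes described in this chapter visible in advance: outlay rates that do not match actual vendor delivery patterns show up as disbursement forecast errors, and receipts budgeted at the wrong price show up as collection shortfalls.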
Pursuant to a congressional request, GAO reviewed the financial operations of the Department of Defense (DOD) Working Capital Fund, focusing on the: (1) accuracy and consistency of the Air Force supply management activity group's (SMAG) accounting and budgetary reports; (2) group's price-setting process; and (3) Air Force Working Capital Fund's cash management practices. GAO noted that: (1) the Air Force has had difficulties producing reliable financial information on the supply management activity group's operations, setting prices for inventory the group sells to customers, and generating sufficient cash to help discontinue the Air Force Working Capital Fund's practice of advance billing its customers since 1993; (2) these weaknesses impair the Air Force's ability to: (a) ensure that customers can purchase inventory items when needed; and (b) achieve the goals of the working capital funds, which are to focus management attention on the full costs of carrying out operations and to manage those costs effectively; (3) at the core of many of the supply management activity group's financial management weaknesses is its inability to produce reliable information on its cost of goods sold and net operating results; (4) this financial information is critical since the activity group must set prices that reflect the expected costs of providing inventory items to customers; (5) if these data are inaccurate, the activity group's prices may not cover its full cost of operations or generate enough cash to pay its bills, which has been the case in recent years; (6) until the Air Force can: (a) develop accurate information on the supply management activity group's net operating results and cost of goods sold; (b) use this information to develop an effective price-setting process; and (c) acquire and use management tools for projecting cash outlays--its customers will remain susceptible to wide price fluctuations and a corresponding depletion of funds; (7) further, the Air Force Working Capital Fund will have to continue to advance bill customers so that it has enough cash to pay its bills; and (8) finally, senior managers and those responsible for providing oversight will continue to lack the information they need to make informed decisions on Air Force supply operations.
Medicare, Medicaid, and CHIP beneficiaries receive health care from a variety of providers and in different settings. When suspected cases of fraud emerge, many agencies are involved in investigating and prosecuting those cases, and they rely on multiple statutes. Medicare, Medicaid, and CHIP beneficiaries receive health care from a variety of providers—including physicians, nurses, dentists, and other medical professionals—in many different settings, such as hospitals, medical practices, clinics, and health centers. Additionally, beneficiaries may receive care and assistance from home health agencies and aides, durable medical equipment suppliers, and medical transportation companies. In 2010, about $478 billion in federal Medicare, Medicaid, and CHIP spending was attributable to hospital care (41.3 percent of total spending) and physician and clinical services (18.3 percent of total spending), based on National Health Expenditure Account data from CMS. Expenditures for prescription drugs accounted for 9.3 percent of spending in these programs, and nursing home care accounted for 7.8 percent. Many other categories of providers accounted for the remaining 23.4 percent. Several agencies are involved in investigating and prosecuting health care fraud cases, including the HHS-OIG; DOJ’s Civil and Criminal divisions; the 94 USAOs; the FBI; and state MFCUs. The HHS-OIG and FBI primarily conduct investigations of health care fraud, and DOJ’s divisions typically prosecute or litigate those cases. Additional information about the role of each agency in fraud investigations and prosecutions appears later in this report. These agencies often work together to investigate and prosecute health care fraud cases. For example, HHS-OIG may open a fraud case, work with the FBI during the investigation, and then refer the case to a USAO for prosecution. Additionally, HHS-OIG, the FBI, a USAO, and DOJ’s Criminal Division work jointly on health care fraud cases handled by Medicare Strike Force teams.
Health care fraud cases are opened by the agencies either when they receive information about suspected fraudulent activity from a source—which can include program beneficiaries and CMS and its contractors—or when they proactively identify possible fraudulent behavior through data analysis. Additionally, in civil cases known as qui tam cases, individuals—referred to as relators—with evidence of fraud can file a civil suit under the False Claims Act (FCA). These qui tam cases are handled by a USAO or DOJ’s Civil Division, though they may receive assistance in the investigation from HHS-OIG or the FBI. In other fraud cases, if a case is opened by HHS-OIG, the agency typically conducts its investigation, determines whether the case has merit, and refers the case to DOJ for criminal prosecution or civil litigation. Alternatively, HHS-OIG may find that the case does not have merit and may close the case. HHS-OIG also has authority to impose civil monetary penalties or exclude the provider from participating in federal health care programs. Similarly, DOJ’s divisions may choose not to pursue a fraud case for a number of reasons, including a lack of evidence or insufficient evidence to support the charges, or a lack of resources for investigation or prosecution. MFCUs investigate and typically prosecute health care fraud cases in the state’s Medicaid program under state laws, and frequently coordinate with HHS-OIG and DOJ on the investigation and prosecution of certain fraud cases. Many MFCUs have authority to prosecute cases of fraud, but not all MFCUs are able to do so; those that cannot refer cases to other agencies for prosecution. For example, Texas’ MFCU does not have the authority to prosecute cases and refers them to another agency or office, such as the U.S. Attorney’s Office or the state’s District Attorney, for prosecution.
Several statutes concern health care fraud, including the following:

Civil monetary penalty provisions of the Social Security Act are applicable to certain enumerated activities, such as knowingly presenting a claim for medical services that is known to be false and fraudulent. The Social Security Act also provides for criminal penalties for knowing and willful false statements in applications for payment. In addition, providers may be excluded on a mandatory or permissive basis from participating in federal health care programs for engaging in certain fraudulent activities.

The Anti-Kickback statute makes it a criminal offense for anyone to knowingly and willfully solicit, receive, offer, or pay any remuneration in return for or to induce referrals of items or services reimbursable under a federal health care program, subject to statutory exceptions and regulatory safe harbors. For example, a payment program under which a hospital paid physicians who referred patients for admission would implicate the anti-kickback statute.

The Stark law and its implementing regulations prohibit physicians from making “self-referrals”—certain referrals for “designated health services” paid for by Medicare to entities with which the physician (or immediate family) has a financial relationship. The Stark law also prohibits the entities that perform the “designated health services” from presenting claims to Medicare or billing for these services.

The Federal Food, Drug, and Cosmetic Act makes it unlawful to, among other things, introduce an adulterated or misbranded pharmaceutical product or device into interstate commerce.

The False Claims Act (FCA) is often used by the federal government in health care fraud cases. The FCA prohibits certain actions, including the knowing presentation of a false claim for payment by the federal government. Claims that are submitted in violation of certain other statutes may also be considered false claims and, as a result, create additional liability under the FCA.
Many health care fraud cases pursued under the FCA are for billing for goods or services not rendered, billing for unnecessary health care goods or services, or billing for goods or services at a higher rate than those actually provided. Under the FCA, civil cases can be brought by the U.S. government or by a private citizen. The outcome of a fraud case can depend on whether the case is civil or criminal and, if the case is prosecuted or litigated, on the penalties authorized under the relevant statutes. For example, civil cases that are litigated may result in judgments imposed by a court or settlements reached by the subjects and litigators of the fraud case. In criminal cases, outcomes can include incarceration, probation, and fines. HHS-OIG may also impose civil monetary penalties on providers for committing fraud, and may exclude providers from participating in federal health care programs. In some cases, a subject may receive both civil and criminal penalties and be excluded. According to 2010 data, 10,187 subjects were investigated for health care fraud. Medical facilities (such as medical centers, clinics, and medical practices) and durable medical equipment suppliers were the most frequent subjects of criminal fraud cases in 2010. Hospitals and medical facilities were the most frequent subjects of civil fraud cases, including cases that resulted in judgments or settlements. Nearly 2,200 individuals were excluded from program participation by HHS-OIG, about 60 percent of whom were in the nursing profession. Of the 10,187 subjects investigated for health care fraud in 2010, 7,848 were subjects of criminal fraud cases and 2,339 were subjects of civil fraud cases. Data from 2010 show that HHS-OIG investigated health care fraud cases for nearly 8,900 subjects, many more than were opened by the USAOs and DOJ’s Civil Division.
Table 2 contains information on health care fraud subjects by agency, reflecting the work of each agency in 2010. To fully reflect the work of each agency, data on subjects that were included in more than one agency database were included in the top portion of the table. The duplicate cases were removed to arrive at the unique count of subjects and were not included in our other analyses. Data comparing cases handled in 2005 and 2010 show that HHS-OIG investigated cases for nearly 2,800 more subjects in 2010 than it did in 2005, while the USAOs and DOJ’s Civil Division handled cases for approximately the same number of subjects. According to 2010 HHS-OIG data, most of the subjects involved in fraud cases were referred to HHS-OIG by federal law enforcement agencies—such as the FBI—(38 percent), or state or local law enforcement agencies (10 percent). Case subjects were also referred to HHS-OIG by CMS contractors tasked with program integrity (14 percent), current or former employees of providers (9 percent), or individuals (9 percent), and the remainder were from other sources. (See table 3 for additional information on the source of health care fraud cases referred to HHS-OIG.) About 49 percent of criminal health care fraud subjects were, or were affiliated with, medical facilities (such as medical practices, clinics, or centers), durable medical equipment suppliers, and home health agencies. Of the 7,848 subjects associated with criminal cases, about 1,100 were charged, and 85 percent of those charged were found guilty or pled guilty or no contest. Of those subjects who were found guilty or pled guilty or no contest, about 37 percent were medical facilities and durable medical equipment suppliers. According to 2010 data, many different types of providers—including medical facilities and hospitals, or individuals affiliated with these entities—were suspected of health care fraud.
Specifically, about one-quarter of subjects investigated in criminal health care fraud cases were medical facilities or were affiliated with these facilities. Additionally, about 16 percent of subjects were durable medical equipment suppliers. Over 19 percent were subjects for which we could not determine an affiliation. See table 4 for additional information on the subjects of criminal health care fraud cases by provider type for 2010. Among the 7,848 subjects in 2010 criminal cases, nearly 50 percent were the entities themselves, rather than individuals affiliated with those entities. See table 5 for more detailed information on the types of providers that were subjects in 2010 criminal cases. Of the 3,864 subjects that were entities, most were durable medical equipment suppliers (819), home health agencies (507), medical centers or clinics (506), or medical practices (486). Additionally, more than 15 percent were physicians, and about 14 percent were management employees—such as owners, operators, or managers. Our data show that 2010 criminal cases involved 2,300 more subjects than 2005 cases. Additionally, some provider types had particularly large increases in 2010 compared to the number of subjects investigated in criminal cases in 2005. For example, cases where pharmacies were the subjects increased from 99 subjects in 2005 to 321 in 2010 (an increase of 224 percent), and the number of home health agency subjects increased from 284 to 639 (an increase of 125 percent). The 2005 data show that medical facilities and durable medical equipment suppliers were the provider types with the most subjects investigated in cases, as was also the case with 2010 data. In 2005, medical facilities represented 23 percent of all subjects in criminal cases, and durable medical equipment suppliers accounted for 18 percent.
Similarly, in 2010, medical facilities accounted for 24 percent of all subjects in criminal cases, and durable medical equipment suppliers accounted for 16 percent. Most of the 7,848 subjects who were investigated for criminal fraud in 2010 were not pursued—meaning that HHS-OIG did not refer the subject’s case to DOJ for prosecution. Most subjects—about 85 percent—were investigated in criminal cases that were not pursued for a variety of reasons, mainly a lack of resources or insufficient evidence. The 2010 data indicated that 1,086 subjects were charged in criminal fraud cases, which represented about 14 percent of all criminal case subjects. Additionally, nearly 1 percent of subjects were involved in criminal case appeals, most of which were decided favorably for the U.S. government. See table 6 for additional information about the number of subjects in criminal cases by outcome. Among the 1,086 subjects who were charged, over 85 percent (925 subjects) were found guilty, pled guilty, or pled no contest to some or all of the criminal charges against them. For the remaining 15 percent of subjects, the charges were dismissed (9.4 percent), the subjects were found not guilty (1.2 percent), or the case had another outcome (4.2 percent). Of the 925 subjects who were found guilty or pled guilty or no contest, about 19 percent were from medical facilities—including medical centers, clinics, or practices. Although 2010 Medicare, Medicaid, and CHIP expenditures on durable medical equipment services were 1.3 percent of total spending in those programs, approximately 19 percent of subjects who were found guilty or pled guilty or no contest were durable medical equipment suppliers. Many different provider types were among the remaining subjects found guilty or that pled guilty or no contest. We could not identify the affiliation of nearly one-third of the subjects, including both health care providers and individuals.
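The outcome percentages for 2010 criminal cases follow directly from the reported counts; as a quick arithmetic check:

```python
# Figures reported in the text for 2010 criminal cases
total_subjects = 7848  # subjects investigated in criminal cases
charged = 1086         # subjects charged
guilty = 925           # found guilty or pled guilty or no contest

pct_charged = charged / total_subjects * 100  # "about 14 percent" of all subjects
pct_guilty = guilty / charged * 100           # "over 85 percent" of those charged
```

The computed values, about 13.8 percent and 85.2 percent, match the rounded figures in the text.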
See table 7 for additional information on these subjects in 2010 criminal cases by provider type. Of the 925 subjects who were found guilty or pled guilty or no contest, 60 percent were sentenced to incarceration, and 73 percent were sentenced to probation. Nearly 26 percent of those sentenced to incarceration were subjects affiliated with durable medical equipment suppliers, and 21 percent were affiliated with medical facilities. Similarly, durable medical equipment suppliers and medical facilities each represented 17 percent of subjects sentenced to probation. The average length of a sentence to incarceration was about 3.5 years, and the maximum sentence received was a life sentence. Nearly 60 percent of subjects sentenced to incarceration received sentences between 2 and 5 years, while nearly 21 percent received a term of 1 year or less. More than 13 percent received sentences between 6 and 10 years, and about 5 percent received sentences of more than 10 years of incarceration. The average probation term was 2.8 years, and the maximum term was 10 years. Nearly 78 percent of subjects sentenced to probation received a probation term between 2 and 5 years. Subjects of criminal fraud cases could also be sentenced to home detention or public service, or their sentences could be suspended. Additionally, subjects could be ordered to pay fines and restitution. Data from HHS-OIG contained information on these types of penalties, but data we received from the USAOs did not. According to 2010 data from HHS-OIG, 56 subjects were sentenced to home detention terms; 75 subjects were sentenced to complete public service; 31 subjects received suspended sentences; 440 subjects were required to pay a fine; and 307 subjects were required to pay restitution. Among those subjects required to pay fines or restitution, or both, the average amounts required were $898,361 in fines and $1.8 million in restitution.
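The subject counts and per-subject averages imply a combined fines-and-restitution total. A rough reconciliation follows; because the reported averages are rounded (particularly the $1.8 million restitution average), the implied sum is approximate:

```python
# Counts and (rounded) averages reported in the text
fined_subjects = 440
avg_fine = 898_361            # average fine per subject required to pay fines
restitution_subjects = 307
avg_restitution = 1_800_000   # rounded average restitution per subject

implied_total = fined_subjects * avg_fine + restitution_subjects * avg_restitution
# Roughly $948 million; the small gap from the reported combined total
# reflects rounding in the published averages.
```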
In total, subjects were ordered to pay nearly $960 million in combined fines and restitution. According to 2010 civil case data for health care fraud, 2,339 subjects were investigated in civil cases. Hospitals represented nearly 20 percent of these subjects, and medical facilities about 18 percent. Civil cases involving approximately 1,100 subjects were pursued—meaning that the USAOs or DOJ’s Civil Division received the cases and took some sort of action, such as litigating the case; and of those, 55 percent resulted in a judgment for the government or in a settlement. For those cases that resulted in a judgment or settlement, or both, about 44 percent of the subjects were hospitals and medical facilities. According to 2010 data, hospitals were nearly 20 percent of the subjects of civil fraud cases, and medical facilities were also frequently the subjects of civil cases, making up about 18 percent of the subjects. We were unable to determine the provider type or their affiliation for about 18 percent of the subjects of civil cases. (See table 8 for additional information on the subjects of civil health care fraud cases by provider type for 2010.) As previously mentioned, individuals can bring civil health care fraud suits, known as qui tam cases, under the FCA. According to 2010 data from the USAOs and DOJ’s Civil Division, 88 percent of subjects investigated in civil cases were investigated in qui tam cases. Nearly 61 percent of the subjects investigated in 2010 civil cases were entities themselves, rather than individuals affiliated with those entities. Most of these entities were hospitals, medical centers or clinics, medical practices, or pharmaceutical manufacturers or suppliers. Additionally, physicians represented 12 percent of the subjects; and management employees, such as owners, operators, or managers, represented 8 percent of the civil case subjects. 
(See table 9 for more-detailed information on the types of providers that were subjects in 2010 civil cases.) In 2010, over 600 more subjects were investigated in civil cases than in 2005, about a 35 percent total increase. Changes in provider types for civil cases are not reported here because we were unable to identify provider types for about 31 percent of the subjects in the 2005 data. In the 2010 data, we were unable to identify the provider type for about 18 percent of subjects. Because of this limitation, the percentage increases in certain provider types investigated in civil fraud cases may not accurately reflect the actual increases. Not all of the subjects investigated in 2010 civil cases were pursued—meaning that the USAOs or DOJ’s Civil Division received the case and took some sort of action. According to the data we received, 1,087 subjects were involved in civil cases that were pursued, representing nearly 47 percent of all civil case subjects. More than 53 percent of civil case subjects were not pursued for numerous reasons, including a lack of resources or insufficient evidence. Additionally, less than 1 percent of subjects were involved in civil appeals cases. (See table 10 for additional information about the number of subjects involved in civil cases by outcome.) According to data from the USAOs and DOJ’s Civil Division, most qui tam cases did not result in a judgment or settlement. For example, 52 percent of subjects in qui tam cases were either voluntarily dismissed by the relator (34 percent) or declined by the USAOs or DOJ’s Civil Division (18 percent). Nearly 24 percent of qui tam cases were settled, and in 8 percent of qui tam cases there was a judgment for the government. For the 602 subjects whose cases resulted in a settlement or judgment for the government or for the relator, 27 percent of the subjects were hospitals and about 17 percent were medical facilities.
For nearly 16 percent of subjects, we were unable to determine the affiliation of the provider or individual. (See table 11 for information on provider types for subjects where the case resulted in a settlement or judgment for the government or relator.) According to data from HHS-OIG, of those subjects investigated in cases with a judgment or settlement, 275 subjects were to pay restitution as a result of the judgment or settlement, and 89 subjects were to pay fines. Approximately 38 percent of the subjects that were to pay restitution were hospitals; 17 percent were medical facilities; and 11 percent were physicians whose affiliation we were unable to determine. Among those subjects that were to pay fines or restitution, or both, the average amounts were about $7.1 million in fines and about $5.4 million in restitution. In total, subjects were to pay over $2.1 billion in combined fines and restitution as a result of the judgments or settlements. HHS-OIG excluded individuals and entities from participating in federal health care programs for a variety of reasons in 2010. These reasons included convictions for health care fraud as well as reasons other than health care fraud, such as patient abuse or neglect. When individuals or entities are excluded, their provider enrollment is revoked and they are not eligible to bill for services provided. According to 2010 exclusion data we received from HHS-OIG, 2,190 individuals and entities were excluded. About 60 percent of the individuals and entities excluded were in the nursing profession, such as nurses and nurses’ aides. The next-largest provider type excluded was pharmacies or individuals affiliated with pharmacies, though they represented only about 7 percent of the 2010 exclusions. (See table 12 for additional information on the types of providers excluded.)
There were a number of reasons why the 2,190 individuals and entities were excluded: about 42 percent were excluded for license revocation, suspension, or surrender; over 28 percent for program-related convictions; and about 10 percent for felony health care fraud convictions. Most of those excluded because of revoked, suspended, or surrendered licenses were in the nursing profession. (See table 13 for additional information on the reasons for excluding individuals in 2010.) Data we received from 10 state MFCUs show that more than 40 percent of the fraud subjects were home health care providers and health care practitioners. Home health care providers also accounted for nearly 40 percent of criminal convictions and about 45 percent of subjects sentenced in 2010. In 2010, pharmaceutical manufacturers were to pay more than 60 percent of the total amount of civil judgments and settlements. Of the 2,742 subjects of health care fraud in Medicaid and CHIP referred to MFCUs for investigation, more than 40 percent were affiliated with two provider categories: home health care providers (26.6 percent) and health care practitioners (14.8 percent). Home health care providers and pharmaceutical manufacturers are the two provider categories that experienced the largest increases between 2005 and 2010. For example, the number of home health care providers suspected of fraud increased significantly from 2005 to 2010, from 357 subjects to 730, a 104 percent increase. This was primarily driven by an increase in fraud cases among health care aides, which increased from 79 subjects in 2005 to 324 in 2010. Similarly, the number of pharmaceutical manufacturers in fraud cases increased significantly, from 71 in 2005 to 296 in 2010. (See table 14, below, for additional information on provider types referred to MFCUs in fraud investigations.)
Over half of the MFCUs’ subjects of fraud cases in 2010 were referred by the states’ Medicaid agencies (30.9 percent) and private citizens (25.1 percent). MFCUs do not pursue all cases of health care fraud that are referred to them. In 2010, 692 subjects were indicted or charged in criminal health care fraud cases handled by the 10 MFCUs; of those, nearly 40 percent were home health care providers—which include home health care agencies and home health care aides. Home health care providers also accounted for nearly 40 percent of criminal fraud convictions in 2010; health care practitioners—physicians, doctors of osteopathy, nurses, physician assistants, and nurse practitioners—had the second-highest percentage of criminal convictions in 2010, with approximately 16 percent. The number of home health care providers convicted in criminal cases more than doubled, from 79 convictions in 2005 to 192 convictions in 2010, and health care practitioners had an increase of 11 convictions compared to 2005. (See table 15 for additional information about criminal case outcomes and prosecutions of subjects by provider type for cases handled by 10 MFCUs.) According to 2010 data for cases handled by the 10 MFCUs, home health care providers had the largest number of subjects sentenced to incarceration, probation, or other criminal case outcomes, accounting for nearly 45 percent of the total number of subjects. Durable medical equipment suppliers accounted for the largest monetary penalties, yet had relatively few subjects sentenced to incarceration, probation, or other criminal case outcomes, such as deferred sentences. Of all of the subjects sentenced, 42 percent were sentenced to probation, 32 percent were sentenced to incarceration, and 26 percent received other criminal case outcomes. GAO provided a draft of the report to DOJ and HHS. DOJ provided technical comments, which have been incorporated as appropriate. HHS did not comment on the draft.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Health and Human Services and Justice, the Inspector General of HHS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To identify subjects of health care fraud cases in Medicare, Medicaid, and the Children’s Health Insurance Program (CHIP)—including referrals, investigations, prosecutions, and outcomes—by provider type, and to examine changes in the distribution of provider types in 2005 and 2010, we obtained data on health care fraud cases from the Department of Health and Human Services’ Office of Inspector General (HHS-OIG), the Department of Justice’s (DOJ) Executive Office of U.S. Attorneys (EOUSA)—which provides administrative support for the 94 U.S. Attorney’s Offices (USAO)—and DOJ’s Civil Division. We obtained data on fraud cases involving Medicare, Medicaid, and CHIP that were closed in calendar year 2005 or 2010. We collected data for closed cases only— meaning that the agencies were no longer actively investigating or prosecuting a case—to avoid concerns about analyzing or reporting information about open cases. We obtained data from HHS-OIG’s Investigative Reporting and Information System, which contains information on health care fraud cases received or investigated by HHS-OIG. 
The data we received contained information on civil and criminal health care fraud cases closed in calendar year 2005 or 2010, as well as exclusions from program participation. The HHS-OIG data included information about the subjects, the sources of the cases, the outcomes of the investigations and prosecutions (if the cases were pursued), and the reasons for which the cases were closed (such as lack of evidence). The data we received from HHS-OIG also contained information on the provider types of the subjects. Additionally, we obtained data from two divisions within DOJ—EOUSA and the Civil Division. The data we received from EOUSA were from the Legal Information Office Network System and contained information about the subjects of the fraud cases, the outcomes of the prosecutions, and the reasons for which the cases were closed. Provider type is not a required field in the USAOs’ database; consequently, the USAOs do not consistently have provider type information. DOJ’s Civil Division provided us data from the CASES database. The data received contained information about the subjects, the outcomes of the fraud cases, and the reasons the cases were closed. DOJ’s Civil Division does not collect information on the subject’s provider type. The data we received from HHS-OIG pertained only to health care fraud in Medicare, Medicaid, and CHIP; however, the data we received from the USAOs and DOJ’s Civil Division may have also included other federal health care program fraud as well as fraud in the private sector, as the databases used to track fraud cases do not capture fraud exclusively in Medicare, Medicaid, and CHIP. Many fraud cases are handled jointly by HHS-OIG, the USAOs, and DOJ’s Civil Division, and are entered separately into each agency’s database that tracks fraud cases. As a result, the data we received contain duplicate information on health care fraud cases and subjects.
In order to minimize the duplication across the data we received, we identified fraud case subjects that were in more than one data set by comparing subject information to the extent possible. We then excluded the duplicate data that we identified so that each subject was only included once. However, it is possible that our analysis still includes some duplication in fraud cases and subjects. For cases and subjects that we identified as a match, we used the information in the HHS-OIG data instead of the USAO data or DOJ’s Civil Division data because the HHS-OIG data contained information on the subject’s provider type. Among the data involving criminal cases, we identified 590 subjects—291 subjects in the 2005 data and 299 subjects in the 2010 data—that were matches between the HHS-OIG data and the USAO data. For civil case data, we identified 423 subjects—166 subjects in the 2005 data and 257 subjects in the 2010 data—that were matches between data we received from HHS-OIG, the USAOs, or DOJ’s Civil Division. We removed the duplicate subjects we identified from parts of our analysis. In the USAO and Civil Division data, there were 2,470 subjects—1,484 of which were investigated in civil cases, and 986 in criminal cases—for which we did not identify a duplicate case in the HHS-OIG data and for which the data did not contain information on the provider type. To identify the type of provider for these subjects, we obtained information from court records using the Public Access to Court Electronic Records (PACER) system. We reviewed court documents, such as indictments and plea agreements, to obtain information on the subject’s provider type. We reviewed the information we found using PACER and categorized it into one of the provider categories in our analysis.
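The deduplication step described above can be sketched as follows: match subject records across agency data sets on whatever identifying fields they share, and when a match is found, keep the HHS-OIG version because it carries provider type. The field names and matching keys below are hypothetical; the actual agency databases share different identifying information.

```python
def dedupe_subjects(hhs_oig_records, doj_records, key_fields=("name", "year")):
    """Combine subject records from multiple agency data sets so each subject
    appears once, preferring the HHS-OIG record for any subject found in
    more than one data set (it includes provider type)."""
    def key(record):
        return tuple(record[field] for field in key_fields)

    merged = {key(r): r for r in doj_records}
    for r in hhs_oig_records:       # HHS-OIG records overwrite DOJ duplicates
        merged[key(r)] = r
    return list(merged.values())

# Hypothetical records for illustration
hhs_oig = [{"name": "Acme DME", "year": 2010, "provider_type": "DME supplier"}]
doj = [{"name": "Acme DME", "year": 2010},
       {"name": "City Hospital", "year": 2010}]
unique = dedupe_subjects(hhs_oig, doj)  # 2 unique subjects; the Acme record
                                        # retains its provider type
```

In practice, as the text notes, the matching is only as good as the shared fields, so some duplication can survive this step.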
However, our analysis of the changes in the types of providers in 2005 and 2010 is limited because the percentage of subjects for which we were unable to determine the provider type was substantially higher in 2005 for civil case subjects. One of the reasons we could not determine the provider type was that many of the court records for 2005 were not available in PACER. After we identified the provider types for the data we received from the USAOs and DOJ’s Civil Division, and after reviewing the data on provider types in the HHS-OIG data, we created categories of providers in order to analyze the data. We assigned each subject two categories: the entity in which health care was provided, and the subject’s role in providing care (if care was provided). For example, an owner of a durable medical equipment supply company was categorized into an entity (durable medical equipment supplier) and a role (management employee); a physician employed by a hospital would be categorized as hospital for the entity and physician for the role. Table 18 provides additional details about the categories we developed for our analysis. To assess the reliability of the data we received from HHS-OIG, the USAOs, and DOJ’s Civil Division, we interviewed officials from each of those agencies about the quality of the data, reviewed relevant documentation, and examined the data for reasonableness and internal consistency. We found these data were sufficiently reliable for the purposes of our report. To identify subjects of Medicaid and Children’s Health Insurance Program (CHIP) fraud cases investigated or prosecuted, or both, by Medicaid Fraud Control Units (MFCU) by provider type, and to examine changes in the distribution of provider types investigated and prosecuted for fraud in 2005 and 2010, we collected data from 10 state MFCUs.
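The two-part categorization, an entity plus a role, can be sketched as a keyword mapping applied to free-text subject descriptions of the kind found in court records. The keyword lists and the `categorize` function below are illustrative only; the actual category definitions appear in table 18.

```python
# Illustrative keyword lists; not the report's full category scheme
ENTITY_KEYWORDS = {
    "durable medical equipment": "durable medical equipment supplier",
    "home health": "home health agency",
    "clinic": "medical center or clinic",
    "hospital": "hospital",
}
ROLE_KEYWORDS = {
    "owner": "management employee",
    "operator": "management employee",
    "manager": "management employee",
    "physician": "physician",
    "nurse": "nurse",
}

def categorize(description):
    """Map a free-text subject description to an (entity, role) pair:
    the setting in which care was provided and the subject's role in
    providing it."""
    text = description.lower()
    entity = next((label for kw, label in ENTITY_KEYWORDS.items() if kw in text),
                  "undetermined")
    role = next((label for kw, label in ROLE_KEYWORDS.items() if kw in text),
                "entity itself")
    return entity, role

owner = categorize("Owner of a durable medical equipment supply company")
doc = categorize("Physician employed by a hospital")
```

This mirrors the two examples in the text: the company owner maps to (durable medical equipment supplier, management employee), and the hospital physician maps to (hospital, physician); descriptions with no match fall back to "undetermined" or "entity itself".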
Using data about MFCUs collected by the Department of Health and Human Services’ Office of Inspector General (HHS-OIG), we selected the 10 state MFCUs that collectively accounted for the majority of open fraud investigations, fraud indictments or charges, fraud convictions, MFCU grant expenditures, and number of MFCU staff for all MFCUs in fiscal year 2010. The state MFCUs we selected also represented over 40 percent of the civil settlements and judgments—though we were not able to analyze fraud-specific civil settlements and judgments because the HHS-OIG data available do not separate out fraud settlements and judgments from abuse and neglect case settlements and judgments. The 10 selected MFCUs were in California, Florida, Illinois, Indiana, Louisiana, Massachusetts, New York, Ohio, Texas, and Virginia. The 10 selected MFCUs accounted for 66 percent of MFCU grant expenditures. (See table 19 for additional information about the MFCUs.) We collected data from the state MFCUs by developing a standardized data-collection instrument based on the HHS-OIG’s Quarterly Statistical MFCU Report Template and accompanying definitions. (See table 20 for additional information about the definitions for the categories of provider types.) Before finalizing the data-collection instrument, we asked officials from two MFCUs to review the instrument to determine if the instrument would elicit appropriate responses, and to identify any data that would be particularly challenging for a MFCU to provide. We also interviewed officials from the Centers for Medicare & Medicaid Services, the HHS-OIG’s Office of Evaluation and Inspections, and the National Association of MFCUs to obtain information on fraud cases handled by the MFCUs. We collected data for closed health care fraud cases only—meaning that agencies were no longer actively investigating or prosecuting a case—to avoid concerns about analyzing or reporting information about open cases. 
We requested data from the state MFCUs for any actions—such as indictments, convictions, or penalties—that occurred on a subject’s fraud case in 2005 or 2010. For example, if a subject was indicted in 2004 and sentenced in 2005, the MFCU data would only include information about the subject’s sentencing in 2005, because the indictment occurred in a year outside of our data request. We requested aggregate subject-level data, rather than case-level data, from the MFCUs using a standardized data-collection instrument. The MFCUs reported information on the total number of fraud subjects they investigated and prosecuted, and did not provide detailed information for each instance of fraud. Because the state MFCUs may work together on certain cases that cross state lines, it is possible that duplicate data are included in our analysis. We relied on the data as reported by the 10 MFCUs and did not independently verify these data. However, we reviewed the data for reasonableness and followed up with state officials for clarification when necessary. We found that these data were sufficiently reliable for the purposes of our report. In addition to the contact named above, key contributors to this report were Martin T. Gahart, Assistant Director; Christie Enders; Jawaria Gilani; Dan Lee; Drew Long; Dawn Nelson; and Monica Perez-Nelson. | GAO has designated Medicare and Medicaid--which are administered by the Centers for Medicare & Medicaid Services (CMS), an agency of HHS--as high-risk programs partly because their size and complexity make them vulnerable to fraud. Several federal agencies conduct health care fraud investigations and related activities, including HHS-OIG and DOJ's Civil Division, and the 93 U.S. Attorney's Offices (USAO). In fiscal year 2011, the federal government devoted at least $608 million to conduct such activities. Additionally, state MFCUs investigate health care fraud in their state's Medicaid and CHIP programs. 
GAO was asked to provide information on the types of providers that are the subjects of fraud cases. This report identifies provider types who were the subjects of fraud cases in (1) Medicare, Medicaid, and CHIP that were handled by federal agencies, and changes in the types of providers in 2005 and 2010; and (2) Medicaid and CHIP fraud cases that were handled by MFCUs. To identify subjects of fraud cases handled by federal agencies, GAO combined data from three agency databases--HHS-OIG, USAOs, and DOJ's Civil Division--and removed duplicate subject data. GAO also reviewed public court records, such as indictments, to identify subjects' provider types because the USAOs and DOJ Civil Division data did not consistently include provider type. To describe providers involved in fraud cases handled by the MFCUs, GAO collected aggregate data from 10 state MFCUs, which represented the majority of fraud investigations, indictments, and convictions nationwide. According to 2010 data from the Department of Health and Human Services' Office of the Inspector General (HHS-OIG) and the Department of Justice (DOJ), 10,187 subjects--individuals and entities involved in fraud cases--were investigated for health care fraud, including fraud in Medicare, Medicaid, and the Children's Health Insurance Program (CHIP). These subjects included different types of providers and suppliers--such as physicians, hospitals, durable medical equipment suppliers, home health agencies, and pharmacies--that serve Medicare, Medicaid, and CHIP beneficiaries. For criminal cases in 2010, medical facilities--including medical centers, clinics, or practices--and durable medical equipment suppliers were the most-frequent subjects investigated. Hospitals and medical facilities were the most-frequent subjects investigated in civil fraud cases, including cases that resulted in judgments or settlements. 
Subjects of criminal cases: Many of the 7,848 criminal subjects in 2010 were medical facilities or durable medical equipment suppliers, representing about 40 percent of subjects of criminal cases. Similarly, in 2005, medical facilities and durable medical equipment suppliers accounted for 41 percent of criminal case subjects. Data from 2010 show that most of the subjects were in cases that were not referred by HHS-OIG to DOJ for prosecution (85 percent). Of the subjects whose cases were pursued, most were found guilty or pled guilty or no contest. Subjects of civil cases: Over one-third of the 2,339 subjects of civil cases in 2010 were hospitals and medical facilities. In 2010, about 35 percent more subjects were investigated in civil fraud cases than in 2005. Nearly half of the subjects of 2010 cases were pursued. Among the subjects whose cases were pursued, 55 percent resulted in judgments or settlements. Additionally, data from HHS-OIG show that nearly 2,200 individuals and entities were excluded from program participation for health care fraud convictions and other reasons, including license revocation and program-related convictions. About 60 percent of those individuals and entities excluded were in the nursing profession. Pharmacies or individuals affiliated with pharmacies were the next-largest provider type excluded, representing about 7 percent of those excluded. According to data GAO collected from 10 state Medicaid Fraud Control Units (MFCU), over 40 percent of the 2,742 subjects investigated for health care fraud in Medicaid and CHIP in 2010 were home health care providers and health care practitioners. Of the criminal cases pursued by these MFCUs, home health care providers comprised nearly 40 percent of criminal convictions and 45 percent of subjects sentenced in 2010. Civil health care fraud cases pursued by these MFCUs in 2010 resulted in judgments and settlements totaling nearly $829 million. 
Pharmaceutical manufacturers were ordered to pay more than 60 percent ($509 million) of the total amount of civil judgments and settlements. GAO provided a draft of the report to DOJ and HHS. DOJ provided technical comments, which have been incorporated as appropriate. |
SEC is an independent agency created to protect investors; maintain fair, honest, and efficient securities markets; and facilitate capital formation. SEC’s five-member Commission oversees SEC’s operations and provides final approval of SEC’s interpretation of federal securities laws, proposals for new or amended rules to govern securities markets, and enforcement activities. Enforcement staff located in headquarters and 11 regional offices conduct investigations through informal inquiries, interviews of witnesses, examination of brokerage records, reviews of trading data, and other methods. At the request of Enforcement staff, the Commission may issue a formal order of investigation, which allows the division’s staff to compel witnesses by subpoena to testify and produce books, records, and other documents. Following an investigation, SEC staff present their findings to the Commission for its review, recommending Commission action either in a federal court or before an administrative law judge. On finding that a defendant has violated securities laws, the court or the administrative law judge can issue a judgment ordering remedies, such as civil monetary penalties and disgorgement. In many cases, the Commission and the party charged decide to settle a matter without trial. In these instances, Enforcement staff negotiates settlements on behalf of the Commission. Total Enforcement staffing has declined 4.4 percent, from a peak of 1,169 positions in fiscal year 2005 to 1,117 positions in fiscal year 2008. While overall Enforcement resources and activities have remained relatively level in recent years, the number of non-supervisory investigative attorneys, who have primary responsibility for developing enforcement cases, decreased by 11.5 percent, from a peak of 566 in fiscal year 2004 to 501 in fiscal year 2008. 
Enforcement management attributed this greater decline to several factors: promotion of staff attorneys into management during a hiring freeze, which left their former positions vacant; diversion of investigative positions to other functions; and reduction of opportunities for non-attorney support staff to move to positions outside the agency. At the same time, staff turnover has decreased and staff tenure increased. The majority of Enforcement’s non-supervisory attorney workforce has 10 years of experience or less, but the distribution of experience in this category has reversed in recent years. The portion with less than 3 years of experience has declined by about 50 percent, and the portion with 3 to less than 10 years of experience has increased by about 55 percent. The portion with 10 to less than 15 years, while small overall, has grown by about 14 percent. Enforcement management welcomed these trends, but believed they resulted from a weaker private-sector job market for attorneys. They felt that had market conditions been better recently, departures would have been more numerous, which would have depressed the experience level. Measured by the number of enforcement cases opened and number of enforcement actions brought annually, Enforcement activity has been relatively level in recent years. Case backlog has declined somewhat as the division has made case closings a greater priority. Nevertheless, Enforcement management and investigative attorneys agreed that resource challenges have affected their ability to bring enforcement actions effectively and efficiently. Enforcement management told us that the current level of resources has not prevented the division from continuing to bring cases across a range of violations. But management and staff acknowledged that current staffing levels mean some worthwhile leads cannot be pursued, and some cases are closed without action earlier than they otherwise would have been. 
More specifically, investigative attorneys cited the low level of administrative, paralegal, and information technology support, unavailability of specialized services and expertise, and a burdensome system for internal case review as causing significant delays in bringing cases, reducing the number of cases that can be brought, and potentially undermining the quality of cases. Enforcement management concurred with the staff’s observations that resource challenges undercut enforcement efforts. Effective and efficient use of resources is important to accomplishing Enforcement’s mission. SEC’s strategic plan calls for targeting resources strategically, examining whether positions are deployed effectively, and exploring how to improve program design and organizational structure. Some attorneys with whom we spoke estimated that they spend as much as a third to 40 percent of their time on the internal review process. Recently, Enforcement management has begun efforts that seek to streamline the case review process. The initiative focuses on process, but our review suggests that organizational culture issues, such as risk aversion and incentives to drop cases or narrow their scope, are also present. If the division does not consider such issues in its initiative, it may not be as successful as it otherwise could be. Enforcement staff consider a number of factors when determining the dollar amounts of penalties and disgorgements, which in total have declined in recent years. To determine a penalty in an individual case, Enforcement staff consider factors such as the nature of the violation, egregiousness of conduct, cooperation by the defendant, remedial actions taken, and ability to pay. Disgorgement is intended to recover ill-gotten gains made, or losses avoided, through a defendant’s actions. In 2006 and 2007, the Commission articulated certain policies for determining the appropriateness and size of corporate penalties. 
The 2006 policy—which the Commission said was based in part on the legislative history of a 1990 act that provided SEC with civil penalty authority—established nine factors for evaluating imposition of corporate penalties, but said two were of primary importance: (1) direct benefit to the corporation and (2) additional harm to shareholders. The 2007 policy, now discontinued, required Enforcement staff, when contemplating a corporate penalty, to obtain Commission approval of a penalty range before settlement discussions could begin. Cases that subsequently were settled within the range specified by the Commission were eligible for approval on an expedited basis. At the same time the Commission provided the settlement range, it also granted Enforcement staff authority to sue. According to Enforcement staff and former commissioners with whom we spoke, and as stated by the then-Chairman, the purpose of the policy, also known as the “pilot program,” was to: provide earlier Commission involvement in the penalty process; strengthen Enforcement staff’s negotiating position; and maintain consistency, accountability, and due process. Setting aside the effect of the implementation of any policy, the total amount of penalties and disgorgement ordered on an annual basis can vary according to the type and magnitude of cases concluded in a given period. As shown in figure 1, since reaching peaks in fiscal years 2005 and 2006, total annual penalty and disgorgement amounts have declined. While both penalties and disgorgements fell in recent years, penalties have been declining at an accelerating rate, falling 39 percent in fiscal year 2006, another 48 percent in fiscal year 2007, and then 49 percent in fiscal year 2008. Also, penalties declined in the aggregate by a greater amount than disgorgements. In particular, penalties fell 84 percent, from a peak of $1.59 billion in fiscal year 2005 to $256 million in fiscal year 2008. 
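The year-over-year penalty declines cited above are internally consistent with the overall drop, as a quick arithmetic check shows. This sketch uses only the figures stated in the text.

```python
# Check that the reported year-over-year penalty declines (39% in
# FY2006, 48% in FY2007, 49% in FY2008) compound to the overall
# 84 percent drop from the FY2005 peak of $1.59 billion to roughly
# $256 million in FY2008. Figures are from the text.

peak_fy2005 = 1.59e9
yearly_declines = [0.39, 0.48, 0.49]  # FY2006, FY2007, FY2008

amount = peak_fy2005
for decline in yearly_declines:
    amount *= 1 - decline  # apply each year's percentage decline

overall_decline = 1 - amount / peak_fy2005  # fraction lost since peak
```

Compounding the three annual declines leaves roughly $257 million, matching the reported FY2008 total of $256 million to within rounding, and an overall decline of about 84 percent.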
Disgorgements fell 68 percent, from a peak of $2.4 billion in fiscal year 2006 to $774.2 million in fiscal year 2008. Compared to fiscal year 2006, SEC brought more corporate penalty cases in fiscal 2007, but for smaller amounts. In 2007, SEC brought 10 cases, compared to 6 in 2006. Four of the six cases in 2006 resulted in penalties of $50 million or more, with the two largest, American International Group, Inc. and Fannie Mae, totaling $100 million and $400 million, respectively. In contrast, in the fiscal year 2007 cases, only two issuers, MBIA, Inc., and Freddie Mac, were assessed penalties of at least $50 million. The distribution of enforcement actions by type of case generally has been consistent in recent years. Enforcement management said that the division has met its goal that a single category of cases not account for more than 40 percent of all actions. We found that Enforcement management, investigative attorneys, and others concurred that the 2006 and 2007 penalty policies, as applied, have delayed cases and produced fewer and smaller corporate penalties. On their face, the penalty policies are neutral, in that they neither encourage nor discourage corporate penalties. However, Enforcement management and many investigative attorneys and others said that Commission handling of cases under the policies both transmitted a message that corporate penalties were highly disfavored and caused there to be fewer and smaller corporate penalties. According to a number of Enforcement attorneys and division managers, investigative attorneys began avoiding recommendations for corporate penalties. For example, when the question of whether to seek a corporate penalty is a close one, the staff will default to avoiding the penalty. Or, if investigative staff decides to seek a penalty, they will change their focus from pursuing what they otherwise would recommend as most appropriate to tailoring recommendations to what they believe the Commission will find acceptable. 
According to many investigative attorneys, the penalty policies contributed to an adversarial relationship between Enforcement and the Commission, where some investigative attorneys came to see the Commission less as an ally and instead more as a barrier to bringing enforcement actions. Enforcement management told us they concurred with these observations about the effect of the application of the penalty policies. Although the Commission never directed there be fewer or smaller penalties, the officials said this has been the practical effect because Commission handling of cases made obtaining corporate penalties more difficult. Over time, the officials said they struggled with implementation and were unable to provide guidance to the staff, because they saw the Commission’s application of the penalty factors as inconsistent. Furthermore, the widely held view in Enforcement was that the unstated purpose of the 2006 policy was to scale back corporate penalties. Our review identified several other concerns voiced by Enforcement staff and others: that the policies have had the effect of making penalties less punitive in nature—by conditioning corporate penalties in large part on whether a corporation benefited from improper practices, penalties effectively become more like disgorgement; that the 2007 policy (Commission pre-approval of a settlement range) could have led to less-informed decisions about corporate penalties, since the Commission would decide on a penalty range in advance of settlement discussions, when settlement discussions themselves can reveal relevant information about the conduct of the wrongdoer; that the policies have reduced incentives for subjects of enforcement actions to cooperate with the agency, because of the perception that SEC has retreated on penalties; and that it became more difficult to obtain formal orders of investigation, which allow issuance of subpoenas to compel testimony and the production of books and records. 
Since fiscal year 2005, the number of formal orders approved by the Commission has decreased 14 percent. Our review also showed that in adopting and implementing the 2006 and 2007 corporate penalty policies, the Commission did not act in concert with agency strategic goals calling for broad communication with, and involvement of, the staff. In particular, Enforcement, which is responsible for implementing the policies, had only limited input into their development. According to Enforcement management, the broad Enforcement staff had no input into either policy. Senior division management did have input into the 2006 policy, but none into the 2007 policy. As a result, Enforcement attorneys say there has been frustration and uncertainty about application of the penalty policies. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee might have. For further information on this testimony, please contact Orice M. Williams at (202) 512-8678 or [email protected], or Richard J. Hillman at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Karen Tremba, Assistant Director and Christopher Schmitt. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| In recent years, questions have been raised about the capacity of the Securities and Exchange Commission's (SEC) Division of Enforcement (Enforcement) to manage its resources and fulfill its law enforcement and investor protection responsibilities. This testimony focuses on (1) the extent to which Enforcement has an appropriate mix of resources; (2) considerations affecting penalty determinations, and recent trends in penalties and disgorgements ordered; and (3) the adoption, implementation, and effects of recent penalty policies. The testimony is based on the GAO report, Securities and Exchange Commission: Greater Attention Needed to Enhance Communication and Utilization of Resources in the Division of Enforcement (GAO-09-358, March 31, 2009). For this work, GAO analyzed information on resources, enforcement actions, and penalties; and interviewed current and former SEC officials and staff, and others. Recent overall Enforcement resources and activities have been relatively level, but the number of investigative attorneys decreased 11.5 percent between fiscal years 2004 and 2008. Enforcement management said resource levels have allowed them to continue to bring cases across a range of violations, but both management and staff said resource challenges have delayed cases, reduced the number of cases that can be brought, and potentially undermined the quality of some cases. Specifically, investigative attorneys cited the low level of administrative, paralegal, and information technology support, and unavailability of specialized services and expertise, as challenges to bringing actions. Also, Enforcement staff said a burdensome system for internal case review has slowed cases and created a risk-averse culture. SEC's strategic plan calls for targeting resources strategically, examining whether positions are deployed effectively, and improving program design and organizational structure. 
Enforcement management has begun examining ways to streamline case review, but the focus is process-oriented and does not consider organizational culture issues. A number of factors can affect the amount of a penalty or disgorgement that Enforcement staff seek in any individual enforcement action, such as nature of the violation, egregiousness of conduct, cooperation by the defendant, remedial actions taken, and ability to pay. In 2006, the Commission adopted a policy that focuses on two factors for determining corporate penalties: the economic benefit derived from wrongdoing and the effect a penalty might have on shareholders. In 2007, the Commission adopted a policy, now discontinued, that required Commission approval of penalty ranges before settlement discussions. Setting aside the effect of any policies, total penalty and disgorgement amounts can vary on an annual basis based on the mix of cases concluded in a particular period. Overall, penalties and disgorgements ordered have declined significantly since the 2005-2006 period. Total annual penalties fell 84 percent, from a peak of $1.59 billion in fiscal year 2005 to $256 million in fiscal year 2008. Disgorgements fell 68 percent, from a peak of $2.4 billion in fiscal year 2006 to $774.2 million in fiscal year 2008. Enforcement management, investigative attorneys, and others agreed that the two recent corporate penalty policies--on factors for imposing penalties, and Commission pre-approval of a settlement range--have delayed cases and produced fewer, smaller penalties. GAO also identified other concerns, including the perception that SEC had "retreated" on penalties, and made it more difficult for investigative staff to obtain "formal orders of investigation," which allow issuance of subpoenas for testimony and records. 
Our review also showed that in adopting and implementing the penalty policies, the Commission did not act in concert with agency strategic goals calling for broad communication with, and involvement of, the staff. In particular, Enforcement had limited input into the policies the division would be responsible for implementing. As a result, Enforcement attorneys reported frustration and uncertainty in application of the penalty policies. |